WorldWideScience

Sample records for fixed scale approach

  1. Fixed mass and scaling sum rules

    International Nuclear Information System (INIS)

    Ward, B.F.L.

    1975-01-01

Using the correspondence principle (continuity in dynamics), the approach of Keppell-Jones-Ward-Taha to fixed mass and scaling current algebraic sum rules is extended so as to consider explicitly the contributions of all classes of intermediate states. A natural, generalized formulation of the truncation ideas of Cornwall, Corrigan, and Norton is introduced as a by-product of this extension. The formalism is illustrated in the familiar case of the spin-independent Schwinger term sum rule. New sum rules are derived which relate the Regge residue functions of the respective structure functions to their fixed hadronic mass limits for q² → infinity. (Auth.)

  2. On BLM scale fixing in exclusive processes

    International Nuclear Information System (INIS)

    Anikin, I.V.; Pire, B.; Szymanowski, L.; Teryaev, O.V.; Wallon, S.

    2005-01-01

We discuss the BLM scale-fixing procedure in exclusive electroproduction processes in the Bjorken regime with rather large x_B. We show that in the case of vector meson production, which is dominated in this regime by quark exchange, the usual way of applying the BLM method fails due to singularities present in the equations fixing the BLM scale. We argue that the BLM scale should be extracted from the squared amplitudes, which are directly related to observables. (orig.)

  3. On BLM scale fixing in exclusive processes

    Energy Technology Data Exchange (ETDEWEB)

    Anikin, I.V. [JINR, Bogoliubov Laboratory of Theoretical Physics, Dubna (Russian Federation); Universite Paris-Sud, LPT, Orsay (France); Pire, B. [Ecole Polytechnique, CPHT, Palaiseau (France); Szymanowski, L. [Soltan Institute for Nuclear Studies, Warsaw (Poland); Univ. de Liege, Inst. de Physique, Liege (Belgium); Teryaev, O.V. [JINR, Bogoliubov Laboratory of Theoretical Physics, Dubna (Russian Federation); Wallon, S. [Universite Paris-Sud, LPT, Orsay (France)

    2005-07-01

We discuss the BLM scale-fixing procedure in exclusive electroproduction processes in the Bjorken regime with rather large x_B. We show that in the case of vector meson production, which is dominated in this regime by quark exchange, the usual way of applying the BLM method fails due to singularities present in the equations fixing the BLM scale. We argue that the BLM scale should be extracted from the squared amplitudes, which are directly related to observables. (orig.)

  4. An overview of CFD modelling of small-scale fixed-bed biomass pellet boilers with preliminary results from a simplified approach

    International Nuclear Information System (INIS)

    Chaney, Joel; Liu Hao; Li Jinxing

    2012-01-01

Highlights: ► Overview of the overall approach of modelling fixed-bed biomass boilers in CFD. ► Bed sub-models of moisture evaporation, devolatilisation and char combustion reviewed. ► A method of embedding a combustion model in discrete fuel zones within the CFD is suggested. ► Includes a sample of preliminary results for a 50 kW pellet boiler. ► Clear physical trends predicted. - Abstract: The increasing global energy demand and mounting pressure for CO₂ mitigation call for more efficient utilization of biomass, particularly for heating domestic and commercial buildings. The authors of the present paper are investigating the optimization of the combustion performance and NOx emissions of a 50 kW biomass pellet boiler fabricated by a UK manufacturer. The boiler has a number of adjustable parameters, including the ratio of the air flow split between the primary and secondary supplies and the orientation, height, direction and number of the secondary inlets. The optimization of these parameters provides opportunities to improve both the combustion efficiency and NOx emissions. When used carefully in conjunction with experiments, Computational Fluid Dynamics (CFD) modelling is a useful tool for examining, rapidly and at minimum cost, the combustion performance and emissions of a boiler with multiple variable parameters. However, modelling the combustion and emissions of a small-scale biomass pellet boiler is not trivial, and appropriate fixed-bed models that can be coupled with the CFD code are required. This paper reviews previous approaches specifically relevant to simulating fixed-bed biomass boilers. In the first part it considers approaches to modelling the heterogeneous solid phase and coupling this with the gas phase. The essential components of the sub-models are then overviewed. Importantly, for the optimization process a model is required that strikes a good balance between accuracy in predicting physical trends and low computational run time. Finally, a

  5. Fixed Point Approach to Bagley Torvik Problem

    Directory of Open Access Journals (Sweden)

    Lale CONA

    2017-10-01

In the present paper, a sufficient condition for the existence and uniqueness of a solution of the Bagley–Torvik problem is obtained, and the corresponding existence and uniqueness theorem is established. This approach permits us to use the fixed point iteration method to solve the problem for a differential equation involving derivatives of fractional order.
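The fixed point iteration the abstract refers to can be illustrated with a generic sketch. This is illustrative only: the paper's actual operator for the Bagley–Torvik equation is not reproduced here, and `g` below is a standard contraction-mapping example.

```python
import math

# Generic fixed point iteration x_{n+1} = g(x_n); converges when g
# is a contraction. Hypothetical example -- not the operator used
# in the paper for the Bagley-Torvik problem.
def fixed_point(g, x0, tol=1e-10, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("fixed point iteration did not converge")

# Classic example: the unique fixed point of cos(x).
root = fixed_point(math.cos, 1.0)
```

The same loop applies to any contraction on a complete metric space, which is the setting the existence and uniqueness theorem works in.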

  6. A comprehensive small and pilot-scale fixed-bed reactor approach for testing Fischer–Tropsch catalyst activity and performance on a BTL route

    Directory of Open Access Journals (Sweden)

    Piyapong Hunpinyo

    2017-05-01

Ruthenium (Ru)-based catalysts were prepared by the sol–gel technique for biomass-to-liquid (BTL) operation and had their performance tested under different conditions. The catalytic study was carried out in two steps using a simple and reliable method. In the first step, the effects of reaction temperature and of the inlet H2/CO molar feed ratio obtained from biomass gasification on the catalyst performance were investigated. A set of experimental results obtained in a laboratory fixed bed reactor was described and summarized. Moreover, a simplified Langmuir–Hinshelwood–Hougen–Watson (LHHW) kinetic model was proposed with two promising variants, in which the surface decomposition of carbon monoxide was assumed to be the rate-determining step (RDS). In the second step, runs in a FT pilot plant were conducted to validate the catalyst performance, especially the conversion efficiency, heat and mass transfer effects, and system controllability. The results indicated that our catalyst's performance under mild conditions was not significantly different in many regards from that previously reported for severe conditions, especially as the Ru-based catalyst can be operated over a wide range of conditions to yield specific liquid productivity. The hydrocarbon product distributions obtained from the pilot-scale operations were similar to those obtained from the related lab-scale experiments.

  7. Renormalization Scale-Fixing for Complex Scattering Amplitudes

    Energy Technology Data Exchange (ETDEWEB)

    Brodsky, Stanley J.; /SLAC; Llanes-Estrada, Felipe J.; /Madrid U.

    2005-12-21

    We show how to fix the renormalization scale for hard-scattering exclusive processes such as deeply virtual meson electroproduction by applying the BLM prescription to the imaginary part of the scattering amplitude and employing a fixed-t dispersion relation to obtain the scale-fixed real part. In this way we resolve the ambiguity in BLM renormalization scale-setting for complex scattering amplitudes. We illustrate this by computing the H generalized parton distribution at leading twist in an analytic quark-diquark model for the parton-proton scattering amplitude which can incorporate Regge exchange contributions characteristic of the deep inelastic structure functions.

  8. Impulsive differential inclusions a fixed point approach

    CERN Document Server

    Ouahab, Abdelghani; Henderson, Johnny

    2013-01-01

Impulsive differential equations have been developed for modeling impulsive problems in physics, population dynamics, ecology, biotechnology, industrial robotics, pharmacokinetics, optimal control, etc. The questions of existence and stability of solutions for different classes of initial value problems for impulsive differential equations and inclusions with fixed and variable moments are considered in detail. Attention is also given to boundary value problems and related questions concerning differential equations. This monograph addresses a variety of side issues that arise from its simple

  9. An approach for fixed coefficient RNS-based FIR filter

    Science.gov (United States)

    Srinivasa Reddy, Kotha; Sahoo, Subhendu Kumar

    2017-08-01

In this work, an efficient new modular multiplication method for the {2^k − 1, 2^k, 2^(k+1) − 1} moduli set is proposed to implement a residue number system (RNS)-based fixed coefficient finite impulse response (FIR) filter. The new multiplication approach reduces the number of partial products by using a pre-loaded product block. The reduction in partial products with the proposed modular multiplication improves the clock frequency and reduces the area and power as compared with conventional modular multiplication. Further, the present approach eliminates the binary-to-residue converter circuit that is usually needed at the front end of an RNS-based system. In this work, two fixed coefficient filter architectures with the new modular multiplication approach are proposed. The filters are implemented using the Verilog hardware description language. The United Microelectronics Corporation 90 nm technology library has been used for synthesis, and the area, power and delay results are obtained with the help of the Cadence register transfer level compiler. The power-delay product (PDP) is also considered for performance comparison among the proposed filters. One of the proposed architectures is found to improve the PDP by 60.83% as compared with the filter implemented with a conventional modular multiplier. The filters' functionality is validated with the help of Altera DSP Builder.
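The residue-number-system idea behind this filter can be sketched in software (illustrative only; the paper's contribution is a pre-loaded-product hardware multiplier, which is not reproduced here, and `k = 4` is an arbitrary choice):

```python
# RNS sketch for the moduli set {2^k - 1, 2^k, 2^(k+1) - 1}.
# Software model only; not the paper's hardware architecture.
def moduli(k):
    return (2**k - 1, 2**k, 2**(k + 1) - 1)

def to_rns(x, ms):
    # forward (binary-to-residue) conversion
    return tuple(x % m for m in ms)

def rns_mul(a, b, ms):
    # each residue channel multiplies independently, which is what
    # allows parallel, carry-free hardware channels
    return tuple((ra * rb) % m for ra, rb, m in zip(a, b, ms))

k = 4
ms = moduli(k)       # (15, 16, 31); dynamic range 15*16*31 = 7440
x, y = 123, 45       # x*y = 5535 fits within the dynamic range
prod_rns = rns_mul(to_rns(x, ms), to_rns(y, ms), ms)
assert prod_rns == to_rns(x * y, ms)
```

Because the three moduli are pairwise coprime, any product inside the dynamic range is uniquely represented by its residues.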

  10. An Integer Programming Approach to Solving Tantrix on Fixed Boards

    Directory of Open Access Journals (Sweden)

    Yushi Uno

    2012-03-01

Tantrix (Tantrix® is a registered trademark of Colour of Strategy Ltd. in New Zealand, and of TANTRIX JAPAN in Japan, respectively, under the license of M. McManaway, the inventor.) is a puzzle in which the goal is to make a loop by connecting lines drawn on hexagonal tiles, and the objective of this research is to solve it by computer. For this purpose, we first pose the problem of solving Tantrix as that of making a loop on a given fixed board. We then formulate it as an integer program by describing the rules of Tantrix as constraints, and solve it with a mathematical programming solver to obtain a solution. As a result, we establish a formulation that can solve Tantrix of moderate size, and when the elementary constraints alone yield invalid solutions, we introduce additional constraints and re-solve to obtain valid ones. By this approach we succeeded in solving Tantrix instances of size up to 60.

  11. Small-scale fixed wing airplane software verification flight test

    Science.gov (United States)

    Miller, Natasha R.

The increased demand for micro Unmanned Air Vehicles (UAVs), driven by military requirements, commercial use, and academia, is creating a need for the ability to quickly and accurately conduct low Reynolds number aircraft design. There exist several open source software programs, free or inexpensive, that can be used for large-scale aircraft design, but few software programs target the realm of low Reynolds number flight. XFLR5 is an open source, free-to-download software program that attempts to take into consideration the viscous effects that occur at low Reynolds number in airfoil design, 3D wing design, and 3D airplane design. An off-the-shelf, remote control airplane was used as a test bed, modelled in XFLR5 and then compared to data collected in flight tests. Flight testing focused on the stability modes of the 3D plane, specifically the phugoid mode. Design and execution of the flight tests were accomplished for the RC airplane using methodology from full-scale military airplane test procedures. Results from flight testing were not conclusive in determining the accuracy of the XFLR5 software program. There were several sources of uncertainty that did not allow for a full analysis of the flight test results. An off-the-shelf drone autopilot was used as a data collection device for flight testing. The precision and accuracy of the autopilot are unknown. Potential future work should investigate flight test methods for small-scale UAV flight.

  12. A New Approach for the Approximations of Solutions to a Common Fixed Point Problem in Metric Fixed Point Theory

    Directory of Open Access Journals (Sweden)

    Ishak Altun

    2016-01-01

We provide sufficient conditions for the existence of a unique common fixed point for a pair of mappings T,S:X→X, where X is a nonempty set endowed with a certain metric. Moreover, a numerical algorithm is presented in order to approximate such a solution. Our approach differs from the methods usually used in the literature.

  13. Multidisciplinary approach of ectodermal dysplasia with implant retained fixed prosthesis

    Directory of Open Access Journals (Sweden)

    Vishnu Priya

    2013-01-01

Ectodermal dysplasia represents a group of rare inherited conditions in which two or more ectodermally derived anatomical structures fail to develop. Early dental intervention can improve the patient's appearance, thereby minimizing the associated emotional and psychological problems in these patients. Treatment requires teamwork by medical personnel along with dental professionals of various specialties. Here, a rare case of a young female patient managed with an implant-supported fixed partial denture is presented.

  14. Functional Genomics Approaches to Studying Symbioses between Legumes and Nitrogen-Fixing Rhizobia.

    Science.gov (United States)

    Lardi, Martina; Pessi, Gabriella

    2018-05-18

    Biological nitrogen fixation gives legumes a pronounced growth advantage in nitrogen-deprived soils and is of considerable ecological and economic interest. In exchange for reduced atmospheric nitrogen, typically given to the plant in the form of amides or ureides, the legume provides nitrogen-fixing rhizobia with nutrients and highly specialised root structures called nodules. To elucidate the molecular basis underlying physiological adaptations on a genome-wide scale, functional genomics approaches, such as transcriptomics, proteomics, and metabolomics, have been used. This review presents an overview of the different functional genomics approaches that have been performed on rhizobial symbiosis, with a focus on studies investigating the molecular mechanisms used by the bacterial partner to interact with the legume. While rhizobia belonging to the alpha-proteobacterial group (alpha-rhizobia) have been well studied, few studies to date have investigated this process in beta-proteobacteria (beta-rhizobia).

  15. Stochastic Discount Factor Approach to International Risk-Sharing: Evidence from Fixed Exchange Rate Episodes

    NARCIS (Netherlands)

    Hadzi-Vaskov, M.; Kool, C.J.M.

    2007-01-01

    This paper presents evidence of the stochastic discount factor approach to international risk-sharing applied to fixed exchange rate regimes. We calculate risk-sharing indices for two episodes of fixed or very rigid exchange rates: the Eurozone before and after the introduction of the Euro, and

  16. arXiv A Simple No-Scale Model of Modulus Fixing and Inflation

    CERN Document Server

    Ellis, John; Romano, Antonio Enea; Zapata, Oscar

We construct a no-scale model of inflation with a single modulus whose real and imaginary parts are fixed by simple power-law corrections to the no-scale Kähler potential. Assuming an uplift of the minimum of the effective potential, the model yields a suitable number of e-folds of expansion and values of the tilt in the scalar cosmological density perturbations and of the ratio of tensor and scalar perturbations that are compatible with measurements of the cosmic microwave background radiation.

  17. Fixing the EW scale in supersymmetric models after the Higgs discovery

    CERN Document Server

    Ghilencea, D M

    2013-01-01

TeV-scale supersymmetry was originally introduced to solve the hierarchy problem and therefore fix the electroweak (EW) scale in the presence of quantum corrections. Numerical methods testing SUSY models often report a good likelihood L (or chi^2 = -2 ln L) to fit the data, including the EW scale itself (m_Z^0), with a simultaneously large fine-tuning, i.e. a large variation of this scale under a small variation of the SUSY parameters. We argue that this is inconsistent and we identify the origin of the problem. Our claim is that the likelihood (or chi^2) to fit the data that is usually reported in such models does not account for the chi^2 cost of fixing the EW scale. When this constraint is implemented, the likelihood (or chi^2) receives a significant correction (delta_chi^2) that worsens the current data fits of SUSY models. We estimate this correction for the models: constrained MSSM (CMSSM), models with non-universal gaugino masses (NUGM) or Higgs soft masses (NUHM1, NUHM2), the NMSSM and the ...

  18. Finite size scaling of the Higgs-Yukawa model near the Gaussian fixed point

    Energy Technology Data Exchange (ETDEWEB)

    Chu, David Y.J.; Lin, C.J. David [National Chiao-Tung Univ., Hsinchu, Taiwan (China); Jansen, Karl [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Knippschild, Bastian [HISKP, Bonn (Germany); Nagy, Attila [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Humboldt-Univ. Berlin (Germany)

    2016-12-15

    We study the scaling properties of Higgs-Yukawa models. Using the technique of Finite-Size Scaling, we are able to derive scaling functions that describe the observables of the model in the vicinity of a Gaussian fixed point. A feasibility study of our strategy is performed for the pure scalar theory in the weak-coupling regime. Choosing the on-shell renormalisation scheme gives us an advantage to fit the scaling functions against lattice data with only a small number of fit parameters. These formulae can be used to determine the universality of the observed phase transitions, and thus play an essential role in future investigations of Higgs-Yukawa models, in particular in the strong Yukawa coupling region.

  19. Evaluation on Optimal Scale of Rural Fixed-asset Investment-Based on Microcosmic Perspective of Farmers’ Income Increase

    Institute of Scientific and Technical Information of China (English)

Jinqian DENG; Kangkang SHAN; Yan ZHANG

    2014-01-01

Rural fundamental and productive fixed-asset investment not only actively influences changes in farmers' operational, wage and property income, but also has an optimal scale range for farmers' income increase. From the perspective of farmers' income increase, this article evaluates the optimal scale of rural fixed-asset investment by setting up a model with statistical data, and the results show that the optimal scale of per capita rural fixed-asset investment is 76.35% of the per capita net income of rural residents, a level that China reached in 2009. Therefore, compared with further increasing rural fixed-asset investment, a better income-increase effect can be achieved through adjustment of the rural fixed-asset investment structure.

  20. T-Fix endoscopic meniscal repair: technique and approach to different types of tears.

    Science.gov (United States)

    Barrett, G R; Richardson, K; Koenig, V

    1995-04-01

Endoscopic meniscus repair using the T-Fix suture device (Acufex Microsurgical, Inc, Mansfield, MA) allows ease of suture placement for meniscus stability without the problems associated with ancillary incisions, such as neurovascular compromise. It is ideal for central posterior horn tears, which are difficult to repair using conventional techniques. Vertical tears, bucket handle tears, flap tears, and horizontal tears can be approached using a temporary "anchor stitch" to stabilize the meniscus before T-Fix repair. The basic method of repair and our approach to these different types of tears are presented.

  1. Calculation and Identification of the Aerodynamic Parameters for Small-Scaled Fixed-Wing UAVs

    Directory of Open Access Journals (Sweden)

    Jieliang Shen

    2018-01-01

The establishment of the Aircraft Dynamic Model (ADM) constitutes the prerequisite for the design of the navigation and control system, but the aerodynamic parameters in the model cannot be readily obtained, especially for small-scaled fixed-wing UAVs. In this paper, a procedure for computing the aerodynamic parameters is developed. All the longitudinal and lateral aerodynamic derivatives are first calculated through a semi-empirical method based on aerodynamics, rather than wind tunnel tests or fluid dynamics software analysis. Second, the residuals of each derivative are identified or estimated further via an Extended Kalman Filter (EKF), with observations of the attitude and velocity from the airborne integrated navigation system. Meanwhile, the observability of the targeted parameters is analyzed and strengthened through multiple maneuvers. Based on a small-scaled fixed-wing aircraft driven by a propeller, the airborne sensors are chosen and models of the actuators are constructed. Then, real flight tests are implemented to verify the calculation and identification process. Test results confirm the rationality of the semi-empirical method and show the improved accuracy of the ADM after compensation of the parameters.

  2. Calculation and Identification of the Aerodynamic Parameters for Small-Scaled Fixed-Wing UAVs.

    Science.gov (United States)

    Shen, Jieliang; Su, Yan; Liang, Qing; Zhu, Xinhua

    2018-01-13

The establishment of the Aircraft Dynamic Model (ADM) constitutes the prerequisite for the design of the navigation and control system, but the aerodynamic parameters in the model cannot be readily obtained, especially for small-scaled fixed-wing UAVs. In this paper, a procedure for computing the aerodynamic parameters is developed. All the longitudinal and lateral aerodynamic derivatives are first calculated through a semi-empirical method based on aerodynamics, rather than wind tunnel tests or fluid dynamics software analysis. Second, the residuals of each derivative are identified or estimated further via an Extended Kalman Filter (EKF), with observations of the attitude and velocity from the airborne integrated navigation system. Meanwhile, the observability of the targeted parameters is analyzed and strengthened through multiple maneuvers. Based on a small-scaled fixed-wing aircraft driven by a propeller, the airborne sensors are chosen and models of the actuators are constructed. Then, real flight tests are implemented to verify the calculation and identification process. Test results confirm the rationality of the semi-empirical method and show the improved accuracy of the ADM after compensation of the parameters.
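The EKF-based residual estimation described in the two records above can be sketched in miniature. The toy below augments the state of a scalar system x_{k+1} = a·x_k with the unknown parameter a and estimates a from noise-free measurements; all numbers are illustrative assumptions, far simpler than the paper's six-degree-of-freedom aircraft model:

```python
# Toy EKF for parameter identification by state augmentation.
# System: x_{k+1} = a * x_k with unknown a; measurement z_k = x_k.
# Hypothetical 2-state example, not the paper's aircraft ADM.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def ekf_identify(zs, a0=0.5, r=1e-4, q=1e-8):
    x, a = zs[0], a0                 # augmented state [x, a]
    P = [[1.0, 0.0], [0.0, 1.0]]     # state covariance
    for z in zs[1:]:
        # predict: x' = a*x, a' = a; Jacobian F = [[a, x], [0, 1]]
        F = [[a, x], [0.0, 1.0]]
        xp = a * x
        P = mat_mul(mat_mul(F, P), transpose(F))
        P[0][0] += q
        P[1][1] += q
        # update with z = x + v, H = [1, 0]
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]
        y = z - xp
        x, a = xp + K[0] * y, a + K[1] * y
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
    return a

# Noise-free simulated decay with true a = 0.9.
zs = [10.0 * 0.9 ** k for k in range(40)]
a_est = ekf_identify(zs)
```

The same augmentation trick scales to the aircraft case: each aerodynamic-derivative residual becomes an extra state with trivial dynamics, and the filter correlates it with the observed attitude and velocity.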

  3. Artificial neural network modelling approach for a biomass gasification process in fixed bed gasifiers

    International Nuclear Information System (INIS)

    Mikulandrić, Robert; Lončar, Dražen; Böhning, Dorith; Böhme, Rene; Beckmann, Michael

    2014-01-01

Highlights: • 2 different equilibrium models are developed and their performance is analysed. • Neural network prediction models for 2 different fixed bed gasifier types are developed. • The influence of different input parameters on neural network model performance is analysed. • A methodology for neural network model development for different gasifier types is described. • Neural network models are verified for various operating conditions based on measured data. - Abstract: The number of small and middle-scale biomass gasification combined heat and power plants as well as syngas production plants has increased significantly in the last decade, mostly due to extensive incentives. However, existing issues regarding syngas quality, process efficiency, emissions and environmental standards are preventing biomass gasification technology from becoming more economically viable. To address these issues, special attention is given to the development of mathematical models which can be used for process analysis or plant control purposes. The presented paper analyses the possibilities of neural networks to predict process parameters with high speed and accuracy. After a related literature review and measurement data analysis, different modelling approaches for process parameter prediction that can be used for on-line process control were developed and their performance was analysed. Neural network models showed good capability to predict biomass gasification process parameters with reasonable accuracy and speed. Measurement data for model development, verification and performance analysis were derived from a biomass gasification plant operated by Technical University Dresden.

  4. A fixed point approach towards stability of delay differential equations with applications to neural networks

    NARCIS (Netherlands)

    Chen, Guiling

    2013-01-01

This thesis studies the asymptotic behavior and stability of deterministic and stochastic delay differential equations. The approach used in this thesis is based on fixed point theory, which does not resort to any Liapunov function or Liapunov functional. The main contribution of this thesis is to study

  5. Large-scale laboratory study of breaking wave hydrodynamics over a fixed bar

    Science.gov (United States)

    van der A, Dominic A.; van der Zanden, Joep; O'Donoghue, Tom; Hurther, David; Cáceres, Iván.; McLelland, Stuart J.; Ribberink, Jan S.

    2017-04-01

    A large-scale wave flume experiment has been carried out involving a T = 4 s regular wave with H = 0.85 m wave height plunging over a fixed barred beach profile. Velocity profiles were measured at 12 locations along the breaker bar using LDA and ADV. A strong undertow is generated reaching magnitudes of 0.8 m/s on the shoreward side of the breaker bar. A circulation pattern occurs between the breaking area and the inner surf zone. Time-averaged turbulent kinetic energy (TKE) is largest in the breaking area on the shoreward side of the bar where the plunging jet penetrates the water column. At this location, and on the bar crest, TKE generated at the water surface in the breaking process reaches the bottom boundary layer. In the breaking area, TKE does not reduce to zero within a wave cycle which leads to a high level of "residual" turbulence and therefore lower temporal variation in TKE compared to previous studies of breaking waves on plane beach slopes. It is argued that this residual turbulence results from the breaker bar-trough geometry, which enables larger length scales and time scales of breaking-generated vortices and which enhances turbulence production within the water column compared to plane beaches. Transport of TKE is dominated by the undertow-related flux, whereas the wave-related and turbulent fluxes are approximately an order of magnitude smaller. Turbulence production and dissipation are largest in the breaker zone and of similar magnitude, but in the shoaling zone and inner surf zone production is negligible and dissipation dominates.

  6. Fixed dynamometry is more sensitive than vital capacity or ALS rating scale.

    Science.gov (United States)

    Andres, Patricia L; Allred, Margaret Peggy; Stephens, Helen E; Proffitt Bunnell, Mary; Siener, Catherine; Macklin, Eric A; Haines, Travis; English, Robert A; Fetterman, Katherine A; Kasarskis, Edward J; Florence, Julaine; Simmons, Zachary; Cudkowicz, Merit E

    2017-10-01

    Improved outcome measures are essential to efficiently screen the growing number of potential amyotrophic lateral sclerosis (ALS) therapies. This longitudinal study of 100 (70 male) participants with ALS compared Accurate Test of Limb Isometric Strength (ATLIS), using a fixed, wireless load cell, with ALS Functional Rating Scale-Revised (ALSFRS-R) and vital capacity (VC). Participants enrolled at 5 U.S. sites. Data were analyzed from 66 participants with complete ATLIS, ALSFRS-R, and VC data over at least 3 visits. Change in ATLIS was less variable both within- and among-person than change in ALSFRS-R or VC. Additionally, participants who had normal ALSFRS-R arm and leg function averaged 12 to 32% below expected strength values measured by ATLIS. ATLIS was more sensitive to change than ALSFRS-R or VC and could decrease sample size requirements by approximately one-third. The ability of ATLIS to detect prefunctional change has potential value in early trials. Muscle Nerve 56: 710-715, 2017. © 2017 Wiley Periodicals, Inc.

  7. Evaluating Information System Integration approaches for fixed asset management framework in Tanzania

    Directory of Open Access Journals (Sweden)

    Theophil Assey

    2017-10-01

Information systems are developed based on different requirements and different technologies. Integration of these systems is of vital importance, as they cannot work in isolation; they need to share and exchange data with other information systems. Information systems handle data of different types and formats, and finding a way to make them communicate is important, as they need to exchange data during transactions, communication and other interactions. In Tanzanian Local Government Authorities (LGAs), fixed asset data are not centralized: each Local Government Authority stores its own data in isolation, yet accountability requires centralized storage for easy data access and easier data integration with other information systems in order to enhance fixed asset accountability. The study was carried out by reviewing the literature on existing information system integration approaches in order to identify and propose the best approach to be used in fixed asset management systems in LGAs in Tanzania. The different approaches used for systems integration, such as Service Oriented Architecture (SOA), Common Object Request Broker Architecture (CORBA), Component Object Model (COM) and eXtensible Markup Language (XML), were evaluated under the factors considered at the LGAs. XML was preferred over SOA, CORBA and COM because of challenges in governance, data security, availability of expertise for support and maintenance, implementation cost, performance, compliance with changing government policies and service reliability. The proposed approach integrates data for all the Local Government Authorities at a centralized location, and middleware transforms the centralized data into XML so it can easily be used by other information systems.

  8. Important aspects of Eastern Mediterranean large-scale variability revealed from data of three fixed observatories

    Science.gov (United States)

    Bensi, Manuel; Velaoras, Dimitris; Cardin, Vanessa; Perivoliotis, Leonidas; Pethiakis, George

    2015-04-01

Long-term variations of temperature and salinity observed in the Adriatic and Aegean Seas seem to be regulated by larger-scale circulation modes of the Eastern Mediterranean (EMed) Sea, such as the recently discovered feedback mechanisms, namely the BiOS (Bimodal Oscillating System) and the internal thermohaline pump theories. These theories are the result of the interpretation of many years of observations, highlighting possible interactions between two key regions of the EMed. Although repeated oceanographic cruises carried out in the past or planned for the future are a very useful tool for understanding the interaction between the two basins (e.g. alternating dense water formation, salt ingressions), recent long time series of high-frequency (up to 1 h) sampling have added valuable information to the interpretation of internal mechanisms in both areas (i.e. mesoscale eddies, evolution of fast internal processes, etc.). During the last 10 years, three deep observatories were deployed and maintained in the Adriatic, Ionian, and Aegean Seas: they are, respectively, the E2-M3A, the Pylos, and the E1-M3A. All are part of the largest European network of Fixed Point Open Ocean Observatories (FixO3, http://www.fixo3.eu/). Herein, from the analysis of temperature, salinity, and potential density time series collected at the three sites from the surface down to the intermediate and deep layers, we will discuss the almost perfectly anti-correlated behavior between the Adriatic and the Aegean Seas. Our data, collected almost continuously since 2006, reveal that these observatories represent well the thermohaline variability of their own areas. Interestingly, temperature and salinity in the intermediate layer suddenly increased in the South Adriatic from the end of 2011, exactly when they started decreasing in the Aegean Sea. Moreover, Pylos data used together with additional ones (e.g. absolute dynamic topography, temperature and salinity data from other platforms) collected

  9. A Novel Riemannian Metric Based on Riemannian Structure and Scaling Information for Fixed Low-Rank Matrix Completion.

    Science.gov (United States)

    Mao, Shasha; Xiong, Lin; Jiao, Licheng; Feng, Tian; Yeung, Sai-Kit

    2017-05-01

    Riemannian optimization has been widely used to deal with the fixed low-rank matrix completion problem, and the Riemannian metric is a crucial factor in obtaining the search direction in Riemannian optimization. This paper proposes a new Riemannian metric that simultaneously considers the Riemannian geometry structure and the scaling information, and that is smoothly varying and invariant along the equivalence class. The proposed metric effectively trades off the Riemannian geometry structure against the scaling information. Essentially, it can be viewed as a generalization of some existing metrics. Based on the proposed Riemannian metric, we also design a Riemannian nonlinear conjugate gradient algorithm, which can efficiently solve the fixed low-rank matrix completion problem. Experiments on fixed low-rank matrix completion, collaborative filtering, and image and video recovery illustrate that the proposed method is superior to the state-of-the-art methods in convergence efficiency and numerical performance.
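    The metric proposed in this record reshapes the search direction on the fixed-rank manifold and is beyond a short sketch, but the underlying completion problem can be illustrated with a plain Euclidean gradient method on a factorization U Vᵀ. This is a hedged toy baseline, not the paper's algorithm; all sizes and values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 30, 20, 3

# Ground-truth rank-k matrix and a random mask of observed entries.
M = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))
mask = rng.random((m, n)) < 0.5

# Fixed-rank completion by plain gradient descent on the factors U, V.
# (A Euclidean baseline: the paper's Riemannian metric refines how this
# search direction is measured; it is not reproduced here.)
U = 0.1 * rng.standard_normal((m, k))
V = 0.1 * rng.standard_normal((n, k))
lr = 0.02
for _ in range(3000):
    R = mask * (U @ V.T - M)            # residual on observed entries only
    U, V = U - lr * (R @ V), V - lr * (R.T @ U)

rel_err = np.linalg.norm(U @ V.T - M) / np.linalg.norm(M)
```

With roughly half the entries observed and the rank known exactly, the relative error on the full matrix drops to a small value; the Riemannian methods discussed above aim at reaching this regime in fewer, better-conditioned iterations.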

  10. Multidisciplinary approach to restoring anterior maxillary partial edentulous area using an IPS Empress 2 fixed partial denture: a clinical report.

    Science.gov (United States)

    Dundar, Mine; Gungor, M Ali; Cal, Ebru

    2003-04-01

    Esthetics is a major concern during restoration of anterior partial edentulous areas. All-ceramic fixed partial dentures may provide better esthetics and biocompatibility in the restoration of anterior teeth. This clinical report describes a multidisciplinary approach and treatment procedures with an IPS Empress 2 fixed partial denture to restore missing anterior teeth.

  11. A Chern-Simons gauge-fixed Lagrangian in a 'non-canonical' BRST approach

    International Nuclear Information System (INIS)

    Constantinescu, R; Ionescu, C

    2009-01-01

    This paper presents a possible path which starts from the extended BRST Hamiltonian formalism and ends with a covariant Lagrangian action, using the equivalence between the two formalisms. The approach allows a simple account of the form of the master equation and offers a natural identification of some 'non-canonical' operators and variables. These are the main items which solve the major difficulty of the extended BRST Lagrangian formalism, i.e., the gauge-fixing problem. The algorithm we propose applies to a non-Abelian Chern-Simons model coupled with Dirac fields

  12. Micro-scale piezoelectric vibration energy harvesting: From fixed-frequency to adaptable-frequency devices

    Science.gov (United States)

    Miller, Lindsay Margaret

    hundred milliwatts and are falling steadily as improvements are made, it is feasible to use energy harvesting to power WSNs. This research begins by presenting the results of a thorough survey of ambient vibrations in the machine room of a large campus building, which found that ambient vibrations are low-frequency, low-amplitude, time-varying, and multi-frequency. The modeling and design of fixed-frequency micro-scale energy harvesters are then presented. The model is able to take into account the rotational inertia of the harvester's proof mass, and it accepts arbitrary measured acceleration input, calculating the energy harvester's voltage as an output. The fabrication of the microelectromechanical system (MEMS) energy harvesters is discussed, and results of the devices harvesting energy from ambient vibrations are presented. The harvesters had resonance frequencies ranging from 31 to 232 Hz, which was the lowest reported in the literature for a MEMS device, and produced 24 pW/g² to 10 nW/g² of harvested power from ambient vibrations. A novel method for frequency modification of the released harvester devices using a dispenser-printed mass is then presented, demonstrating a frequency shift of 20 Hz. Optimization of the MEMS energy harvester connected to a resistive load is then presented, finding that the harvested power output can be increased to several microwatts with the optimized design as long as the driving frequency matches the harvester's resonance frequency. A framework is then presented to allow a similar optimization to be conducted with the harvester connected to a synchronously switched pre-bias circuit. With the realization that the optimized energy harvester only produces usable amounts of power if the resonance frequency and driving frequency match, which is unrealistic in the case of ambient vibrations that change over time and are not always known a priori, an adaptable-frequency energy harvester was designed.
The adaptable-frequency harvester

  13. Drying kinetics characteristic of Indonesia lignite coal (IBC) using lab scale fixed bed reactor

    Energy Technology Data Exchange (ETDEWEB)

    Kang, TaeJin; Jeon, DoMan; Namkung, Hueon; Jang, DongHa; Jeon, Youngshin; Kim, Hyungtaek [Ajou Univ., Suwon (Korea, Republic of). Div. of Energy Systems Research

    2013-07-01

    The recent instability of the energy market has aroused considerable interest in coal, of which tremendous proven reserves exist worldwide. South Korea ranked second after Japan, importing 80 million tons of coal in 2007. Among the various coals there is a disused class called Low Rank Coal (LRC). A drying process must precede its utilization in power plants. In this study, the drying kinetics of LRC were derived using a fixed-bed reactor. The drying kinetics were deduced from the particle size, the inlet gas temperature, the drying time, the gas velocity, and the L/D ratio. The Reynolds number was considered to correct for gas velocity and particle size, and the L/D ratio to correct for the packing height of the coal. The fixed-bed reactor experiments show that active drying of free water and a phase boundary reaction are suitable mechanisms.
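    As a hedged illustration of how fixed-bed drying data of this kind are commonly reduced to a rate constant, the contracting-area form of the phase-boundary-reaction model, g(α) = 1 − (1 − α)^(1/2) = kt, can be fitted by linearization. The data below are synthetic and the model choice is an assumption for illustration, not the paper's measured kinetics.

```python
import numpy as np

# Synthetic moisture-conversion data following the contracting-area
# "phase boundary reaction" model g(a) = 1 - (1 - a)**0.5 = k*t.
# Values are illustrative, not the paper's measurements.
k_true = 0.01                      # rate constant, 1/min (assumed)
t = np.arange(0.0, 80.0, 5.0)      # drying time, min
rng = np.random.default_rng(1)
alpha = 1.0 - (1.0 - k_true * t) ** 2
alpha = np.clip(alpha + rng.normal(0.0, 0.005, t.size), 0.0, 0.999)

# Linearize g(alpha) = k*t and fit k by least squares through the origin.
g = 1.0 - np.sqrt(1.0 - alpha)
k_est = np.sum(g * t) / np.sum(t * t)
```

The same linearization applied to measured conversion curves at different gas temperatures yields the temperature dependence of k, which is how a mechanism such as the one named above is typically identified.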

  14. Light Dilaton at Fixed Points and Ultra Light Scale Super Yang Mills

    DEFF Research Database (Denmark)

    Antipin, Oleg; Mojaza, Matin; Sannino, Francesco

    2012-01-01

    of pure supersymmetric Yang-Mills. We can therefore determine the exact nonperturbative fermion condensate and deduce relevant properties of the nonperturbative spectrum of the theory. We also show that the intrinsic scale of super Yang-Mills is exponentially smaller than the scale associated...

  15. Mass scale of vectorlike matter and superpartners from IR fixed point predictions of gauge and top Yukawa couplings

    Science.gov (United States)

    Dermíšek, Radovan; McGinnis, Navin

    2018-03-01

    We use the IR fixed point predictions for gauge couplings and the top Yukawa coupling in the minimal supersymmetric model (MSSM) extended with vectorlike families to infer the scale of vectorlike matter and superpartners. We quote results for several extensions of the MSSM and present results in detail for the MSSM extended with one complete vectorlike family. We find that for a unified gauge coupling αG>0.3 vectorlike matter or superpartners are expected within 1.7 TeV (2.5 TeV) based on all three gauge couplings being simultaneously within 1.5% (5%) from observed values. This range extends to about 4 TeV for αG>0.2 . We also find that in the scenario with two additional large Yukawa couplings of vectorlike quarks the IR fixed point value of the top Yukawa coupling independently points to a multi-TeV range for vectorlike matter and superpartners. Assuming a universal value for all large Yukawa couplings at the grand unified theory scale, the measured top quark mass can be obtained from the IR fixed point for tan β ≃4 . The range expands to any tan β >3 for significant departures from the universality assumption. Considering that the Higgs boson mass also points to a multi-TeV range for superpartners in the MSSM, adding a complete vectorlike family at the same scale provides a compelling scenario where the values of gauge couplings and the top quark mass are understood as a consequence of the particle content of the model.

  16. Evaluation on Optimal Scale of Rural Fixed-asset Investment – Based on Microcosmic Perspective of Farmers’ Income Increase

    OpenAIRE

    DENG, Jinqian; SHAN, Kangkang; ZHANG, Yan

    2014-01-01

    The rural fundamental and productive fixed-asset investment not only actively influences changes in farmers' operational, wage, and property income; it also has an optimal scale range for farmers' income increase. From the perspective of farmers' income increase, this article evaluates the optimal scale of rural fixed-asset investment by setting up a model with statistical data, and the results show that the optimal scale of per capita rural fixed-asset investment is 76.35% of...

  17. Racial Inequality in Education in Brazil: A Twins Fixed-Effects Approach.

    Science.gov (United States)

    Marteleto, Letícia J; Dondero, Molly

    2016-08-01

    Racial disparities in education in Brazil (and elsewhere) are well documented. Because this research typically examines educational variation between individuals in different families, however, it cannot disentangle whether racial differences in education are due to racial discrimination or to structural differences in unobserved neighborhood and family characteristics. To address this common data limitation, we use an innovative within-family twin approach that takes advantage of the large sample of Brazilian adolescent twins classified as different races in the 1982 and 1987-2009 Pesquisa Nacional por Amostra de Domicílios. We first examine the contexts within which adolescent twins in the same family are labeled as different races to determine the characteristics of families crossing racial boundaries. Then, as a way to hold constant shared unobserved and observed neighborhood and family characteristics, we use twins fixed-effects models to assess whether racial disparities in education exist between twins and whether such disparities vary by gender. We find that even under this stringent test of racial inequality, the nonwhite educational disadvantage persists and is especially pronounced for nonwhite adolescent boys.

  18. Social interactions and college enrollment: A combined school fixed effects/instrumental variables approach.

    Science.gov (United States)

    Fletcher, Jason M

    2015-07-01

    This paper provides some of the first evidence of peer effects in college enrollment decisions. There are several empirical challenges in assessing the influences of peers in this context, including the endogeneity of high school, shared group-level unobservables, and identifying policy-relevant parameters of social interactions models. This paper addresses these issues by using an instrumental variables/fixed effects approach that compares students in the same school but different grade-levels who are thus exposed to different sets of classmates. In particular, plausibly exogenous variation in peers' parents' college expectations are used as an instrument for peers' college choices. Preferred specifications indicate that increasing a student's exposure to college-going peers by ten percentage points is predicted to raise the student's probability of enrolling in college by 4 percentage points. This effect is roughly half the magnitude of growing up in a household with married parents (vs. an unmarried household). Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Freeway Multisensor Data Fusion Approach Integrating Data from Cellphone Probes and Fixed Sensors

    Directory of Open Access Journals (Sweden)

    Shanglu He

    2016-01-01

    Freeway traffic state information from multiple sources provides sufficient support to traffic surveillance but also brings challenges. This paper investigates the fusion of a new data combination from a cellular handoff probe system and microwave sensors, and proposes a fusion method based on the neural network technique. To identify the factors influencing the accuracy of fusion results, we analyzed their sensitivity by changing the inputs of the neural-network-based fusion model. The results showed that handoff link length and sample size are the most influential parameters for the precision of fusion. Then, the effectiveness and capability of the proposed fusion method under various traffic conditions were evaluated, and a comparative analysis between the proposed method and other fusion approaches was conducted. The results of the simulation test and evaluation showed that the fusion method can complement the drawbacks of each collection method, improve overall estimation accuracy, adapt to variable traffic conditions (free flow or incident state), suit the fusion of data from cellphone probes and fixed sensors, and outperform other fusion methods.

  20. The scaling of burnout data for a single fluid at a fixed pressure

    International Nuclear Information System (INIS)

    Kirby, G.J.

    1966-12-01

    The success of the scaling factor concept in linking burnout measurements made in two different fluids has been amply demonstrated. This memorandum investigates the possibility of linking measurements made on two different systems in the same fluid. It seems that good accuracy may be obtained for systems whose linear dimensions differ by as much as a factor of two; this offers the possibility of saving very substantial amounts of power in testing reactor fuel elements. A novel conclusion is that systems do not need to be geometrically similar in order to be linked by scaling factors. (author)

  1. A New Approach to Energy Calculation of Road Accidents against Fixed Small Section Elements Based on Close-Range Photogrammetry

    Directory of Open Access Journals (Sweden)

    Alejandro Morales

    2017-11-01

    This paper presents a new approach for energetic analyses of traffic accidents against fixed road elements using close-range photogrammetry. The main contributions of the developed approach are related to the quality of the 3D photogrammetric models, which enable objective and accurate energetic analyses through the in-house tool CRASHMAP. As a result, security forces can reconstruct the accident in a simple and comprehensive way without requiring spreadsheets or external tools, and thus avoid the subjectivity and imprecisions of the traditional protocol. The tool has already been validated, and is being used by the Local Police of Salamanca (Salamanca, Spain) for the resolution of numerous accidents. In this paper, a real accident of a car against a fixed metallic pole is analysed, and significant discrepancies are obtained between the new approach and the traditional protocol of data acquisition regarding collision speed and absorbed energy.

  2. A fixed partial appliance approach towards treatment of anterior single tooth crossbite: Report of two cases

    Directory of Open Access Journals (Sweden)

    M Gawthaman

    2017-01-01

    Crossbite can be treated using both removable and fixed appliances. This paper reports two cases in which an anterior single tooth in crossbite, locked out of the arch form, was treated with a simple fixed partial appliance. Orthodontic treatment was initiated by creating space for the locked-out incisor using an open coil spring, and the tooth was then corrected using MBT brackets and a nitinol archwire for alignment. Treatment goals were achieved, and esthetics and occlusion were maintained postoperatively. Treatment objectives were obtained within a short duration using this technique, and there was an improvement in the patients' smiles.

  3. Planning alternative organizational frameworks for a large scale educational telecommunications system served by fixed/broadcast satellites

    Science.gov (United States)

    Walkmeyer, J.

    1973-01-01

    This memorandum explores a host of considerations meriting attention from those who are concerned with designing organizational structures for development and control of a large scale educational telecommunications system using satellites. Part of a broader investigation at Washington University into the potential uses of fixed/broadcast satellites in U.S. education, this study lays the groundwork for a later effort to spell out a small number of hypothetical organizational blueprints for such a system and for assessment of potential short and long term impacts. The memorandum consists of two main parts. Part A deals with subjects of system-wide concern, while Part B deals with matters related to specific system components.

  4. Evaluation of Two Biosorbents in the Removal of Metal Ions in Aqueous Using a Pilot Scale Fixed-bed System

    Directory of Open Access Journals (Sweden)

    Andre Gadelha Oliveira

    2014-05-01

    The aim of the present work was to investigate the adsorption of the toxic metal ions copper, nickel, and zinc from aqueous solutions using low-cost natural biomass (sugarcane bagasse and green coconut fiber) in a pilot-scale fixed-bed system. The hydraulic retention time (HRT) was 229 minutes, and the lowest adsorbent usage rate (AUR) found was 0.10 g·L⁻¹ for copper using green coconut fibers. The highest adsorption capacities found were 1.417 and 2.772 mg·g⁻¹ of Cu(II) ions for sugarcane bagasse and green coconut fibers, respectively. The results showed that both sugarcane bagasse and green coconut fiber have potential for the removal of copper, nickel, and zinc ions from aqueous solution and possible use in wastewater treatment stations.

  5. ERL with non-scaling fixed field alternating gradient lattice for eRHIC

    Energy Technology Data Exchange (ETDEWEB)

    Trbojevic, D. [Brookhaven National Lab. (BNL), Upton, NY (United States); Berg, J. S. [Brookhaven National Lab. (BNL), Upton, NY (United States); Brooks, S. [Brookhaven National Lab. (BNL), Upton, NY (United States); Hao, Y. [Brookhaven National Lab. (BNL), Upton, NY (United States); Litvinenko, V. N. [Brookhaven National Lab. (BNL), Upton, NY (United States); Liu, C. [Brookhaven National Lab. (BNL), Upton, NY (United States); Meot, F. [Brookhaven National Lab. (BNL), Upton, NY (United States); Minty, M. [Brookhaven National Lab. (BNL), Upton, NY (United States); Ptitsyn, V. [Brookhaven National Lab. (BNL), Upton, NY (United States); Roser, T. [Brookhaven National Lab. (BNL), Upton, NY (United States); Thieberger, P. [Brookhaven National Lab. (BNL), Upton, NY (United States); Tsoupas, N. [Brookhaven National Lab. (BNL), Upton, NY (United States)

    2015-05-03

    The proposed eRHIC electron-hadron collider uses a "non-scaling FFAG" (NS-FFAG) lattice to recirculate 16 turns of different energy through just two beam lines located in the RHIC tunnel. This paper presents lattices for these two FFAGs that are optimized for low magnet field and to minimize total synchrotron radiation across the energy range. The higher number of recirculations in the FFAG allows a shorter linac (1.322 GeV) to be used, drastically reducing cost, while still achieving a 21.2 GeV maximum energy to collide with one of the existing RHIC hadron rings at up to 250 GeV. eRHIC uses many cost-saving measures in addition to the FFAG: the linac operates in energy recovery mode, so the beams also decelerate via the same FFAG loops and energy is recovered from the interacted beam. All magnets will be constructed from NdFeB permanent magnet material, meaning chillers and large magnet power supplies are not needed. This paper also describes a small prototype ERL-FFAG accelerator that will test all of these technologies in combination to reduce technical risk for eRHIC.

  6. Scaling Consumers' Purchase Involvement: A New Approach

    Directory of Open Access Journals (Sweden)

    Jörg Kraigher-Krainer

    2012-06-01

    A two-dimensional scale, called the ECID Scale, is presented in this paper. The scale is based on a comprehensive model and captures the two antecedent factors of purchase-related involvement, namely whether motivation is intrinsic or extrinsic and whether risk is perceived as low or high. The procedure of scale development and item selection is described. The scale turns out to perform well in terms of validity, reliability, and objectivity despite the use of a small set of items (four each), allowing for simultaneous measurements of up to ten purchases per respondent. The procedure of administering the scale is described so that it can now easily be applied by both scholars and practitioners. Finally, managerial implications of data received from its application, which provide insights into possible strategic marketing conclusions, are discussed.

  7. Fixing Formalin: A Method to Recover Genomic-Scale DNA Sequence Data from Formalin-Fixed Museum Specimens Using High-Throughput Sequencing.

    Directory of Open Access Journals (Sweden)

    Sarah M Hykin

    For 150 years or more, specimens were routinely collected and deposited in natural history collections without preserving fresh tissue samples for genetic analysis. In the case of most herpetological specimens (i.e. amphibians and reptiles), attempts to extract and sequence DNA from formalin-fixed, ethanol-preserved specimens, particularly for use in phylogenetic analyses, have been laborious and largely ineffective due to the highly fragmented nature of the DNA. As a result, tens of thousands of specimens in herpetological collections have not been available for sequence-based phylogenetic studies. Massively parallel high-throughput sequencing methods and the associated bioinformatics, however, are particularly suited to recovering meaningful genetic markers from severely degraded/fragmented DNA sequences, such as DNA damaged by formalin fixation. In this study, we compared previously published DNA extraction methods on three tissue types subsampled from formalin-fixed specimens of Anolis carolinensis, followed by sequencing. Sufficient-quality DNA was recovered from liver tissue, making this technique minimally destructive to museum specimens. Sequencing was only successful for the more recently collected specimen (collected ~30 ybp). We suspect this could be due either to the conditions of preservation and/or the amount of tissue used for extraction purposes. For the successfully sequenced sample, we found a high rate of base misincorporation. After rigorous trimming, we successfully mapped 27.93% of the cleaned reads to the reference genome, were able to reconstruct the complete mitochondrial genome, and recovered an accurate phylogenetic placement for our specimen. We conclude that the amount of DNA available, which can vary depending on specimen age and preservation conditions, will determine if sequencing will be successful.
The technique described here will greatly improve the value of museum collections by making many formalin-fixed specimens

  8. Fixing Formalin: A Method to Recover Genomic-Scale DNA Sequence Data from Formalin-Fixed Museum Specimens Using High-Throughput Sequencing.

    Science.gov (United States)

    Hykin, Sarah M; Bi, Ke; McGuire, Jimmy A

    2015-01-01

    For 150 years or more, specimens were routinely collected and deposited in natural history collections without preserving fresh tissue samples for genetic analysis. In the case of most herpetological specimens (i.e. amphibians and reptiles), attempts to extract and sequence DNA from formalin-fixed, ethanol-preserved specimens, particularly for use in phylogenetic analyses, have been laborious and largely ineffective due to the highly fragmented nature of the DNA. As a result, tens of thousands of specimens in herpetological collections have not been available for sequence-based phylogenetic studies. Massively parallel high-throughput sequencing methods and the associated bioinformatics, however, are particularly suited to recovering meaningful genetic markers from severely degraded/fragmented DNA sequences, such as DNA damaged by formalin fixation. In this study, we compared previously published DNA extraction methods on three tissue types subsampled from formalin-fixed specimens of Anolis carolinensis, followed by sequencing. Sufficient-quality DNA was recovered from liver tissue, making this technique minimally destructive to museum specimens. Sequencing was only successful for the more recently collected specimen (collected ~30 ybp). We suspect this could be due either to the conditions of preservation and/or the amount of tissue used for extraction purposes. For the successfully sequenced sample, we found a high rate of base misincorporation. After rigorous trimming, we successfully mapped 27.93% of the cleaned reads to the reference genome, were able to reconstruct the complete mitochondrial genome, and recovered an accurate phylogenetic placement for our specimen. We conclude that the amount of DNA available, which can vary depending on specimen age and preservation conditions, will determine if sequencing will be successful. The technique described here will greatly improve the value of museum collections by making many formalin-fixed specimens available for

  9. Does grandchild care influence grandparents' self-rated health? Evidence from a fixed effects approach.

    Science.gov (United States)

    Ates, Merih

    2017-10-01

    The present study aims to identify whether and how supplementary grandchild care is causally related to grandparents' self-rated health (SRH). Based on longitudinal data drawn from the German Aging Survey (DEAS; 2008-2014), I compare the results of pooled OLS, pooled OLS with lagged dependent variables (POLS-LD), and random and fixed effects (RE, FE) panel regression. The results show that there is a positive but small association between supplementary grandchild care and SRH in the POLS, POLS-LD, and RE models. However, the fixed effects model shows that intrapersonal change in grandchild care does not cause a change in grandparents' SRH. The FE findings indicate that supplementary grandchild care in Germany does not have a causal impact on grandparents' SRH, suggesting that models with between-variation components overestimate the influence of grandchild care on grandparents' health because they do not control for unobserved (time-constant) heterogeneity. Copyright © 2017 Elsevier Ltd. All rights reserved.
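    The contrast this record draws between pooled OLS and fixed effects can be sketched on synthetic panel data: a time-constant trait that drives both the regressor and the outcome biases the pooled slope, while the within (demeaning) transformation removes it. The setup is illustrative only, not the DEAS data; `trait`, `x`, and `y` are stand-ins for unobserved vitality, grandchild care, and SRH.

```python
import numpy as np

rng = np.random.default_rng(2)
n_id, n_t = 500, 4                 # individuals, panel waves
beta = 0.0                         # true causal effect set to zero

# Time-constant trait that raises both the regressor and the outcome,
# mimicking unobserved heterogeneity (illustrative, not the DEAS data).
trait = rng.standard_normal(n_id)
x = (rng.standard_normal((n_id, n_t)) + trait[:, None] > 0).astype(float)
y = beta * x + trait[:, None] + 0.5 * rng.standard_normal((n_id, n_t))

# Pooled OLS slope: absorbs the trait and is biased away from zero.
xr, yr = x.ravel(), y.ravel()
b_pols = np.sum((xr - xr.mean()) * (yr - yr.mean())) / np.sum((xr - xr.mean()) ** 2)

# Fixed effects via the within transformation: demean per individual.
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
b_fe = np.sum(xd * yd) / np.sum(xd * xd)
```

With a true effect of zero, the pooled slope comes out strongly positive while the within estimator stays near zero, which is exactly the overestimation pattern the abstract attributes to models with between-variation components.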

  10. Structural Analysis Approach to Fault Diagnosis with Application to Fixed-wing Aircraft Motion

    DEFF Research Database (Denmark)

    Izadi-Zamanabadi, Roozbeh

    2002-01-01

    The paper presents a structural analysis based method for fault diagnosis purposes. The method uses the structural model of the system and utilizes the matching idea to extract the system's inherent redundant information. The structural model is represented by a bipartite directed graph. FDI possibilities are examined by further analysis of the obtained information. The method is illustrated by applying it to an LTI model of the motion of a fixed-wing aircraft.

  11. Structural Analysis Approach to Fault Diagnosis with Application to Fixed-wing Aircraft Motion

    DEFF Research Database (Denmark)

    Izadi-Zamanabadi, Roozbeh

    2001-01-01

    The paper presents a structural analysis based method for fault diagnosis purposes. The method uses the structural model of the system and utilizes the matching idea to extract the system's inherent redundant information. The structural model is represented by a bipartite directed graph. FDI possibilities are examined by further analysis of the obtained information. The method is illustrated by applying it to an LTI model of the motion of a fixed-wing aircraft.

  12. A unified approach to fixed-order controller design via linear matrix inequalities

    Directory of Open Access Journals (Sweden)

    Iwasaki T.

    1995-01-01

    We consider the design of fixed-order (or low-order) linear controllers which meet certain performance and/or robustness specifications. The following three problems are considered: covariance control as a nominal performance problem, 𝒬-stabilization as a robust stabilization problem, and the robust L∞ control problem as a robust performance problem. All three control problems are converted to a single linear algebra problem of solving a linear matrix inequality (LMI) of the type BGC + (BGC)ᵀ + Q < 0 for the unknown matrix G. Thus this paper addresses the fixed-order controller design problem in a unified way. Necessary and sufficient conditions for the existence of a fixed-order controller which satisfies the design specifications for each problem are derived, and an explicit controller formula is given. In each case, the resulting problem is shown to be a search for a (structured) positive definite matrix X such that X ∈ 𝒞₁ and X⁻¹ ∈ 𝒞₂, where 𝒞₁ and 𝒞₂ are convex sets defined by LMIs. Computational aspects of the nonconvex LMI problem are discussed.
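    For fixed B and C, the inequality BGC + (BGC)ᵀ + Q < 0 mentioned in this record is linear in G, so feasibility of a candidate G reduces to checking that a symmetric matrix is negative definite. A minimal numeric sketch with the illustrative choice B = C = I (a toy instance, not an example from the paper):

```python
import numpy as np

# Toy instance of the LMI  B G C + (B G C)^T + Q < 0  with B = C = I,
# where it reduces to G + G^T + Q < 0 (values are illustrative).
Q = np.array([[1.0, 0.2, 0.0],
              [0.2, 0.5, 0.1],
              [0.0, 0.1, 2.0]])
B = np.eye(3)
C = np.eye(3)

# One feasible choice: G = -(Q/2 + I) gives B G C + (B G C)^T + Q = -2 I.
G = -(Q / 2 + np.eye(3))

lmi = B @ G @ C + (B @ G @ C).T + Q
feasible = bool(np.all(np.linalg.eigvalsh(lmi) < 0))   # negative definite?
```

Finding such a G for general B and C (and with structure constraints on X, as in the paper) is what semidefinite-programming solvers are used for; the eigenvalue test here only verifies a given candidate.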

  13. A unified approach to fixed-order controller design via linear matrix inequalities

    Directory of Open Access Journals (Sweden)

    T. Iwasaki

    1995-01-01

    We consider the design of fixed-order (or low-order) linear controllers which meet certain performance and/or robustness specifications. The following three problems are considered: covariance control as a nominal performance problem, Q-stabilization as a robust stabilization problem, and the robust L∞ control problem as a robust performance problem. All three control problems are converted to a single linear algebra problem of solving a linear matrix inequality (LMI) of the type BGC + (BGC)ᵀ + Q < 0 for the unknown matrix G. Thus this paper addresses the fixed-order controller design problem in a unified way. Necessary and sufficient conditions for the existence of a fixed-order controller which satisfies the design specifications for each problem are derived, and an explicit controller formula is given. In each case, the resulting problem is shown to be a search for a (structured) positive definite matrix X such that X ∈ C₁ and X⁻¹ ∈ C₂, where C₁ and C₂ are convex sets defined by LMIs. Computational aspects of the nonconvex LMI problem are discussed.

  14. Is the fixed-dose combination of telmisartan and hydrochlorothiazide a good approach to treat hypertension?

    Directory of Open Access Journals (Sweden)

    Marc P Maillard

    2007-07-01

    Marc P Maillard, Michel Burnier, Service of Nephrology, Department of Internal Medicine, Lausanne University Hospital, Switzerland. Abstract: Blockade of the renin-angiotensin system with selective AT1 receptor antagonists is recognized as an effective means to lower blood pressure in hypertensive patients. Among the class of AT1 receptor antagonists, telmisartan offers the advantage of a very long half-life, which enables blood pressure control over 24 hours using once-daily administration. The combination of telmisartan with hydrochlorothiazide is a logical step, because numerous previous studies have demonstrated that sodium depletion enhances the antihypertensive efficacy of drugs interfering with the activity of the renin-angiotensin system (RAS). In accordance with past experience using similar compounds blocking the RAS, several controlled studies have now demonstrated that the fixed-dose combination of telmisartan/hydrochlorothiazide lowers blood pressure more effectively than either telmisartan or hydrochlorothiazide alone. Of clinical interest also is the observation that the excellent clinical tolerance of the angiotensin II receptor antagonist is not affected by the association with the low-dose thiazide. Thus telmisartan/hydrochlorothiazide is an effective and well-tolerated antihypertensive combination. Finally, the development of fixed-dose combinations should improve drug adherence because of the one-pill-a-day regimen. Keywords: telmisartan, hydrochlorothiazide, fixed-dose combinations, antihypertensive agent, safety, compliance

  15. Near-Bed Turbulent Kinetic Energy Budget Under a Large-Scale Plunging Breaking Wave Over a Fixed Bar

    Science.gov (United States)

    van der Zanden, Joep; van der A, Dominic A.; Cáceres, Iván.; Hurther, David; McLelland, Stuart J.; Ribberink, Jan S.; O'Donoghue, Tom

    2018-02-01

    Hydrodynamics under regular plunging breaking waves over a fixed breaker bar were studied in a large-scale wave flume. A previous paper reported on the outer-flow hydrodynamics; the present paper focuses on the turbulence dynamics near the bed (up to 0.10 m from the bed). Velocities were measured with high spatial and temporal resolution using a two-component laser Doppler anemometer. The results show that even at close distance from the bed (1 mm), the turbulent kinetic energy (TKE) increases by a factor of five between the shoaling and breaking regions because of the invasion of wave-breaking turbulence. The sign and phase behavior of the time-dependent Reynolds shear stresses at elevations up to approximately 0.02 m from the bed (roughly twice the elevation of the boundary layer overshoot) are mainly controlled by local bed-shear-generated turbulence, but at higher elevations the Reynolds stresses are controlled by wave-breaking turbulence. The measurements are subsequently analyzed to investigate the TKE budget at wave-averaged and intrawave time scales. Horizontal and vertical turbulence advection, production, and dissipation are the major terms. A two-dimensional wave-averaged circulation drives advection of wave-breaking turbulence through the near-bed layer, resulting in a net downward influx in the bar trough region, followed by seaward advection along the bar's shoreward slope and an upward outflux above the bar crest. The strongly nonuniform flow across the bar, combined with the presence of anisotropic turbulence, enhances turbulent production rates near the bed.

  16. Modeling Ontario regional electricity system demand using a mixed fixed and random coefficients approach

    Energy Technology Data Exchange (ETDEWEB)

    Hsiao, C.; Mountain, D.C.; Chan, M.W.L.; Tsui, K.Y. (University of Southern California, Los Angeles (USA); McMaster Univ., Hamilton, ON (Canada); Chinese Univ. of Hong Kong, Shatin)

    1989-12-01

    In examining the municipal peak and kilowatt-hour demand for electricity in Ontario, the issue of homogeneity across geographic regions is explored. A common model across municipalities and geographic regions cannot be supported by the data. Various procedures are considered which deal with this heterogeneity and yet reduce the multicollinearity problems associated with region-specific demand formulations. The recommended model controls for regional differences by assuming that the coefficients of regional-seasonal specific factors are fixed and different, while the coefficients of economic and weather variables for any one municipality are random draws from a common population; information on all municipalities is combined through a Bayes procedure. 8 tabs., 41 refs.

  17. Data requirement comparison between the fixed site upgrade rule guidance compendium and the Structured Assessment Approach Licensee Submittal Document

    Energy Technology Data Exchange (ETDEWEB)

    Parziale, A.A.; Sacks, I.J.

    1980-12-01

    We compared the Structured Assessment Approach (SAA) Licensee Submittal Document (LSD) with the Fixed Site Physical Protection Upgrade Rule Guidance Compendium Standard Format and Content (SFC) Guide, using correlation matrices, to determine how well the data requirements of the SFC Guide coincide with those of a specific automated vulnerability assessment technique for fixed-site nuclear fuel cycle facilities, namely the SAA. We found that a limited SAA assessment is possible using the SFC Guide, but significant and critical safeguards vulnerabilities might be missed. We also found that in some cases the organization and format of the SFC Guide input data and information made the preparation of data for the SAA somewhat awkward. 2 refs., 2 tabs.

  18. Real-time approaches to the estimation of local wind velocity for a fixed-wing unmanned air vehicle

    International Nuclear Information System (INIS)

    Chan, W L; Lee, C S; Hsiao, F B

    2011-01-01

    Three real-time approaches to estimating the local wind velocity for a fixed-wing unmanned air vehicle are presented in this study. All three methods are built around the navigation equations with added wind components. The first approach calculates the local wind speed by substituting the ground speed and ascent rate data given by the Global Positioning System (GPS) into the navigation equations. The second and third approaches utilize the extended Kalman filter (EKF) and the unscented Kalman filter (UKF), respectively. The results show that, despite the nonlinearity of the navigation equations, the EKF performs on a par with the UKF. A time-varying noise estimation method based on the Wiener filter is also discussed. Results are compared with the average wind speed measured on the ground. All three approaches prove reliable, each with its own advantages and disadvantages.
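The first (GPS-substitution) approach reduces, in a planar simplification, to subtracting the air-relative velocity, reconstructed from airspeed and heading, from the GPS ground velocity. The sketch below is an illustrative assumption (2-D, no sideslip, invented function names), not the paper's exact formulation:

```python
import math

def wind_from_gps(ground_vel_ne, airspeed, heading_rad):
    """Estimate the (north, east) wind components as the difference between
    the GPS ground velocity and the air-relative velocity reconstructed
    from airspeed and heading. 2-D, zero-sideslip simplification."""
    air_n = airspeed * math.cos(heading_rad)
    air_e = airspeed * math.sin(heading_rad)
    return (ground_vel_ne[0] - air_n, ground_vel_ne[1] - air_e)

# A 5 m/s tailwind from the south while flying due north at 20 m/s airspeed:
wind = wind_from_gps((25.0, 0.0), 20.0, 0.0)
print(wind)  # (5.0, 0.0)
```

The EKF/UKF approaches in the abstract refine this same residual by filtering it through the full nonlinear navigation equations instead of differencing raw measurements.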

  19. Intergenerational continuity in attitudes: A latent variable family fixed-effects approach.

    Science.gov (United States)

    Schofield, Thomas J; Abraham, W Todd

    2017-12-01

    Attitudes are associated with behavior. Adolescents raised by parents who endorse particular attitudes are relatively more likely to endorse those same attitudes. The present study addresses conditions that would moderate intergenerational continuity in attitudes across 6 domains: authoritative parenting, conventional life goals, gender egalitarianism, deviancy, abortion, and sexual permissiveness. Hypothesized moderators included the attitudes of the other parent and adolescent sex. Data come from a 2-generation study of a cohort of 451 adolescents (52% female), a close-aged sibling, and their parents. After employing a novel specification in which family fixed-effect models partitioned out variation at the between-family level, hypotheses were tested on the within-family variance. Unlike typical family fixed-effect models, this specification accounted for measurement error. Intergenerational continuity was nonsignificant for deviancy, negative for sexual permissiveness, and conditional on the attitudes of the coparent for authoritative parenting, conventional life goals, and gender egalitarianism. Adolescent age, sex, and conscientiousness were accounted for in all analyses. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  20. Scaling of brain metabolism with a fixed energy budget per neuron: implications for neuronal activity, plasticity and evolution.

    Science.gov (United States)

    Herculano-Houzel, Suzana

    2011-03-01

    It is usually considered that larger brains have larger neurons, which consume more energy individually, and are therefore accompanied by a larger number of glial cells per neuron. These notions, however, have never been tested. Based on glucose and oxygen metabolic rates in awake animals and their recently determined numbers of neurons, here I show that, contrary to the expected, the estimated glucose use per neuron is remarkably constant, varying only by 40% across the six species of rodents and primates (including humans). The estimated average glucose use per neuron does not correlate with neuronal density in any structure. This suggests that the energy budget of the whole brain per neuron is fixed across species and brain sizes, such that total glucose use by the brain as a whole, by the cerebral cortex and also by the cerebellum alone are linear functions of the number of neurons in the structures across the species (although the average glucose consumption per neuron is at least 10× higher in the cerebral cortex than in the cerebellum). These results indicate that the apparently remarkable use in humans of 20% of the whole body energy budget by a brain that represents only 2% of body mass is explained simply by its large number of neurons. Because synaptic activity is considered the major determinant of metabolic cost, a conserved energy budget per neuron has several profound implications for synaptic homeostasis and the regulation of firing rates, synaptic plasticity, brain imaging, pathologies, and for brain scaling in evolution.

  1. Scaling of brain metabolism with a fixed energy budget per neuron: implications for neuronal activity, plasticity and evolution.

    Directory of Open Access Journals (Sweden)

    Suzana Herculano-Houzel

    Full Text Available It is usually considered that larger brains have larger neurons, which consume more energy individually, and are therefore accompanied by a larger number of glial cells per neuron. These notions, however, have never been tested. Based on glucose and oxygen metabolic rates in awake animals and their recently determined numbers of neurons, here I show that, contrary to the expected, the estimated glucose use per neuron is remarkably constant, varying only by 40% across the six species of rodents and primates (including humans). The estimated average glucose use per neuron does not correlate with neuronal density in any structure. This suggests that the energy budget of the whole brain per neuron is fixed across species and brain sizes, such that total glucose use by the brain as a whole, by the cerebral cortex and also by the cerebellum alone are linear functions of the number of neurons in the structures across the species (although the average glucose consumption per neuron is at least 10× higher in the cerebral cortex than in the cerebellum). These results indicate that the apparently remarkable use in humans of 20% of the whole body energy budget by a brain that represents only 2% of body mass is explained simply by its large number of neurons. Because synaptic activity is considered the major determinant of metabolic cost, a conserved energy budget per neuron has several profound implications for synaptic homeostasis and the regulation of firing rates, synaptic plasticity, brain imaging, pathologies, and for brain scaling in evolution.

  2. Scaling of Brain Metabolism with a Fixed Energy Budget per Neuron: Implications for Neuronal Activity, Plasticity and Evolution

    Science.gov (United States)

    Herculano-Houzel, Suzana

    2011-01-01

    It is usually considered that larger brains have larger neurons, which consume more energy individually, and are therefore accompanied by a larger number of glial cells per neuron. These notions, however, have never been tested. Based on glucose and oxygen metabolic rates in awake animals and their recently determined numbers of neurons, here I show that, contrary to the expected, the estimated glucose use per neuron is remarkably constant, varying only by 40% across the six species of rodents and primates (including humans). The estimated average glucose use per neuron does not correlate with neuronal density in any structure. This suggests that the energy budget of the whole brain per neuron is fixed across species and brain sizes, such that total glucose use by the brain as a whole, by the cerebral cortex and also by the cerebellum alone are linear functions of the number of neurons in the structures across the species (although the average glucose consumption per neuron is at least 10× higher in the cerebral cortex than in the cerebellum). These results indicate that the apparently remarkable use in humans of 20% of the whole body energy budget by a brain that represents only 2% of body mass is explained simply by its large number of neurons. Because synaptic activity is considered the major determinant of metabolic cost, a conserved energy budget per neuron has several profound implications for synaptic homeostasis and the regulation of firing rates, synaptic plasticity, brain imaging, pathologies, and for brain scaling in evolution. PMID:21390261

  3. What is at stake in multi-scale approaches

    International Nuclear Information System (INIS)

    Jamet, Didier

    2008-01-01

    Full text of publication follows: Multi-scale approaches amount to analyzing physical phenomena at small space and time scales in order to model their effects at larger scales. This approach is very general in physics and engineering; one of the best examples of its success is certainly statistical physics, which allows one to recover classical thermodynamics and to determine its limits of applicability. Getting access to small-scale information aims at reducing model uncertainty, but it has a cost: fine-scale models may be more complex than larger-scale models, and their resolution may require the development of specific and possibly expensive methods, numerical simulation techniques, and experiments. For instance, in applications related to nuclear engineering, applying computational fluid dynamics instead of cruder models is a formidable engineering challenge because it requires resorting to high-performance computing. Likewise, in two-phase flow modeling, the techniques of direct numerical simulation, where all interfaces are tracked individually and all turbulence scales are captured, are getting mature enough to be considered for averaged modeling purposes. Resolving small-scale problems is a necessary step in a multi-scale approach, but it is not sufficient. An important modeling challenge is to determine how to treat small-scale data in order to extract information relevant to larger-scale models. For some applications, such as single-phase turbulence or transfers in porous media, this up-scaling approach is known and is now used rather routinely. In two-phase flow modeling, however, the up-scaling approach is not as mature, and specific issues must be addressed that raise fundamental questions. This will be discussed and illustrated. (author)

  4. Cultivation and Differentiation of Encapsulated hMSC-TERT in a Disposable Small-Scale Syringe-Like Fixed Bed Reactor

    DEFF Research Database (Denmark)

    Weber, Christian; Pohl, Sebastian; Pörtner, Ralf

    2007-01-01

    The use of commercially available plastic syringes as disposable small-scale fixed-bed bioreactors is introduced for the cultivation of implantable therapeutic cell systems based on an alginate-encapsulated human mesenchymal stem cell line. The system introduced is fitted with a noninvasive ...

  5. Stereo Vision Guiding for the Autonomous Landing of Fixed-Wing UAVs: A Saliency-Inspired Approach

    Directory of Open Access Journals (Sweden)

    Zhaowei Ma

    2016-03-01

    Full Text Available Landing safely on the runway is an important requirement for unmanned aerial vehicles (UAVs). This paper concentrates on stereo vision localization for the autonomous landing of a fixed-wing UAV within global navigation satellite system (GNSS)-denied environments. A ground stereo vision guidance system imitating the human visual system (HVS) is presented for the autonomous landing of fixed-wing UAVs. A saliency-inspired algorithm is developed to detect flying UAV targets in captured sequential images. Furthermore, an extended Kalman filter (EKF)-based state estimation is employed to reduce localization errors caused by measurement errors in object detection and pan-tilt unit (PTU) attitudes. Finally, experiments based on stereo vision datasets are conducted to verify the effectiveness of the proposed visual detection method and error correction algorithm. The comparison between the visual guidance approach and a differential GPS-based approach indicates that the stereo vision system and detection method achieve the better guiding effect.

  6. Decentralized control of large-scale systems: Fixed modes, sensitivity and parametric robustness. Ph.D. Thesis - Universite Paul Sabatier, 1985

    Science.gov (United States)

    Tarras, A.

    1987-01-01

    The problem of stabilization/pole placement under structural constraints in large-scale linear systems is discussed. The existence of a solution to this problem is expressed in terms of fixed modes. The aim is to provide a bibliographic survey of available results concerning fixed modes (characterization, elimination, control structure selection to avoid them, control design in their absence) and to present the author's contribution to this problem, which can be summarized as: the use of the mode sensitivity concept to detect or avoid fixed modes, the use of vibrational control to stabilize them, and the addition of parametric robustness considerations to design an optimal decentralized robust control.

  7. Structured ecosystem-scale approach to marine water quality management

    CSIR Research Space (South Africa)

    Taljaard, Susan

    2006-10-01

    Full Text Available and implement environmental management programmes. A structured ecosystem-scale approach for the design and implementation of marine water quality management programmes developed by the CSIR (South Africa) in response to recent advances in policies...

  8. A Scale-up Approach for Film Coating Process Based on Surface Roughness as the Critical Quality Attribute.

    Science.gov (United States)

    Yoshino, Hiroyuki; Hara, Yuko; Dohi, Masafumi; Yamashita, Kazunari; Hakomori, Tadashi; Kimura, Shin-Ichiro; Iwao, Yasunori; Itai, Shigeru

    2018-04-01

    Scale-up approaches for the film coating process have been established over several decades for each type of film coating equipment, based on thermodynamic and mechanical analyses. The objective of the present study was to establish a versatile scale-up approach for the film coating process, applicable to commercial production, that is based on a critical quality attribute (CQA) in the sense of the Quality by Design (QbD) framework and is independent of the equipment used. Pilot-scale experiments following the Design of Experiments (DoE) approach were performed to identify a suitable CQA among surface roughness, contact angle, color difference, and coating film properties measured by terahertz spectroscopy. Surface roughness was determined to be a suitable CQA from a quantitative appearance evaluation. With surface roughness fixed as the CQA, the water content of the film-coated tablets was determined to be the critical material attribute (CMA), a parameter that depends on neither scale nor equipment. Finally, to verify the scale-up approach derived at the pilot scale, experiments at commercial scale were performed. The good correlation between surface roughness (CQA) and water content (CMA) identified at the pilot scale was retained at the commercial scale, indicating that our proposed method should be useful as a scale-up approach for the film coating process.

  9. Combining fixed effects and instrumental variable approaches for estimating the effect of psychosocial job quality on mental health: evidence from 13 waves of a nationally representative cohort study.

    Science.gov (United States)

    Milner, Allison; Aitken, Zoe; Kavanagh, Anne; LaMontagne, Anthony D; Pega, Frank; Petrie, Dennis

    2017-06-23

    Previous studies suggest that poor psychosocial job quality is a risk factor for mental health problems, but they use conventional regression analytic methods that cannot rule out reverse causation, unmeasured time-invariant confounding and reporting bias. This study combines two quasi-experimental approaches to improve causal inference by better accounting for these biases: (i) linear fixed effects regression analysis and (ii) linear instrumental variable analysis. We extract 13 annual waves of national cohort data including 13 260 working-age (18-64 years) employees. The exposure variable is self-reported level of psychosocial job quality. The instruments used are two common workplace entitlements. The outcome variable is the Mental Health Inventory (MHI-5). We adjust for measured time-varying confounders. In the fixed effects regression analysis adjusted for time-varying confounders, a 1-point increase in psychosocial job quality is associated with a 1.28-point improvement in mental health on the MHI-5 scale (95% CI: 1.17, 1.40). In the instrumental variable analysis, a 1-point increase in psychosocial job quality is related to a 1.62-point improvement on the MHI-5 scale (95% CI: -0.24, 3.48; P = 0.088). Our quasi-experimental results provide evidence confirming job stressors as risk factors for mental ill health, using methods that improve causal inference. © The Author 2017. Published by Oxford University Press on behalf of Faculty of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
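The fixed-effects step amounts to the "within" transformation: demean each person's exposure and outcome across waves, then regress, so time-invariant confounders drop out. A minimal numpy sketch on simulated panel data (the 1.28-point effect and 13 waves echo the abstract, but the data-generating process is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_id, n_wave = 200, 13                       # persons x waves (illustrative)
ids = np.repeat(np.arange(n_id), n_wave)
alpha = rng.normal(size=n_id)[ids]           # time-invariant confounder
x = 0.5 * alpha + rng.normal(size=ids.size)  # exposure correlated with it
y = 1.28 * x + alpha + 0.1 * rng.normal(size=ids.size)

def demean(v, ids):
    """Subtract each person's mean from their observations."""
    means = np.bincount(ids, weights=v) / np.bincount(ids)
    return v - means[ids]

# Within (fixed-effects) estimator: demean per person, then OLS slope.
xd, yd = demean(x, ids), demean(y, ids)
beta_fe = (xd @ yd) / (xd @ xd)
beta_ols = (x @ y) / (x @ x)   # naive pooled OLS, biased by the confounder
print(round(beta_fe, 2))
```

The pooled OLS slope is biased upward by the confounder, while the within estimator recovers the true coefficient; the instrumental-variable step in the abstract additionally addresses time-varying confounding and reporting bias.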

  10. Modeling carbon sequestration in afforestation, agroforestry and forest management projects: the CO2FIX V.2 approach

    NARCIS (Netherlands)

    Masera, O.R.; Garza-Caligaris, J.F.; Kanninen, M.; Karjalainen, T.; Liski, J.; Nabuurs, G.J.; Pussinen, A.; Jong de, B.H.J.; Mohren, G.M.J.

    2003-01-01

    The paper describes Version 2 of the CO2FIX (CO2FIX V.2) model, a user-friendly tool for dynamically estimating the carbon sequestration potential of forest management, agroforestry and afforestation projects. CO2FIX V.2 is a multi-cohort ecosystem-level model based on carbon accounting of forest

  11. Scaling laws for trace impurity confinement: a variational approach

    International Nuclear Information System (INIS)

    Thyagaraja, A.; Haas, F.A.

    1990-01-01

    A variational approach is outlined for the deduction of impurity confinement scaling laws. Given the forms of the diffusive and convective components to the impurity particle flux, we present a variational principle for the impurity confinement time in terms of the diffusion time scale and the convection parameter, which is a non-dimensional measure of the size of the convective flux relative to the diffusive flux. These results are very general and apply irrespective of whether the transport fluxes are of theoretical or empirical origin. The impurity confinement time scales exponentially with the convection parameter in cases of practical interest. (orig.)

  12. Empirical likelihood-based confidence intervals for the sensitivity of a continuous-scale diagnostic test at a fixed level of specificity.

    Science.gov (United States)

    Gengsheng Qin; Davis, Angela E; Jing, Bing-Yi

    2011-06-01

    For a continuous-scale diagnostic test, it is often of interest to find the range of the sensitivity of the test at the cut-off that yields a desired specificity. In this article, we first define a profile empirical likelihood ratio for the sensitivity of a continuous-scale diagnostic test and show that its limiting distribution is a scaled chi-square distribution. We then propose two new empirical likelihood-based confidence intervals for the sensitivity of the test at a fixed level of specificity by using the scaled chi-square distribution. Simulation studies are conducted to compare the finite sample performance of the newly proposed intervals with the existing intervals for the sensitivity in terms of coverage probability. A real example is used to illustrate the application of the recommended methods.
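Behind the intervals sits a simple point estimate: pick the cut-off that yields the desired specificity on the healthy scores, then read off the sensitivity on the diseased scores. A minimal sketch with invented toy data (the empirical-likelihood machinery for the confidence interval itself is not reproduced here):

```python
# Toy data: test scores for healthy and diseased subjects (illustrative).
healthy = sorted([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
diseased = [0.7, 0.9, 1.1, 1.3, 1.5, 1.7, 1.9, 2.1]

spec = 0.9                               # desired specificity
k = int(spec * len(healthy)) - 1         # index of the empirical quantile
cutoff = healthy[k]                      # 90% of healthy fall at/below this
sensitivity = sum(d > cutoff for d in diseased) / len(diseased)
print(cutoff, sensitivity)  # 0.9 0.75
```

Because the cut-off is itself estimated from the healthy sample, the sensitivity estimate carries extra variability; that is exactly why the article derives the scaled chi-square limiting distribution for the profile empirical likelihood ratio rather than using a naive binomial interval.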

  13. Polymer density functional theory approach based on scaling second-order direct correlation function.

    Science.gov (United States)

    Zhou, Shiqi

    2006-06-01

    A second-order direct correlation function (DCF) obtained by solving the polymer-RISM integral equation is scaled up or down by an equation of state for the bulk polymer; the resultant scaled second-order DCF is in better agreement with the corresponding simulation results than the unscaled one. When the scaled second-order DCF is imported into a recently proposed LTDFA-based polymer DFT approach, an originally adjustable but mathematically meaningless parameter becomes mathematically meaningful: its numerical value now lies between 0 and 1. When the adjustable-parameter-free version of the LTDFA is used instead, i.e., the adjustable parameter is fixed at 0.5, the resultant parameter-free version of the scaled-LTDFA-based polymer DFT is also in good agreement with the corresponding simulation data for density profiles. The parameter-free version of the scaled-LTDFA-based polymer DFT is employed to investigate the density profiles of a freely jointed tangent hard-sphere chain near a variable-sized central hard sphere; again the predictions accurately reproduce the simulation results. The importance of the present adjustable-parameter-free version lies in its combination with a recently proposed universal theoretical way; in the resultant formalism, the contact theorem is still met by the adjustable parameter associated with the theoretical way.

  14. A structured ecosystem-scale approach to marine water quality ...

    African Journals Online (AJOL)

    These, in turn, created the need for holistic and integrated frameworks within which to design and implement environmental management programmes. A structured ecosystem-scale approach for the design and implementation of marine water quality management programmes developed by the CSIR (South Africa) in ...

  15. Multiple-scale approach for the expansion scaling of superfluid quantum gases

    International Nuclear Information System (INIS)

    Egusquiza, I. L.; Valle Basagoiti, M. A.; Modugno, M.

    2011-01-01

    We present a general method, based on a multiple-scale approach, for deriving the perturbative solutions of the scaling equations governing the expansion of superfluid ultracold quantum gases released from elongated harmonic traps. We discuss how to treat the secular terms appearing in the usual naive expansion in the trap asymmetry parameter ε and calculate the next-to-leading correction for the asymptotic aspect ratio, with significant improvement over the previous proposals.

  16. Eco-Approach and Departure System for Left-Turn Vehicles at a Fixed-Time Signalized Intersection

    Directory of Open Access Journals (Sweden)

    Huifu Jiang

    2018-01-01

    Full Text Available This research proposes an eco-approach and departure system for left-turn vehicles at a fixed-time signalized intersection. The system gives higher priority to enhancing traffic safety than to improving mobility and fuel efficiency, and optimizes the entire traffic stream, consisting of connected and automated vehicles (CAVs) and conventional human-driven vehicles, by providing ecological speed trajectories for left-turn CAVs. All ecological speed trajectories are optimized offline before the system is implemented. The speed trajectory optimization is formulated within the structure of Pontryagin's Minimum Principle. The before-and-after evaluation of the proposed system shows that the percentage of vehicles that pass the intersection at a safe speed increases by 2.14% to 45.65%; fuel consumption benefits range from 0.53% to 18.44%; emission benefits range from 0.57% to 15.69%; and no significant throughput benefits are observed. The proposed system significantly enhances traffic safety and improves the fuel efficiency and emissions of left-turn vehicles with no adverse effect on mobility, and it is robust against the randomness of traffic. The investigation also indicates that the computation time of the proposed system is greatly reduced compared to previous eco-driving systems with online speed optimization: the computation time is at most 0.01 s. The proposed system is therefore ready for real-time application.
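The reason offline optimization works here is that the signal is fixed-time: the green windows are known in advance, so an approach plan can be precomputed per distance/phase state and looked up at run time. The sketch below is a deliberately simplified constant-speed stand-in for the paper's Pontryagin-based trajectory optimization; the cycle, green window, and speed limits are invented numbers.

```python
# Constant-speed eco-approach planner for a fixed-time signal (illustrative).
def arrival_speed(distance, t_in_cycle, cycle=60.0, green=(0.0, 25.0),
                  v_min=3.0, v_max=15.0):
    """Return a constant speed (m/s) that reaches the stop line during green."""
    for n in range(5):                       # try the next few signal cycles
        g_start = green[0] + n * cycle - t_in_cycle
        g_end = green[1] + n * cycle - t_in_cycle
        if g_end <= 0:                       # this green has already passed
            continue
        earliest = max(g_start, 0.0)
        v_hi = distance / earliest if earliest > 0 else v_max
        v_lo = distance / g_end              # slowest speed still in window
        v = min(v_max, v_hi)
        if v >= max(v_min, v_lo):
            return v
    return v_min                             # fall back to crawling

# 200 m from the stop line, 50 s into the cycle -> cruise to the next green.
print(arrival_speed(200.0, 50.0))  # 15.0
```

A table of such plans, indexed by distance and cycle time, makes run-time guidance a constant-time lookup, which is consistent with the sub-0.01 s computation time reported in the abstract.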

  17. A Comparison of Deterministic and Stochastic Modeling Approaches for Biochemical Reaction Systems: On Fixed Points, Means, and Modes.

    Science.gov (United States)

    Hahl, Sayuri K; Kremling, Andreas

    2016-01-01

    In the mathematical modeling of biochemical reactions, a convenient standard approach is to use ordinary differential equations (ODEs) that follow the law of mass action. However, this deterministic ansatz is based on simplifications; in particular, it neglects noise, which is inherent to biological processes. In contrast, the stochasticity of reactions is captured in detail by the discrete chemical master equation (CME). Therefore, the CME is frequently applied to mesoscopic systems, where copy numbers of involved components are small and random fluctuations are thus significant. Here, we compare those two common modeling approaches, aiming at identifying parallels and discrepancies between deterministic variables and possible stochastic counterparts like the mean or modes of the state space probability distribution. To that end, a mathematically flexible reaction scheme of autoregulatory gene expression is translated into the corresponding ODE and CME formulations. We show that in the thermodynamic limit, deterministic stable fixed points usually correspond well to the modes in the stationary probability distribution. However, this connection might be disrupted in small systems. The discrepancies are characterized and systematically traced back to the magnitude of the stoichiometric coefficients and to the presence of nonlinear reactions. These factors are found to synergistically promote large and highly asymmetric fluctuations. As a consequence, bistable but unimodal, and monostable but bimodal systems can emerge. This clearly challenges the role of ODE modeling in the description of cellular signaling and regulation, where some of the involved components usually occur in low copy numbers. Nevertheless, systems whose bimodality originates from deterministic bistability are found to sustain a more robust separation of the two states compared to bimodal, but monostable systems. In regulatory circuits that require precise coordination, ODE modeling is thus still
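The fixed-point/mode correspondence described above can be seen in the simplest possible case, a linear birth-death process (production at rate k, first-order degradation at rate g), where the two descriptions agree. This toy case, chosen for illustration rather than the paper's autoregulatory scheme, shows the objects being compared:

```python
import math

# Linear birth-death process: production rate k, degradation rate g.
k, g = 7.5, 1.0

# Deterministic ODE: dx/dt = k - g*x has the stable fixed point x* = k/g.
ode_fixed_point = k / g

# CME: the stationary distribution is Poisson(k/g); locate its mode.
lam = k / g
pmf = [math.exp(-lam) * lam**n / math.factorial(n) for n in range(50)]
cme_mode = max(range(50), key=lambda n: pmf[n])
print(ode_fixed_point, cme_mode)  # 7.5 7
```

Here the mode sits next to the fixed point, as the thermodynamic-limit result predicts; the abstract's point is that nonlinear reactions and large stoichiometric coefficients can break this correspondence in small systems, producing bistable-but-unimodal or monostable-but-bimodal behavior.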

  18. New Approach in Filling of Fixed-Point Cells: Case Study of the Melting Point of Gallium

    Science.gov (United States)

    Bojkovski, J.; Hiti, M.; Batagelj, V.; Drnovšek, J.

    2008-02-01

    The typical way of constructing fixed-point cells is very well described in the literature. The crucible is loaded with shot, or any other shape of pure metal, inside an argon-filled glove box. Then, the crucible is carefully slid into a fused-silica tube that is closed at the top with an appropriate cap. After that, the cell is removed from the argon glove box and melted inside a furnace while under vacuum or filled with an inert gas like argon. Since the metal comes as shot, or in some other shape such as rods of various sizes, and takes more volume than the melted material, it is necessary to repeat the procedure until a sufficient amount of material is introduced into the crucible. With such a procedure, there is the possibility of introducing additional impurities into the pure metal with each cycle of melting the material and putting it back into the glove box to fill the cell. Our new approach includes the use of a special, so-called dry-box system, which is well known in chemistry. The atmosphere inside the dry box contains less than 20 ppm of water and less than 3 ppm of oxygen. Also, the size of the dry box allows it to contain a furnace for melting materials, not only for gallium but for higher-temperature materials as well. With such an approach, the cell and all its parts (pure metal, graphite, fused-silica tube, and cap) are constantly inside the controlled atmosphere, even while melting the material and filling the crucible. With such a method, the possibility of contaminating the cell during the filling process is minimized.

  19. Effect of small-scale biomass gasification at the state of refractory lining the fixed bed reactor

    Energy Technology Data Exchange (ETDEWEB)

    Janša, Jan, E-mail: jan.jansa@vsb.cz; Peer, Vaclav, E-mail: vaclav.peer@vsb.cz; Pavloková, Petra, E-mail: petra.pavlokova@vsb.cz [VŠB – Technical University of Ostrava, Energy Research Center, 708 33 Ostrava (Czech Republic)

    2016-06-30

    The article deals with the influence of biomass gasification on the condition of the refractory lining of a fixed-bed reactor. The refractory lining of the gasifier is one part of the device that significantly affects operational reliability and durability. After the refractory lining was removed from the experimental reactor, an assessment was made of how the gasification of different kinds of biomass affected its condition in terms of the main factors influencing its life. Biomass gasification affects the lining especially through material sticking at the bottom of the reactor. Measures for prolonging the life of the lining consist in reducing the reactor temperature, in this case in order to avoid fusion of the biomass ash, which is difficult to achieve for this type of gasifier.

  20. Fixed Points

    Indian Academy of Sciences (India)

    Fixed Points - From Russia with Love - A Primer of Fixed Point Theory. A K Vijaykumar. Book Review, Resonance – Journal of Science Education, Volume 5, Issue 5, May 2000, pp. 101-102.

  1. Culture-independent molecular approaches reveal a mostly unknown high diversity of active nitrogen-fixing bacteria associated with Pennisetum purpureum—a bioenergy crop

    NARCIS (Netherlands)

    Videira, Sandy Sampaio; de Cássia Pereira e Silva, Michele; Galisa, Pericles de Souza; Franco Dias, Armando Cavalcante; Nissinen, Riitta; Baldani Divan, Vera Lucia; van Elsas, Jan Dirk; Baldani, Jose Ivo; Salles, Joana Falcao

    2013-01-01

    Previous studies have shown that elephant grass is colonized by nitrogen-fixing bacterial species; however, these results were based on culture-dependent methods, an approach that introduces bias due to an incomplete assessment of the microbial community. In this study, we used culture-independent

  2. An application of the Health Action Process Approach model to oral hygiene behaviour and dental plaque in adolescents with fixed orthodontic appliances

    NARCIS (Netherlands)

    Scheerman, J.F.M.; Empelen, P. van; Loveren, C. van; Pakpour, A.H.; Meijel, B. van; Gholami, M.; Mierzaie, Z.; Braak, M.C.T. van; Verrips, G.H.W.

    2017-01-01

    Background. The Health Action Process Approach (HAPA) model addresses health behaviours, but it has never been applied to model adolescents’ oral hygiene behaviour during fixed orthodontic treatment. Aim. This study aimed to apply the HAPA model to explain adolescents’ oral hygiene behaviour and

  3. Using Multisite Experiments to Study Cross-Site Variation in Treatment Effects: A Hybrid Approach with Fixed Intercepts and A Random Treatment Coefficient

    Science.gov (United States)

    Bloom, Howard S.; Raudenbush, Stephen W.; Weiss, Michael J.; Porter, Kristin

    2017-01-01

    The present article considers a fundamental question in evaluation research: "By how much do program effects vary across sites?" The article first presents a theoretical model of cross-site impact variation and a related estimation model with a random treatment coefficient and fixed site-specific intercepts. This approach eliminates…
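    The hybrid estimation model sketched in this abstract can be written out in equation form. The notation below (individual i, site j, fixed site-specific intercepts, a random treatment coefficient) is an assumed reading of the abstract, not the authors' exact specification.

```latex
% Outcome for individual i at site j, with T_{ij} the treatment indicator:
%   \alpha_j : fixed site-specific intercept
%   B_j      : site-specific treatment effect, random across sites
Y_{ij} = \alpha_j + B_j\,T_{ij} + e_{ij},
\qquad B_j = \beta + b_j,\quad b_j \sim N(0,\sigma_B^2),
\qquad e_{ij} \sim N(0,\sigma_e^2)
```

    On this reading, \(\beta\) is the average program effect and \(\sigma_B^2\) is the cross-site impact variation the article asks about.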

  4. A unified monolithic approach for multi-fluid flows and fluid-structure interaction using the Particle Finite Element Method with fixed mesh

    Science.gov (United States)

    Becker, P.; Idelsohn, S. R.; Oñate, E.

    2015-06-01

    This paper describes a strategy to solve multi-fluid and fluid-structure interaction (FSI) problems using Lagrangian particles combined with a fixed finite element (FE) mesh. Our approach is an extension of the fluid-only PFEM-2 (Idelsohn et al., Eng Comput 30(2):2-2, 2013; Idelsohn et al., J Numer Methods Fluids, 2014) which uses explicit integration over the streamlines to improve accuracy. As a result, the convective term does not appear in the set of equations solved on the fixed mesh. Enrichments in the pressure field are used to improve the description of the interface between phases.

  5. Fix 40!

    Index Scriptorium Estoniae

    2008-01-01

    The band Fix celebrates its 40th anniversary on 13 December at Saku Suurhall in Tallinn. The concert's special guest is the band Apelsin, with Jassi Zahharov and the HaleBopp Singers also taking part. The evening is hosted by Tarmo Leinatamm.

  6. Receptivity to Kinetic Fluctuations: A Multiple Scales Approach

    Science.gov (United States)

    Edwards, Luke; Tumin, Anatoli

    2017-11-01

    The receptivity of high-speed compressible boundary layers to kinetic fluctuations (KF) is considered within the framework of fluctuating hydrodynamics. The formulation is based on the idea that KF-induced dissipative fluxes may lead to the generation of unstable modes in the boundary layer. Fedorov and Tumin solved the receptivity problem using an asymptotic matching approach which utilized a resonant inner solution in the vicinity of the generation point of the second Mack mode. Here we take a slightly more general approach based on a multiple scales WKB ansatz which requires fewer assumptions about the behavior of the stability spectrum. The approach is modeled after the one taken by Luchini to study low speed incompressible boundary layers over a swept wing. The new framework is used to study examples of high-enthalpy, flat plate boundary layers whose spectra exhibit nuanced behavior near the generation point, such as first mode instabilities and near-neutral evolution over moderate length scales. The configurations considered exhibit supersonic unstable second Mack modes despite the temperature ratio Tw/Te > 1, contrary to prior expectations. Supported by AFOSR and ONR.

  7. Two-scale approach to oscillatory singularly perturbed transport equations

    CERN Document Server

    Frénod, Emmanuel

    2017-01-01

    This book presents the classical results of the two-scale convergence theory and explains – using several figures – why it works. It then shows how to use this theory to homogenize ordinary differential equations with oscillating coefficients as well as oscillatory singularly perturbed ordinary differential equations. In addition, it explores the homogenization of hyperbolic partial differential equations with oscillating coefficients and linear oscillatory singularly perturbed hyperbolic partial differential equations. Further, it introduces readers to the two-scale numerical methods that can be built from the previous approaches to solve oscillatory singularly perturbed transport equations (ODE and hyperbolic PDE) and demonstrates how they can be used efficiently. This book appeals to master’s and PhD students interested in homogenization and numerics, as well as to the ITER community.
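    For orientation, the two-scale convergence notion the book builds on can be stated compactly. This is the standard textbook formulation for period-1 oscillations, given here as background rather than as a quote from the book.

```latex
% u^eps two-scale converges to U(t,\tau), 1-periodic in \tau, if for every
% smooth test function \psi(t,\tau) that is 1-periodic in \tau:
\lim_{\varepsilon \to 0} \int_0^T u^{\varepsilon}(t)\,
  \psi\!\left(t,\tfrac{t}{\varepsilon}\right) dt
  = \int_0^T \!\!\int_0^1 U(t,\tau)\,\psi(t,\tau)\, d\tau\, dt
```

    The homogenized problem is then obtained by averaging the two-scale limit \(U\) over the fast variable \(\tau\).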

  8. Functional Single-Cell Approach to Probing Nitrogen-Fixing Bacteria in Soil Communities by Resonance Raman Spectroscopy with 15N2 Labeling.

    Science.gov (United States)

    Cui, Li; Yang, Kai; Li, Hong-Zhe; Zhang, Han; Su, Jian-Qiang; Paraskevaidi, Maria; Martin, Francis L; Ren, Bin; Zhu, Yong-Guan

    2018-04-17

    Nitrogen (N) fixation is the conversion of inert nitrogen gas (N2) to bioavailable N essential for all forms of life. N2-fixing microorganisms (diazotrophs), which play a key role in global N cycling, remain largely obscure because a large majority are uncultured. Direct probing of active diazotrophs in the environment is still a major challenge. Herein, a novel culture-independent single-cell approach combining resonance Raman (RR) spectroscopy with 15N2 stable isotope probing (SIP) was developed to discern N2-fixing bacteria in a complex soil community. Strong RR signals of cytochrome c (Cyt c, frequently present in diverse N2-fixing bacteria), along with a marked 15N2-induced Cyt c band shift, generated a highly distinguishable biomarker for N2 fixation. The 15N2-induced shift was consistent with the cellular 15N abundance determined by isotope ratio mass spectrometry. By applying this biomarker and Raman imaging, N2-fixing bacteria in both artificial and complex soil communities were discerned and imaged at the single-cell level. The linear band shift of Cyt c versus 15N2 percentage allowed quantification of the extent of N2 fixation by diverse soil bacteria. This single-cell approach will advance the exploration of hitherto uncultured diazotrophs in diverse ecosystems.
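    The quantification step described here, reading off the extent of N2 fixation from the linear Cyt c band shift, amounts to a linear calibration. A minimal sketch, with made-up calibration points rather than the paper's measured values:

```python
import numpy as np

def fixation_from_shift(shift, calib_shifts, calib_percent):
    """Fit a line to reference (band shift, 15N2 percentage) pairs,
    then invert it for an unknown cell's measured shift.
    The calibration data passed in are illustrative, not the paper's."""
    slope, intercept = np.polyfit(calib_shifts, calib_percent, 1)
    return float(slope * shift + intercept)
```

    With two reference points the fit is exact; with more, `polyfit` gives the least-squares calibration line.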

  9. Examining Similarity Structure: Multidimensional Scaling and Related Approaches in Neuroimaging

    Directory of Open Access Journals (Sweden)

    Svetlana V. Shinkareva

    2013-01-01

    This paper covers similarity analyses, a subset of multivariate pattern analysis techniques that are based on similarity spaces defined by multivariate patterns. These techniques offer several advantages and complement other methods for brain data analyses, as they allow for comparison of representational structure across individuals, brain regions, and data acquisition methods. Particular attention is paid to multidimensional scaling and related approaches that yield spatial representations or provide methods for characterizing individual differences. We highlight unique contributions of these methods by reviewing recent applications to functional magnetic resonance imaging data and emphasize areas of caution in applying and interpreting similarity analysis methods.

  10. Statistical distance and the approach to KNO scaling

    International Nuclear Information System (INIS)

    Diosi, L.; Hegyi, S.; Krasznovszky, S.

    1990-05-01

    A new method is proposed for characterizing the approach to KNO scaling. The essence of our method lies in the concept of statistical distance between nearby KNO distributions which reflects their distinguishability in spite of multiplicity fluctuations. It is shown that the geometry induced by the distance function defines a natural metric on the parameter space of a certain family of KNO distributions. Some examples are given in which the energy dependences of distinguishability of neighbouring KNO distributions are compared in nondiffractive hadron-hadron collisions and electron-positron annihilation. (author) 19 refs.; 4 figs
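    The distinguishability idea in this abstract can be illustrated with the Bhattacharyya angle, a common realisation of "statistical distance" between discrete probability distributions; whether it coincides with the exact metric used by the authors is an assumption of this sketch.

```python
import numpy as np

def statistical_distance(p, q):
    """Bhattacharyya angle between two discrete (e.g. multiplicity)
    distributions: d = arccos(sum_n sqrt(p_n * q_n)).
    It is 0 for identical distributions and pi/2 for
    non-overlapping (perfectly distinguishable) ones."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    bc = np.sum(np.sqrt(p * q))  # Bhattacharyya coefficient in [0, 1]
    return float(np.arccos(np.clip(bc, 0.0, 1.0)))
```

    Tracking this distance between multiplicity distributions at neighbouring energies gives one concrete way to quantify "approach to KNO scaling": the distance shrinks as the scaled distributions become indistinguishable.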

  11. Fast Laplace solver approach to pore-scale permeability

    Science.gov (United States)

    Arns, C. H.; Adler, P. M.

    2018-02-01

    We introduce a powerful and easily implemented method to calculate the permeability of porous media at the pore scale using an approximation based on the Poiseuille equation to calculate permeability to fluid flow with a Laplace solver. The method consists of calculating the Euclidean distance map of the fluid phase to assign local conductivities and lends itself naturally to the treatment of multiscale problems. We compare with analytical solutions as well as experimental measurements and lattice Boltzmann calculations of permeability for Fontainebleau sandstone. The solver is significantly more stable than the lattice Boltzmann approach, uses less memory, and is significantly faster. Permeabilities are in excellent agreement over a wide range of porosities.
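    The pipeline described here (Euclidean distance map → local conductivities → Laplace solve → flux) can be sketched in a few lines. This toy 2D version uses an arbitrary d² conductivity scaling and a plain Jacobi iteration, so it illustrates the idea rather than reproducing the authors' calibrated solver.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def local_conductivity(pore):
    """Distance-map step: each fluid voxel (pore == True) gets a
    conductivity growing with its distance to the solid; d**2 is an
    illustrative Poiseuille-like scaling, not the paper's choice."""
    return distance_transform_edt(pore) ** 2

def solve_pressure(k, dp=1.0, n_iter=5000):
    """Jacobi relaxation of div(k grad p) = 0 with fixed pressures
    p = dp on the left face and p = 0 on the right face."""
    ny, nx = k.shape
    kp = np.pad(k, 1, mode="edge")
    # face conductivities (arithmetic means; harmonic means would be stricter)
    wN = 0.5 * (kp[1:-1, 1:-1] + kp[:-2, 1:-1])
    wS = 0.5 * (kp[1:-1, 1:-1] + kp[2:, 1:-1])
    wW = 0.5 * (kp[1:-1, 1:-1] + kp[1:-1, :-2])
    wE = 0.5 * (kp[1:-1, 1:-1] + kp[1:-1, 2:])
    den = wN + wS + wW + wE
    p = np.tile(np.linspace(dp, 0.0, nx), (ny, 1))  # linear initial guess
    for _ in range(n_iter):
        pp = np.pad(p, 1, mode="edge")
        num = (wN * pp[:-2, 1:-1] + wS * pp[2:, 1:-1]
               + wW * pp[1:-1, :-2] + wE * pp[1:-1, 2:])
        p = np.where(den > 0, num / np.maximum(den, 1e-12), p)
        p[:, 0], p[:, -1] = dp, 0.0  # re-impose pressure boundaries
    return p

def effective_permeability(k, p, dp=1.0):
    """Darcy estimate from the flux through the first column of faces."""
    q = np.sum(0.5 * (k[:, 0] + k[:, 1]) * (p[:, 0] - p[:, 1]))
    ny, nx = k.shape
    return q * (nx - 1) / (ny * dp)
```

    On a uniform (fully open) domain the linear pressure profile is already the solution and the effective permeability equals the local conductivity, which makes a convenient sanity check.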

  12. Parametric Approach in Designing Large-Scale Urban Architectural Objects

    Directory of Open Access Journals (Sweden)

    Arne Riekstiņš

    2011-04-01

    When all the disciplines of various science fields converge and develop, new approaches to contemporary architecture arise. The author looks at approaching digital architecture from a parametric viewpoint, revealing its generative capacity, which originates from the aeronautical, naval, automobile and product-design industries. The author also goes explicitly through his design-cycle workflow for testing the latest methodologies in architectural design. The design process steps involved: extrapolating valuable statistical data about the site into three-dimensional diagrams, defining the materiality of what is being produced, ways of presenting structural skin and structure simultaneously, contacting the object with the ground, defining the interior program of the building with floors and possible spaces, the logic of fabrication, and CNC milling of the prototype. The tool developed by the author and reviewed in this article features enormous performative capacity and is applicable to various architectural design scales. Article in English.

  13. Giant monopole transition densities within the local scale ATDHF approach

    International Nuclear Information System (INIS)

    Dimitrova, S.S.; Petkov, I.Zh.; Stoitsov, M.V.

    1986-01-01

    Transition densities for the 12C, 16O, 28Si, 32S, 40Ca, 48Ca, 56Ni, 90Zr and 208Pb even-even nuclei corresponding to nuclear giant monopole resonances are obtained within a local-scale adiabatic time-dependent Hartree-Fock approach in terms of the effective Skyrme-type forces SkM and S3. The approach, the particular form of these transition densities and all necessary coefficients are reported. The transition densities have a simple analytical form and may be used directly, for example, in distorted-wave analyses of inelastic particle scattering on nuclei, thereby allowing a test of the theoretical interpretation of giant monopole resonances

  14. Scaled-up electrochemical reactor with a fixed bed three-dimensional cathode for electro-Fenton process: Application to the treatment of bisphenol A

    International Nuclear Information System (INIS)

    Chmayssem, Ayman; Taha, Samir; Hauchard, Didier

    2017-01-01

    In this study, we report on the development of an open undivided electrochemical reactor with a compact fixed bed of glassy carbon pellets as a three-dimensional cathode for the application of the electro-Fenton process. Bisphenol A (BPA) was chosen as a model molecule in order to assess the efficiency of the reactor for the treatment of persistent pollutants. The BPA removal efficiency was studied as a function of the applied current intensity in order to determine the limiting current of O2 reduction (optimal conditions of H2O2 production at a flow rate of 0.36 m³·h⁻¹), which was 0.8 A (0.5 A per 100 g of glassy carbon pellets). Several aspects were investigated with this electro-Fenton reactor, namely the degradation kinetics, the influence of the anodic reactions on the DSA, and the effect of the initial pollutant concentration. Under the optimal current condition, the global production rate of H2O2 and ·OH was determined. The yield of the electro-Fenton reaction (conversion of H2O2 to ·OH) was very high (> 90%). The absolute rate constant of BPA degradation was determined as 4.3 × 10⁹ M⁻¹·s⁻¹. COD, TOC and BOD5 measurements indicated that only a few minutes of treatment by the electro-Fenton process were needed to eliminate BPA from dilute solutions (10 and 25 mg·L⁻¹); in this case, the biodegradability of the treated solutions was reached rapidly. For higher concentration levels, efficient removal of BPA required treatment times longer than 1 hour, and more than 90 minutes were necessary to obtain biodegradability of the BPA solutions. Under optimum conditions, a scale-up of the electrochemical reactor applied to the electro-Fenton process was suggested, depending on the concentration level of the pollutant. The operating parameters of the scaled-up reactor might be deduced from the new section of each fixed bed exposed to the flow, from the values of the liquid flow velocity, and from the corresponding limiting current density obtained with the reactor at laboratory scale. The compact fixed bed

  15. Design, scale-up, Six Sigma in processing different feedstocks in a fixed bed downdraft biomass gasifier

    Science.gov (United States)

    Boravelli, Sai Chandra Teja

    This thesis focuses on the design and process development of a downdraft biomass gasification process. The objective is to develop a gasifier and a gasification process for continuous steady-state operation. A lab-scale downdraft gasifier was designed to develop the process and obtain an optimum operating procedure. Sustainable and dependable sources such as biomass are potential sources of renewable energy, and there is reasonable motivation to use them in developing small-scale energy production plants in countries such as Canada, where wood stocks are a more reliable source than fossil fuels. This thesis addresses the process of thermal conversion of biomass by gasification in a downdraft reactor. Downdraft biomass gasifiers are relatively cheap and easy to operate because of their design. We constructed a simple biomass gasifier to study the steady-state process for different sizes of the reactor. The experimental part of this investigation looks at how operating conditions such as feed rate, air flow, bed length, reactor vibration, and the height and density of the syngas flame in the combustion flare change for different sizes of the reactor. The experimental results also compare the trends of tar, char and syngas production for wood pellets in a steady-state process. This study also covers the gasification process for different wood feedstocks, comparing how the shape, size and moisture content of each feedstock affect the operating conditions of the gasification process. For this, Six Sigma DMAIC techniques were used to analyse and understand how each feedstock impacts the process.

  16. Outcomes Assessment of Treating Completely Edentulous Patients with a Fixed Implant-Supported Profile Prosthesis Utilizing a Graftless Approach. Part 1: Clinically Related Outcomes.

    Science.gov (United States)

    Alzoubi, Fawaz; Bedrossian, Edmond; Wong, Allen; Farrell, Douglas; Park, Chan; Indresano, Thomas

    To assess outcomes of treating completely edentulous patients with a fixed implant-supported profile prosthesis utilizing a graftless approach for the maxilla and for the mandible, with emphasis on clinically related outcomes, specifically implant and prosthesis survival. This was a retrospective study with the following inclusion criteria: completely edentulous patients rehabilitated with a fixed implant-supported profile denture utilizing a graftless approach. Patients fulfilling the inclusion criteria were asked to participate in the study during their follow-up visits, and hence a consecutive sampling strategy was used. Data regarding implant and prosthesis cumulative survival rates (CSRs) were gathered and calculated. Thirty-four patients were identified with a total of 220 implants placed. An overall CSR of 98.2% was recorded with an observation of up to 10 years. For tilted, axial, and zygomatic implants, CSRs of 96.9%, 98.0%, and 100%, respectively, were observed for up to 10 years. For provisional prostheses, CSRs of 92.3% at 1 year, and 84.6% at 2 years were observed. For final prostheses, a CSR of 93.8% was observed at 10 years. The results suggest that treating completely edentulous patients with a fixed profile prosthesis utilizing a graftless approach in the maxilla and the mandible can be a reliable treatment option.

  17. Phenomenology of scaled factorial moments and future approaches for correlation studies

    International Nuclear Information System (INIS)

    Seibert, D.

    1991-01-01

    We show that the definitions of the exclusive and inclusive scaled factorial moments are not equivalent, and propose the use of scaled factorial moments that reduce to the exclusive moments in the case of fixed multiplicity. We then present a new derivation of the multiplicity scaling law for scaled factorial moment data. This scaling law seems to hold, independent of collision energy, for events with fixed projectile and target. However, deviations from this scaling law indicate that correlations in S-Au collisions are 30 times as strong as correlations in hadronic collisions. Finally, we discuss 'split-bin' correlation functions, the most useful tool for future investigations of these anomalously strong hadronic correlations. (orig.)
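    For orientation, the inclusive scaled factorial moments discussed here have the textbook form F_q = ⟨n(n−1)⋯(n−q+1)⟩ / ⟨n⟩^q; a direct sample estimate is a few lines of code. This sketch implements only the inclusive definition; the paper's point is precisely that the exclusive variant differs unless the multiplicity is fixed.

```python
import numpy as np

def scaled_factorial_moment(counts, q):
    """Inclusive scaled factorial moment of order q from a sample of
    bin multiplicities (one entry per event):
    F_q = <n(n-1)...(n-q+1)> / <n>**q."""
    n = np.asarray(counts, dtype=float)
    fact = np.ones_like(n)
    for k in range(q):
        fact *= (n - k)  # falling factorial n(n-1)...(n-q+1)
    return fact.mean() / n.mean() ** q
```

    A Poisson multiplicity distribution gives F_q = 1 for all q, while a fixed multiplicity n0 gives F_2 = 1 − 1/n0, which is the discrepancy between the inclusive and exclusive definitions that the abstract highlights.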

  18. Globalisation and national trends in nutrition and health: A grouped fixed-effects approach to intercountry heterogeneity.

    Science.gov (United States)

    Oberlander, Lisa; Disdier, Anne-Célia; Etilé, Fabrice

    2017-09-01

    Using a panel dataset of 70 countries spanning 42 years (1970-2011), we investigate the distinct effects of social globalisation and trade openness on national trends in markers of diet quality (supplies of animal proteins, free fats and sugar, average body mass index, and diabetes prevalence). Our key methodological contribution is the application of a grouped fixed-effects estimator, which extends linear fixed-effects models. The grouped fixed-effects estimator partitions our sample into distinct groups of countries in order to control for time-varying unobserved heterogeneity that follows a group-specific pattern. We find that increasing social globalisation has a significant impact on the supplies of animal protein and sugar available for human consumption, as well as on mean body mass index. Specific components of social globalisation such as information flows (via television and the Internet) drive these results. Trade openness has no effect on dietary outcomes or health. These findings suggest that the social and cultural aspects of globalisation should receive greater attention in research on the nutrition transition. Copyright © 2017 John Wiley & Sons, Ltd.
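    The grouped fixed-effects idea, partitioning countries into groups that share a time-varying unobserved pattern, can be caricatured as a k-means-style alternation between group assignment and group time profiles. The sketch below drops covariates and unit effects entirely, so it is a toy version of the estimator's inner loop, not the authors' specification.

```python
import numpy as np

def grouped_fixed_effects(Y, G, n_iter=50, seed=0):
    """Toy grouped fixed-effects step: each unit (row of Y, a
    T-period outcome series) is assigned to one of G groups, and each
    group gets its own unrestricted time profile. Alternate between
    least-squares assignment and profile updates until stable."""
    rng = np.random.default_rng(seed)
    N, T = Y.shape
    z = rng.integers(0, G, size=N)  # initial group labels
    for _ in range(n_iter):
        # group time profiles = within-group means; reseed empty groups
        profiles = np.vstack([
            Y[z == g].mean(axis=0) if np.any(z == g) else Y[rng.integers(N)]
            for g in range(G)])
        # reassign each unit to the group whose profile fits it best
        dist = ((Y[:, None, :] - profiles[None, :, :]) ** 2).sum(axis=2)
        z_new = dist.argmin(axis=1)
        if np.array_equal(z_new, z):
            break
        z = z_new
    return z, profiles
```

    In the paper's setting the grouped time patterns enter a regression alongside the globalisation and trade covariates; here the alternation is shown in isolation to make the "time-varying heterogeneity with a group-specific pattern" idea concrete.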

  19. Floating-to-Fixed-Point Conversion for Digital Signal Processors

    Directory of Open Access Journals (Sweden)

    Menard Daniel

    2006-01-01

    Digital signal processing applications are specified with floating-point data types but they are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Thus, methodologies which establish automatically the fixed-point specification are required to reduce the application time-to-market. In this paper, a new methodology for the floating-to-fixed point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared to previous methodologies, our approach takes into account the DSP architecture to optimise the fixed-point formats and the floating-to-fixed-point conversion process is coupled with the code generation process. The fixed-point data types and the position of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used to reduce the optimisation time compared to the existing methods based on simulation. The methodology stages are described and several experiment results are presented to underline the efficiency of this approach.

  20. Floating-to-Fixed-Point Conversion for Digital Signal Processors

    Science.gov (United States)

    Menard, Daniel; Chillet, Daniel; Sentieys, Olivier

    2006-12-01

    Digital signal processing applications are specified with floating-point data types but they are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Thus, methodologies which establish automatically the fixed-point specification are required to reduce the application time-to-market. In this paper, a new methodology for the floating-to-fixed point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared to previous methodologies, our approach takes into account the DSP architecture to optimise the fixed-point formats and the floating-to-fixed-point conversion process is coupled with the code generation process. The fixed-point data types and the position of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used to reduce the optimisation time compared to the existing methods based on simulation. The methodology stages are described and several experiment results are presented to underline the efficiency of this approach.
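    The core conversion step such methodologies automate, choosing a Q-format for a known dynamic range and quantising values into it, can be illustrated as follows. The 16-bit word length and the saturation policy are illustrative choices, not the paper's.

```python
import math

def to_fixed(x, frac_bits, word_bits=16):
    """Quantise a float to a two's-complement fixed-point integer in
    Qm.n format (n = frac_bits, m = word_bits - 1 - frac_bits),
    saturating instead of wrapping on overflow."""
    scale = 1 << frac_bits
    v = int(round(x * scale))
    lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
    return max(lo, min(hi, v))

def to_float(v, frac_bits):
    """Interpret a fixed-point integer back as a float."""
    return v / (1 << frac_bits)

def frac_bits_for_range(max_abs, word_bits=16):
    """Range-determination step: spend just enough integer bits to
    cover |x| <= max_abs and give the remaining bits (minus sign)
    to the fraction."""
    int_bits = math.ceil(math.log2(max_abs)) if max_abs >= 1 else 0
    return word_bits - 1 - int_bits
```

    The accuracy constraint in the paper corresponds to bounding the quantisation error, here at most 2^(−frac_bits−1) per rounding; the execution-time side (scaling-operation placement, DSP data paths) is what their methodology optimises on top of this basic arithmetic.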

  1. Scaling up biomass gasifier use: an application-specific approach

    International Nuclear Information System (INIS)

    Ghosh, Debyani; Sagar, Ambuj D.; Kishore, V.V.N.

    2006-01-01

    Biomass energy accounts for about 11% of the global primary energy supply, and it is estimated that about 2 billion people worldwide depend on biomass for their energy needs. Yet, most of the use of biomass is in a primitive and inefficient manner, primarily in developing countries, leading to a host of adverse implications for human health, environment, workplace conditions, and social well-being. Therefore, the utilization of biomass in a clean and efficient manner to deliver modern energy services to the world's poor remains an imperative for the development community. One possible approach to do this is through the use of biomass gasifiers. Although significant efforts have been directed towards developing and deploying biomass gasifiers in many countries, scaling up their dissemination remains an elusive goal. Based on an examination of biomass gasifier development, demonstration, and deployment efforts in India, a country with more than two decades of experience in biomass gasifier development and dissemination, this article identifies a number of barriers that have hindered widespread deployment of biomass gasifier-based energy systems. It also suggests a possible approach for moving forward, which involves a focus on specific application areas that satisfy a set of criteria that are critical to deployment of biomass gasifiers, and then tailoring the scaling-up strategy to the characteristics of the user groups for that application. Our technical, financial, economic and institutional analysis suggests that an initial focus on four categories of applications (small and medium enterprises, the informal sector, biomass-processing industries, and some rural areas) may be particularly feasible and fruitful.

  2. A convex optimization approach for solving large scale linear systems

    Directory of Open Access Journals (Sweden)

    Debora Cores

    2017-01-01

    The well-known Conjugate Gradient (CG) method minimizes a strictly convex quadratic function for solving a large-scale linear system of equations when the coefficient matrix is symmetric and positive definite. In this work we present and analyze a non-quadratic convex function for solving any large-scale linear system of equations, regardless of the characteristics of the coefficient matrix. For finding the global minimizers of this new convex function, any low-cost iterative optimization technique could be applied. In particular, we propose to use the low-cost, globally convergent Spectral Projected Gradient (SPG) method, which allows us to extend this optimization approach to solving consistent square and rectangular linear systems, as well as linear feasibility problems, with and without convex constraints and with and without preconditioning strategies. Our numerical results indicate that the new scheme outperforms state-of-the-art iterative techniques for solving linear systems when the symmetric part of the coefficient matrix is indefinite, and also for solving linear feasibility problems.
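    To make the optimisation view concrete, the sketch below runs a Barzilai-Borwein (spectral) gradient iteration, the unconstrained core of SPG, on the familiar least-squares surrogate f(x) = ½‖Ax − b‖². The paper's own convex function is different and non-quadratic; least squares is used here only to keep the illustration short and testable.

```python
import numpy as np

def spectral_gradient_solve(A, b, max_iter=500, tol=1e-10):
    """Barzilai-Borwein (spectral) gradient iteration on the convex
    surrogate f(x) = 0.5 * ||A x - b||**2, whose minimizer solves
    A x = b for any consistent system."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - b)  # gradient of the surrogate
    alpha = 1.0            # initial step length
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = A.T @ (A @ x_new - b)
        s, y = x_new - x, g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 0 else 1.0  # BB1 spectral step
        x, g = x_new, g_new
    return x
```

    The full SPG method adds a projection onto the feasible set and a nonmonotone line search; the spectral step above is the ingredient that makes it cheap per iteration.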

  3. Bridging the PSI Knowledge Gap: A Multi-Scale Approach

    Energy Technology Data Exchange (ETDEWEB)

    Wirth, Brian D. [Univ. of Tennessee, Knoxville, TN (United States)

    2015-01-08

    Plasma-surface interactions (PSI) pose an immense scientific hurdle in magnetic confinement fusion and our present understanding of PSI in confinement environments is highly inadequate; indeed, a recent Fusion Energy Sciences Advisory Committee report found that 4 of the top 5 fusion knowledge gaps were related to PSI. The time is appropriate to develop a concentrated and synergistic science effort that would expand, exploit and integrate the wealth of laboratory ion-beam and plasma research, as well as exciting new computational tools, towards the goal of bridging the PSI knowledge gap. This effort would broadly advance plasma and material sciences, while providing critical knowledge towards progress in fusion PSI. This project involves the development of a Science Center focused on a new approach to PSI science; an approach that both exploits access to state-of-the-art PSI experiments and modeling, as well as confinement devices. The organizing principle is to develop synergistic experimental and modeling tools that treat the truly coupled multi-scale aspect of the PSI issues in confinement devices. This is motivated by the simple observation that while typical lab experiments and models allow independent manipulation of controlling variables, the confinement PSI environment is essentially self-determined with few outside controls. This means that processes that may be treated independently in laboratory experiments, because they involve vastly different physical and time scales, will now affect one another in the confinement environment. Also, lab experiments cannot simultaneously match all exposure conditions found in confinement devices, typically forcing a linear extrapolation of lab results. At the same time, programmatic limitations prevent confinement experiments alone from answering many key PSI questions. The resolution to this problem is to usefully exploit access to PSI science in lab devices, while retooling our thinking from a linear and de

  4. Biodiversity conservation in agriculture requires a multi-scale approach.

    Science.gov (United States)

    Gonthier, David J; Ennis, Katherine K; Farinas, Serge; Hsieh, Hsun-Yi; Iverson, Aaron L; Batáry, Péter; Rudolphi, Jörgen; Tscharntke, Teja; Cardinale, Bradley J; Perfecto, Ivette

    2014-09-22

    Biodiversity loss--one of the most prominent forms of modern environmental change--has been heavily driven by terrestrial habitat loss and, in particular, the spread and intensification of agriculture. Expanding agricultural land-use has led to the search for strong conservation strategies, with some suggesting that biodiversity conservation in agriculture is best maximized by reducing local management intensity, such as fertilizer and pesticide application. Others highlight the importance of landscape-level approaches that incorporate natural or semi-natural areas in landscapes surrounding farms. Here, we show that both of these practices are valuable to the conservation of biodiversity, and that either local or landscape factors can be most crucial to conservation planning depending on which types of organisms one wishes to save. We performed a quantitative review of 266 observations taken from 31 studies that compared the impacts of localized (within farm) management strategies and landscape complexity (around farms) on the richness and abundance of plant, invertebrate and vertebrate species in agro-ecosystems. While both factors significantly impacted species richness, the richness of sessile plants increased with less-intensive local management, but did not significantly respond to landscape complexity. By contrast, the richness of mobile vertebrates increased with landscape complexity, but did not significantly increase with less-intensive local management. Invertebrate richness and abundance responded to both factors. Our analyses point to clear differences in how various groups of organisms respond to differing scales of management, and suggest that preservation of multiple taxonomic groups will require multiple scales of conservation. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  5. A fixed-dose approach to conducting emamectin benzoate tolerance assessments on field-collected sea lice, Lepeophtheirus salmonis.

    Science.gov (United States)

    Whyte, S K; Westcott, J D; Elmoslemany, A; Hammell, K L; Revie, C W

    2013-03-01

    In New Brunswick, Canada, the sea louse, Lepeophtheirus salmonis, poses an on-going management challenge to the health and productivity of commercially cultured Atlantic salmon, Salmo salar. While the in-feed medication emamectin benzoate (SLICE®; Merck) has been highly effective for many years, evidence of increased tolerance has been observed in the field since late 2008. Although bioassays on motile stages are a common tool to monitor sea lice sensitivity to emamectin benzoate in field-collected sea lice, they require the collection of large numbers of sea lice due to inherent natural variability in the gender and stage response to chemotherapeutants. In addition, sensitive instruments such as EC50 analysis may be unnecessarily complex to characterize susceptibility subsequent to a significant observed decline in efficacy. This study proposes an adaptation of the traditional, dose-response format bioassay to a fixed-dose method. Analysis of 657 bioassays on preadult and adult stages of sea lice over the period 2008-2011 indicated a population of sea lice in New Brunswick with varying degrees of susceptibility to emamectin benzoate. A seasonal and spatial effect was observed in the robustness of genders and stages of sea lice, which suggests that mixing different genders and stages of lice within a single bioassay may result in pertinent information being overlooked. Poor survival of adult female lice in bioassays, particularly during May/June, indicates it may be prudent to consider excluding this stage from bioassays conducted at certain times of the year. This work demonstrates that fixed-dose bioassays can be a valuable technique for detecting reduced sensitivity in sea lice populations with varying degrees of susceptibility to emamectin benzoate treatments. © 2013 Blackwell Publishing Ltd.

  6. A CAD/CAM Zirconium Bar as a Bonded Mandibular Fixed Retainer: A Novel Approach with Two-Year Follow-Up

    Science.gov (United States)

    Hassan, Rozita; Hanoun, Abdul Fatah

    2017-01-01

    Stainless steel alloys containing 8% to 12% nickel and 17% to 22% chromium are generally used in orthodontic appliances. A major concern has been the performance of these alloys in the environment in which they are intended to function, the oral cavity. Biodegradation and metal release increase the risk of hypersensitivity and cytotoxicity. This case report describes, for the first time, a CAD/CAM zirconium bar used as a bonded mandibular fixed retainer, with 2-year follow-up, in a patient who had undergone long-term treatment with a fixed orthodontic appliance and was suspected of having metal hypersensitivity, as indicated by considerably increased nickel and chromium concentrations in a sample of the patient's unstimulated saliva. The CAD/CAM design consisted of a 1.8 mm thick bar on the lingual surface of the lower teeth from canine to canine, with occlusal rests on the mesial side of the first premolars. For better retention, a thin layer of feldspathic ceramic was added to the inner surface of the bar, which was cemented with two dual-cured cement types. The patient's complaint subsided 6 weeks after cementation. Clinical evaluation indicated good functional value: the marginal fit of the digitized CAD/CAM design and the glazed surface offered an enhanced approach to fixed retention. PMID:28819572

  7. Fixed, low radiant exposure vs. incremental radiant exposure approach for diode laser hair reduction: a randomized, split axilla, comparative single-blinded trial.

    Science.gov (United States)

    Pavlović, M D; Adamič, M; Nenadić, D

    2015-12-01

    Diode lasers are the most commonly used treatment modality for unwanted hair reduction. Only a few controlled clinical trials, and not a single randomized controlled trial (RCT), have compared the impact of various laser parameters, especially radiant exposure, on the efficacy, tolerability and safety of laser hair reduction. To compare the safety, tolerability and mid-term efficacy of fixed, low and incremental radiant exposures of diode lasers (800 nm) for axillary hair removal, we conducted an intrapatient, left-to-right, patient- and assessor-blinded, controlled trial. Diode laser (800 nm) treatments were evaluated in 39 study participants (skin types II-III) with unwanted axillary hair. Randomization and allocation to split axilla treatments were carried out by a web-based randomization tool. Six treatments were performed at 4- to 6-week intervals with study subjects blinded to the type of treatment. Final assessment of hair reduction was conducted 6 months after the last treatment by means of a blinded 4-point clinical scale using photographs. The primary endpoint was reduction in hair growth; secondary endpoints were patient-rated tolerability and satisfaction with the treatment, treatment-related pain and adverse effects. Excellent reduction in axillary hair (≥ 76%) at the 6-month follow-up visit was obtained in 59% and 67% of study participants after fixed, low and incremental radiant exposure diode laser treatments, respectively (Z value: 1.342, P = 0.180). Patients reported a lower visual analogue scale (VAS) pain score on the fixed (4.26) than on the incremental radiant exposure side (5.64) (P < …); fixed, low radiant exposure diode laser treatments were thus less painful and better tolerated. © 2015 European Academy of Dermatology and Venereology.

  8. A comparison of approaches for simultaneous inference of fixed effects for multiple outcomes using linear mixed models

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2018-01-01

    Longitudinal studies with multiple outcomes often pose challenges for the statistical analysis. A joint model including all outcomes has the advantage of capturing their simultaneous behavior but is often difficult to fit due to computational challenges. We consider two alternative approaches to the joint modelling strategy: pairwise fitting and separate marginal models. For small sample sizes, pairwise fitting shows a larger loss in efficiency than the marginal models approach. Using an alternative to the joint modelling strategy will thus lead to some, but not necessarily a large, loss of efficiency for small sample sizes.
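A heavily simplified sketch of the marginal-models idea: each outcome is fitted separately and the fixed (treatment) effects are then inspected jointly. Ordinary least squares stands in for the linear mixed models of the paper, the simultaneous-inference step (a joint robust covariance of the stacked estimates) is omitted, and all data below are invented:

```python
def ols_slope(x, y):
    """Slope and intercept of simple least-squares regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return b, my - b * mx

# Treatment indicator and two outcomes measured on the same subjects.
treat = [0, 0, 0, 0, 1, 1, 1, 1]
y1 = [1.0, 1.2, 0.9, 1.1, 2.1, 1.9, 2.2, 2.0]   # outcome 1
y2 = [3.0, 3.1, 2.9, 3.0, 4.0, 4.2, 3.9, 4.1]   # outcome 2

# Marginal-models approach: one fit per outcome, fixed effects collected.
effects = [ols_slope(treat, y)[0] for y in (y1, y2)]
```

In the real setting each marginal fit would be a mixed model with subject-level random effects, and the collected estimates would share a sandwich-type covariance for simultaneous testing.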

  9. Neurological Complications after Lateral Transpsoas Approach to Anterior Interbody Fusion with a Novel Flat-Blade Spine-Fixed Retractor

    Directory of Open Access Journals (Sweden)

    Pierce Nunley

    2016-01-01

    Introduction. The lateral lumbar interbody fusion (LLIF) surgical approach has potential advantages over other approaches but is associated with some unique neurologic risks due to the proximity of the lumbosacral plexus. The present study analyzed complications following the LLIF surgical approach using a novel single flat-blade retractor system. Methods. Data were collected retrospectively for patients receiving LLIF using a novel single flat-blade retractor system at two institutions in the US. The inclusion criterion was all patients receiving an LLIF procedure with the RAVINE® Lateral Access System (K2M, Inc., Leesburg, VA, USA). There was no restriction on preoperative diagnosis or number of levels treated. Approach-related neurologic complications were collected and analyzed postoperatively through a minimum of one year. Results. The analysis included 253 patients with one to four treated lateral levels. Immediate postoperative neurologic complications were present in 11.1% (28/253) of patients. At one-year follow-up the approach-related neurologic complications had resolved in all except 5 patients (2.0%). Conclusion. We observed an 11.1% neurologic complication rate in LLIF procedures, with resolution of symptoms in most patients by 12-month follow-up and only 2% of patients having residual symptoms. This supports the hypothesis that the vast majority of approach-related neurologic symptoms are transient.

  10. Noise pollution mapping approach and accuracy on landscape scales.

    Science.gov (United States)

    Iglesias Merchan, Carlos; Diaz-Balteiro, Luis

    2013-04-01

    Noise mapping allows the characterization of environmental variables, such as noise pollution or soundscape, depending on the task. Strategic noise mapping (per Directive 2002/49/EC) is a tool intended for the assessment of noise pollution at the European level every five years. These maps are based on common methods and procedures intended for human exposure assessment in the European Union and could also be adapted for assessing environmental noise pollution in natural parks. However, given the size of such areas, an alternative approach to soundscape characterization may be preferable to human noise exposure procedures. It is possible to optimize the size of the mapping grid by taking into account the attributes of the area to be studied and the desired outcome, which in turn optimizes mapping time and cost. This type of optimization is important in noise assessment as well as in the study of other environmental variables. This study compares 15 models, using different grid sizes, to assess the accuracy of noise mapping of road traffic noise at a landscape scale, with respect to noise and landscape indicators. In a study area located in the Manzanares High River Basin Regional Park in Spain, different accuracy levels (Kappa index values from 0.725 to 0.987) were obtained depending on the terrain and noise source properties. The calculation times and the noise mapping accuracy results reveal the potential for setting the map resolution in line with decision-makers' criteria and budget considerations. Copyright © 2013 Elsevier B.V. All rights reserved.
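The accuracy comparison rests on the Kappa index, i.e. Cohen's kappa between the classification produced by a candidate grid and a reference classification, corrected for chance agreement. A self-contained sketch (the cell labels are hypothetical):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa between two equally long label sequences:
    observed agreement minus chance agreement, rescaled."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n   # observed
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[k] * cb.get(k, 0) for k in ca) / n**2        # by chance
    return (po - pe) / (1 - pe)

# Noise classes ("q"uiet / "l"oud) of grid cells from a fine reference
# map vs. a coarser model grid (invented cells for illustration).
reference = ["q", "q", "q", "l", "l", "q", "q", "l", "q", "q"]
coarse    = ["q", "q", "l", "l", "l", "q", "q", "l", "q", "q"]
kappa = cohens_kappa(reference, coarse)
```

Values near 1 indicate that the coarser grid reproduces the reference classification almost perfectly, which is how the study's 0.725-0.987 range should be read.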

  11. Thermodynamic modeling of small scale biomass gasifiers: Development and assessment of the ''Multi-Box'' approach.

    Science.gov (United States)

    Vakalis, Stergios; Patuzzi, Francesco; Baratieri, Marco

    2016-04-01

    Modeling can be a powerful tool for designing and optimizing gasification systems. Modeling of small scale/fixed bed biomass gasifiers is of particular interest given their growing commercial deployment. Fixed bed gasifiers operate over a wide range of conditions and are multi-zoned processes. The reactants are distributed across different phases, and the products from each zone influence the subsequent process steps and thus the composition of the final products. The present study aims to improve conventional "Black-Box" thermodynamic modeling by developing multiple intermediate "boxes" that calculate two-phase (solid-vapor) equilibria in small scale gasifiers; the model is therefore named "Multi-Box". Experimental data from a small scale gasifier were used for validation of the model. The returned results are significantly closer to the actual case study measurements than those of single-stage thermodynamic modeling. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. A new fixed-target approach for serial crystallography at synchrotron light sources and X-ray free electron lasers

    Energy Technology Data Exchange (ETDEWEB)

    Roedig, Philip

    2017-07-15

    In the framework of this thesis, a new method for high-speed fixed-target serial crystallography experiments and its applicability to biomacromolecular crystallography at both synchrotron light sources and X-ray free electron lasers (XFELs) is presented. The method is based on a sample holder made of single-crystalline silicon which can carry up to 20,000 microcrystals. Using synchrotron radiation, the structures of Operophtera brumata cytoplasmic polyhedrosis virus type 18 polyhedrin, lysozyme and cubic insulin were determined by collecting X-ray diffraction data from multiple microcrystals. Data collection was shown to be possible at both cryogenic and ambient conditions. For room-temperature measurements, both global and specific indications of radiation damage were investigated and characterized. Due to the sieve-like structure of the chip, the microcrystals tend to arrange themselves according to the micropore pattern, which allows for efficient sampling of the sample material. In combination with a high-speed scanning stage, the sample holder was furthermore shown to be highly suitable for serial femtosecond crystallography experiments. By fast raster scanning of the chip through the pulsed X-ray beam of an XFEL, structure determination of a virus, using the example of bovine enterovirus type 2, was demonstrated at an XFEL for the first time. Hit rates of up to 100% were obtained with the presented method, corresponding to a reduction in sample consumption by at least three orders of magnitude with respect to common liquid-jet injection methods used for sample delivery. In this way, the typical time needed for data collection in serial femtosecond crystallography is significantly decreased. The presented technique for loading the sample onto the chip is easy to learn and results in efficient removal of the surrounding mother liquor, thereby reducing the generated background signal. Since the chip is made of single-crystalline silicon, in principle no significant background scattering arises from the sample holder itself.

  13. A new fixed-target approach for serial crystallography at synchrotron light sources and X-ray free electron lasers

    International Nuclear Information System (INIS)

    Roedig, Philip

    2017-07-01

    In the framework of this thesis, a new method for high-speed fixed-target serial crystallography experiments and its applicability to biomacromolecular crystallography at both synchrotron light sources and X-ray free electron lasers (XFELs) is presented. The method is based on a sample holder made of single-crystalline silicon which can carry up to 20,000 microcrystals. Using synchrotron radiation, the structures of Operophtera brumata cytoplasmic polyhedrosis virus type 18 polyhedrin, lysozyme and cubic insulin were determined by collecting X-ray diffraction data from multiple microcrystals. Data collection was shown to be possible at both cryogenic and ambient conditions. For room-temperature measurements, both global and specific indications of radiation damage were investigated and characterized. Due to the sieve-like structure of the chip, the microcrystals tend to arrange themselves according to the micropore pattern, which allows for efficient sampling of the sample material. In combination with a high-speed scanning stage, the sample holder was furthermore shown to be highly suitable for serial femtosecond crystallography experiments. By fast raster scanning of the chip through the pulsed X-ray beam of an XFEL, structure determination of a virus, using the example of bovine enterovirus type 2, was demonstrated at an XFEL for the first time. Hit rates of up to 100% were obtained with the presented method, corresponding to a reduction in sample consumption by at least three orders of magnitude with respect to common liquid-jet injection methods used for sample delivery. In this way, the typical time needed for data collection in serial femtosecond crystallography is significantly decreased. The presented technique for loading the sample onto the chip is easy to learn and results in efficient removal of the surrounding mother liquor, thereby reducing the generated background signal. Since the chip is made of single-crystalline silicon, in principle no significant background scattering arises from the sample holder itself.

  14. On approach to double asymptotic scaling at low x

    International Nuclear Information System (INIS)

    Choudhury, D.K.

    1994-10-01

    We obtain the finite-x corrections to the gluon structure function which exhibits double asymptotic scaling at low x. The technique used is the GLAP equation for the gluon, approximated at low x by a Taylor expansion. (author). 27 refs
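For orientation, the double-asymptotic-scaling behavior referred to here is conventionally written (a standard double-leading-logarithm result recalled from the literature, not stated in this abstract) as growth of the gluon in the two scaling variables built from ln(1/x) and the running coupling:

```latex
\xi = \ln\frac{x_0}{x}, \qquad
\zeta = \ln\frac{\ln(Q^2/\Lambda^2)}{\ln(Q_0^2/\Lambda^2)}, \qquad
x\,g(x,Q^2) \;\sim\; \exp\!\left(2\gamma\sqrt{\xi\,\zeta}\right),
\qquad \gamma = \sqrt{\frac{12}{\beta_0}}, \quad
\beta_0 = 11 - \tfrac{2}{3}\,n_f .
```

The "finite-x corrections" of the paper modify this leading exponential behavior away from the strict low-x limit.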

  15. A Proteomic Approach of Bradyrhizobium/Aeschynomene Root and Stem Symbioses Reveals the Importance of the fixA Locus for Symbiosis

    Directory of Open Access Journals (Sweden)

    Nathanael Delmotte

    2014-02-01

    Rhizobia are soil bacteria that are able to form symbioses with plant hosts of the legume family. These associations result in the formation of organs, called nodules, in which the bacteria fix atmospheric nitrogen to the benefit of the plant. Most of our knowledge of the metabolism and physiology of the bacteria during symbiosis derives from studying root nodules of terrestrial plants. Here we used a proteomics approach to investigate the bacterial physiology of photosynthetic Bradyrhizobium sp. ORS278 during the symbiotic process with the semiaquatic plant Aeschynomene indica, which forms root and stem nodules. We analyzed the proteomes of bacteria extracted from each type of nodule. First, we analyzed the bacteroid proteome at two different time points and found only minor variation between the bacterial proteomes of 2-week- and 3-week-old nodules. High conservation of the bacteroid proteome was also found when comparing stem nodules and root nodules. Among the stem nodule specific proteins were those related to the phototrophic ability of Bradyrhizobium sp. ORS278. Furthermore, we compared our data with those obtained in an extensive genetic screen published previously. The symbiotic role of four candidate genes, whose corresponding proteins were found to be massively produced in the nodules but which were not identified during that screen, was examined. Mutant analysis suggested that, in addition to the EtfAB system, the fixA locus is required for symbiotic efficiency.

  16. Mechatronic modeling of a 750kW fixed-speed wind energy conversion system using the Bond Graph Approach.

    Science.gov (United States)

    Khaouch, Zakaria; Zekraoui, Mustapha; Bengourram, Jamaa; Kouider, Nourreeddine; Mabrouki, Mustapha

    2016-11-01

    In this paper, we focus on modeling the main parts of a wind turbine (blades, gearbox, tower, generator and pitching system) from a mechatronics viewpoint using the Bond Graph Approach (BGA). These parts are then combined in order to simulate the complete system. Moreover, the real dynamic behavior of the wind turbine is taken into account, and with the new model the final load simulation is more realistic, offering benefits and reliable system performance. This model can be used to develop control algorithms to reduce fatigue loads and enhance power production. Different simulations are carried out in order to validate the proposed wind turbine model, using real data provided in the open literature (blade profile and gearbox parameters for a 750 kW wind turbine). Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
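The bond-graph model itself cannot be reproduced from the abstract, but the kind of drivetrain dynamics such models capture can be sketched as a standard two-mass rotor/gearbox/generator model. All parameter values below are hypothetical, and a semi-implicit Euler loop stands in for a bond-graph solver:

```python
def simulate_two_mass(T_aero, T_gen, J_r=1.0e6, J_g=60.0, k=8.0e7, c=1.0e5,
                      n_gear=80.0, dt=1e-3, steps=5000):
    """Semi-implicit Euler simulation of a two-mass drivetrain: rotor
    inertia J_r and generator inertia J_g coupled through a gearbox
    (ratio n_gear) by a shaft with stiffness k and damping c.
    All torques and the twist are expressed on the low-speed side."""
    w_r = w_g = theta = 0.0   # rotor speed, generator speed, shaft twist
    for _ in range(steps):
        T_shaft = k * theta + c * (w_r - w_g / n_gear)
        w_r += dt * (T_aero - T_shaft) / J_r            # rotor dynamics
        w_g += dt * (T_shaft / n_gear - T_gen) / J_g    # generator side
        theta += dt * (w_r - w_g / n_gear)              # twist rate
    return w_r, w_g, theta

# Constant aerodynamic torque, idling generator (invented operating point).
w_r, w_g, theta = simulate_two_mass(T_aero=3.0e5, T_gen=0.0)
```

Updating the speeds before the twist makes the explicit loop stable for the lightly damped shaft oscillation, which a naive forward-Euler step would slowly amplify.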

  17. Validity of the Neuromuscular Recovery Scale: a measurement model approach.

    Science.gov (United States)

    Velozo, Craig; Moorhouse, Michael; Ardolino, Elizabeth; Lorenz, Doug; Suter, Sarah; Basso, D Michele; Behrman, Andrea L

    2015-08-01

    To determine how well the Neuromuscular Recovery Scale (NRS) items fit the Rasch, 1-parameter, partial-credit measurement model. Confirmatory factor analysis (CFA) and principal components analysis (PCA) of residuals were used to determine dimensionality. The Rasch, 1-parameter, partial-credit rating scale model was used to determine rating scale structure, person/item fit, point-measure item correlations, item discrimination, and measurement precision. Seven NeuroRecovery Network clinical sites. Outpatients (N=188) with spinal cord injury. Not applicable. NRS. While the NRS met 1 of 3 CFA criteria, the PCA revealed that the Rasch measurement dimension explained 76.9% of the variance. Ten of 11 items and 91% of the patients fit the Rasch model, with 9 of 11 items showing high discrimination. Sixty-nine percent of the ratings met criteria. The items showed a logical item-difficulty order, with Stand retraining as the easiest item and Walking as the most challenging item. The NRS showed no ceiling or floor effects and separated the sample into almost 5 statistically distinct strata; individuals with an American Spinal Injury Association Impairment Scale (AIS) D classification showed the most ability, and those with an AIS A classification showed the least ability. Items not meeting the rating scale criteria appear to be related to the low frequency counts. The NRS met many of the Rasch model criteria for construct validity. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
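Under the Rasch partial-credit model used here, the probability that a person of ability theta scores category k on an item is governed by the item's step difficulties. A minimal sketch (the item parameters are hypothetical, not the calibrated NRS values):

```python
import math

def pcm_probs(theta, deltas):
    """Rasch partial-credit model: probabilities of scoring category
    0..m for ability theta on an item with m step difficulties."""
    # Cumulative sums of (theta - delta_k) give the category logits.
    logits = [0.0]
    for d in deltas:
        logits.append(logits[-1] + (theta - d))
    expv = [math.exp(l) for l in logits]
    total = sum(expv)
    return [e / total for e in expv]

# Hypothetical 3-category NRS-style item with step difficulties -0.5, 1.0:
probs = pcm_probs(theta=0.8, deltas=[-0.5, 1.0])
```

With ability between the two step difficulties, the middle category is the most probable, which is the behavior a well-ordered rating scale should show.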

  18. An analytical approach for a nodal formulation of a two-dimensional fixed-source neutron transport problem in heterogeneous medium

    Energy Technology Data Exchange (ETDEWEB)

    Basso Barichello, Liliane; Dias da Cunha, Rudnei [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Inst. de Matematica; Becker Picoloto, Camila [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Tres, Anderson [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Matematica Aplicada

    2015-05-15

    A nodal formulation of a fixed-source two-dimensional neutron transport problem, in Cartesian geometry, defined in a heterogeneous medium, is solved by an analytical approach. Explicit expressions, in terms of the spatial variables, are derived for averaged fluxes in each region in which the domain is subdivided. The procedure is an extension of an analytical discrete ordinates method, the ADO method, for the solution of the two-dimensional homogeneous medium case. The scheme is developed from the discrete ordinates version of the two-dimensional transport equation along with the level symmetric quadrature scheme. As usual for nodal schemes, relations between the averaged fluxes and the unknown angular fluxes at the contours are introduced as auxiliary equations. Numerical results are in agreement with results available in the literature.
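The two-dimensional nodal ADO formulation is beyond a short sketch, but the discrete-ordinates ingredients it builds on (an angular quadrature, transport sweeps, iteration on the scattering source) can be illustrated with a 1-D fixed-source slab analogue. Cross sections, geometry, and the diamond-difference discretization below are illustrative choices, not the paper's analytical scheme:

```python
def slab_source_iteration(sigma_t=1.0, sigma_s=0.5, q=1.0, length=4.0,
                          n_cells=100, tol=1e-8, max_iters=500):
    """Source iteration for the 1-D fixed-source transport equation in a
    homogeneous slab: isotropic scattering, vacuum boundaries, diamond
    differencing in space, 2-point Gauss quadrature in angle."""
    dx = length / n_cells
    mus = (-0.5773502691896257, 0.5773502691896257)  # Gauss nodes on [-1, 1]
    wts = (1.0, 1.0)                                 # weights (sum = 2)
    phi = [0.0] * n_cells                            # scalar flux guess
    for _ in range(max_iters):
        src = [0.5 * (sigma_s * p + q) for p in phi]     # isotropic source
        new = [0.0] * n_cells
        for mu, w in zip(mus, wts):
            order = range(n_cells) if mu > 0 else range(n_cells - 1, -1, -1)
            psi_in = 0.0                                 # vacuum boundary
            for i in order:
                a = 2.0 * abs(mu) / dx
                psi_c = (src[i] + a * psi_in) / (sigma_t + a)  # cell average
                psi_in = 2.0 * psi_c - psi_in            # diamond relation
                new[i] += w * psi_c
        if max(abs(x - y) for x, y in zip(new, phi)) < tol:
            phi = new
            break
        phi = new
    return phi

flux = slab_source_iteration()
```

The converged flux is symmetric about the slab midplane and everywhere below the infinite-medium value q/(sigma_t - sigma_s), which serves as a quick sanity check.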

  19. Maxillary and mandibular immediately loaded implant-supported interim complete fixed dental prostheses on immediately placed dental implants with a digital approach: A clinical report.

    Science.gov (United States)

    Lewis, Ryan C; Harris, Bryan T; Sarno, Robert; Morton, Dean; Llop, Daniel R; Lin, Wei-Shao

    2015-09-01

    This clinical report describes the treatment of maxillary and mandibular immediate implant placement and immediately loaded implant-supported interim complete fixed dental prostheses with a contemporary digital approach. The virtual diagnostic tooth arrangement eliminated the need for a customized radiographic template, and the diagnostic data collection required for computer-guided surgery (digital diagnostic impressions, digital photographs, and a cone beam-computed tomography [CBCT] scan) was completed in a single visit with improved workflow efficiency. Computer-aided design and computer-aided manufacturing (CAD/CAM)-fabricated surgical templates and interim prosthesis templates were made in a dental laboratory to facilitate computer-guided surgery and the immediate loading process. Copyright © 2015 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  20. Interdisciplinary approach to enhance the esthetics of maxillary anterior region using soft- and hard-tissue ridge augmentation in conjunction with a fixed partial prosthesis.

    Science.gov (United States)

    Khetarpal, Shaleen; Chouksey, Ajay; Bele, Anand; Vishnoi, Rahul

    2018-01-01

    Favorable esthetics is one of the most important treatment outcomes in dentistry, and to achieve it, interdisciplinary approaches are often required. Ridge deficiencies can involve both soft- and hard-tissue discrepancies. To overcome such defects, not only are a variety of prosthetic options at our disposal, but several periodontal plastic surgical techniques are available as well. Various techniques for correcting ridge defects have been described and revised over the years. For enhancing soft-tissue contours in the anterior region, the subepithelial connective tissue graft is the treatment of choice. A combination of alloplastic bone graft in conjunction with connective tissue graft optimizes ridge augmentation and minimizes defects. The present case report describes the use of a vascular interpositional connective tissue graft in combination with an alloplastic bone graft for correction of a Seibert's Class III ridge deficiency, followed by a fixed partial prosthesis, to achieve a better esthetic outcome.

  1. Interdisciplinary approach to enhance the esthetics of maxillary anterior region using soft- and hard-tissue ridge augmentation in conjunction with a fixed partial prosthesis

    Directory of Open Access Journals (Sweden)

    Shaleen Khetarpal

    2018-01-01

    Favorable esthetics is one of the most important treatment outcomes in dentistry, and to achieve it, interdisciplinary approaches are often required. Ridge deficiencies can involve both soft- and hard-tissue discrepancies. To overcome such defects, not only are a variety of prosthetic options at our disposal, but several periodontal plastic surgical techniques are available as well. Various techniques for correcting ridge defects have been described and revised over the years. For enhancing soft-tissue contours in the anterior region, the subepithelial connective tissue graft is the treatment of choice. A combination of alloplastic bone graft in conjunction with connective tissue graft optimizes ridge augmentation and minimizes defects. The present case report describes the use of a vascular interpositional connective tissue graft in combination with an alloplastic bone graft for correction of a Seibert's Class III ridge deficiency, followed by a fixed partial prosthesis, to achieve a better esthetic outcome.

  2. Intravital imaging of the immune responses during liver-stage malaria infection: An improved approach for fixing the liver.

    Science.gov (United States)

    Akbari, Masoud; Kimura, Kazumi; Houts, James T; Yui, Katsuyuki

    2016-10-01

    The host-parasite relationship is one of the main themes of modern parasitology. Recent advances, including the development of various fluorescent proteins/probes and two-photon microscopy, have made it possible to directly visualize and study the mechanisms underlying the interaction between host and pathogen. Here, we describe our method of preparing and setting up the liver for our experimental approach of using intravital imaging to examine the interaction between Plasmodium berghei ANKA and antigen-specific CD8+ T cells during the liver stage of the infection in four dimensions. Since the liver is positioned near the diaphragm, neutralization of respiratory movements is critical during the imaging process. In addition, blood circulation and temperature can be affected by the surgical exposure due to the anatomy and tissue structure of the liver. To control respiration, we recommend anesthesia with 1% isoflurane inhalation during the surgery. In addition, our protocol introduces a cushion of gauze around the liver to avoid external pressure on the liver during intravital imaging using an inverted microscope, which makes it possible to image the liver tissue for long periods with minimal reduction in blood circulation and minimal displacement and tissue damage. The key point of this method is to reduce respiratory movements and external pressure on the liver tissue during intravital imaging. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  3. Effectiveness of fixed-site high-frequency transcutaneous electrical nerve stimulation in chronic pain: a large-scale, observational study

    Science.gov (United States)

    Kong, Xuan; Gozani, Shai N

    2018-01-01

    Objective The objective of this study was to assess the effectiveness of fixed-site high-frequency transcutaneous electrical nerve stimulation (FS-TENS) in a real-world chronic pain sample. Background There is a need for nonpharmacological treatment options for chronic pain. FS-TENS improved multisite chronic pain in a previous interventional study. Large observational studies are needed to further characterize its effectiveness. Methods This retrospective observational cohort study examined changes in chronic pain measures following 60 days of FS-TENS use. The study data were obtained from FS-TENS users who uploaded their device utilization and clinical data to an online database. The primary outcome measures were changes in pain intensity and pain interference with sleep, activity, and mood on an 11-point numerical rating scale. Dose–response associations were evaluated by stratifying subjects into low (≤30 days), intermediate (31–56 days), and high (≥57 days) utilization subgroups. FS-TENS effectiveness was quantified by baseline to follow-up group differences and a responder analysis (≥30% improvement in pain intensity or ≥2-point improvement in pain interference domains). Results Utilization and clinical data were collected from 11,900 people using FS-TENS for chronic pain, with 713 device users meeting the inclusion and exclusion criteria. Study subjects were generally older, overweight adults. Subjects reported multisite pain with a mean of 4.8 (standard deviation [SD] 2.5) pain sites. A total of 97.2% of subjects identified low back and/or lower extremity pain, and 72.9% of subjects reported upper body pain. All pain measures exhibited statistically significant group differences from baseline to 60-day follow-up. The largest changes were pain interference with activity (−0.99±2.69 points) and mood (−1.02±2.78 points). A total of 48.7% of subjects exhibited a clinically meaningful reduction in pain interference with activity or mood.
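The study's responder criterion (≥30% improvement in pain intensity OR ≥2-point improvement in a pain-interference domain) is easy to make concrete; the subject data below are invented for illustration:

```python
def is_responder(baseline, follow_up):
    """Responder per the study's criteria: >=30% reduction in pain
    intensity OR >=2-point improvement in any pain-interference domain
    (sleep, activity, mood), all on 0-10 numerical rating scales."""
    b, f = baseline, follow_up
    if b["intensity"] > 0 and (b["intensity"] - f["intensity"]) / b["intensity"] >= 0.30:
        return True
    return any(b[k] - f[k] >= 2 for k in ("sleep", "activity", "mood"))

# (baseline, follow-up) pairs for three hypothetical subjects:
subjects = [
    ({"intensity": 6, "sleep": 5, "activity": 7, "mood": 4},
     {"intensity": 4, "sleep": 5, "activity": 6, "mood": 4}),  # 33% intensity drop
    ({"intensity": 5, "sleep": 6, "activity": 6, "mood": 6},
     {"intensity": 5, "sleep": 3, "activity": 6, "mood": 6}),  # sleep improves 3 pts
    ({"intensity": 5, "sleep": 5, "activity": 5, "mood": 5},
     {"intensity": 5, "sleep": 5, "activity": 4, "mood": 5}),  # below both thresholds
]
responder_rate = sum(is_responder(b, f) for b, f in subjects) / len(subjects)
```

Applied to the study cohort, this rule yielded the reported 48.7% responder rate.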

  4. Effectiveness of fixed-site high-frequency transcutaneous electrical nerve stimulation in chronic pain: a large-scale, observational study

    Directory of Open Access Journals (Sweden)

    Kong X

    2018-04-01

    Xuan Kong, Shai N Gozani, NeuroMetrix, Inc., Waltham, MA, USA. Objective: The objective of this study was to assess the effectiveness of fixed-site high-frequency transcutaneous electrical nerve stimulation (FS-TENS) in a real-world chronic pain sample. Background: There is a need for nonpharmacological treatment options for chronic pain. FS-TENS improved multisite chronic pain in a previous interventional study. Large observational studies are needed to further characterize its effectiveness. Methods: This retrospective observational cohort study examined changes in chronic pain measures following 60 days of FS-TENS use. The study data were obtained from FS-TENS users who uploaded their device utilization and clinical data to an online database. The primary outcome measures were changes in pain intensity and pain interference with sleep, activity, and mood on an 11-point numerical rating scale. Dose–response associations were evaluated by stratifying subjects into low (≤30 days), intermediate (31–56 days), and high (≥57 days) utilization subgroups. FS-TENS effectiveness was quantified by baseline to follow-up group differences and a responder analysis (≥30% improvement in pain intensity or ≥2-point improvement in pain interference domains). Results: Utilization and clinical data were collected from 11,900 people using FS-TENS for chronic pain, with 713 device users meeting the inclusion and exclusion criteria. Study subjects were generally older, overweight adults. Subjects reported multisite pain with a mean of 4.8 (standard deviation [SD] 2.5) pain sites. A total of 97.2% of subjects identified low back and/or lower extremity pain, and 72.9% of subjects reported upper body pain. All pain measures exhibited statistically significant group differences from baseline to 60-day follow-up. The largest changes were pain interference with activity (−0.99±2.69 points) and mood (−1.02±2.78 points). A total of 48.7% of subjects exhibited a clinically meaningful reduction in pain interference with activity or mood.

  5. Scaling production and improving efficiency in DEA: an interactive approach

    Science.gov (United States)

    Rödder, Wilhelm; Kleine, Andreas; Dellnitz, Andreas

    2017-10-01

    DEA models help a DMU to detect its (in-)efficiency and to improve its activities where necessary. Efficiency, however, is only one economic aim for a decision-maker; up- or downsizing may be another. Improving efficiency is the main topic in DEA, but the long-term strategy towards the right production size deserves attention as well: the management of a DMU does not always focus primarily on technical efficiency, but may rather be interested in gaining scale effects. In this paper, a formula for returns to scale (RTS) is developed that is applicable even to interior points of the technology. Technically and scale-inefficient DMUs in particular need sophisticated instruments to improve their situation. Considering RTS as well as efficiency, we give advice for each DMU on finding an economically reliable path from its actual situation to better activities and, ultimately, to the most productive scale size (mpss). For realizing this path, we propose an interactive algorithm, thus harmonizing the scientific findings and the interests of the management. Small numerical examples illustrate such paths for selected DMUs; an empirical application in theatre management completes the contribution.
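For the special case of one input and one output, the constant-returns-to-scale (CCR) DEA efficiency reduces to each DMU's productivity relative to the best observed productivity, and the most productive scale size is simply the activity attaining that best ratio; no linear program is needed. A sketch with invented DMUs:

```python
def crs_efficiency(inputs, outputs):
    """Single-input, single-output CCR (constant returns to scale) DEA:
    each DMU's output/input ratio divided by the best observed ratio."""
    prod = [o / i for i, o in zip(inputs, outputs)]
    best = max(prod)
    return [p / best for p in prod]

# Hypothetical DMUs as (input, output) observations:
x = [2.0, 4.0, 5.0]
y = [2.0, 5.0, 4.0]
scores = crs_efficiency(x, y)   # DMU 2 defines the mpss frontier point
```

With multiple inputs and outputs the same idea becomes the CCR linear program; the interactive path of the paper then moves a DMU stepwise toward the mpss activity rather than jumping there directly.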

  6. Determination of the interatomic potential from elastic differential cross sections at fixed energy: Functional sensitivity analysis approach

    International Nuclear Information System (INIS)

    Ho, T.; Rabitz, H.

    1989-01-01

    Elastic differential cross sections from atomic crossed beam experiments contain detailed information about the underlying interatomic potentials. The functional sensitivity density of the cross sections with respect to the potential, δσ(θ)/δV(R), reveals this information and has been implemented in an iterative inversion procedure analogous to the Newton-Raphson technique. The stability of the inversion is achieved with the regularization method of Tikhonov and Miller. It is shown that, given a set of well resolved and noise-free differential cross section data within a limited angular range and a reasonable starting reference potential, the recovered potential accurately resembles the desired one in the important region, i.e., the region to which the scattering data are sensitive. The region of importance depends upon the collision energy relative to the well depth of the potential under study; usually a higher collision energy penetrates deeper into the repulsive part of the potential and thus yields a more accurate potential in that part. The inversion procedure also produces a quality function indicating the well determined radial region. Moreover, the extracted potential is quite independent of the functional form of the reference potential, in contrast to curve-fitting approaches. As illustrations, the model inert gas systems He-Ne and Ne-Ar have been considered. For collision energies within an order of magnitude of the associated potential well depth, the attractive part of the potential can be determined to high precision provided that scattering data at small enough angles are available.
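The stabilization step can be illustrated with ordinary Tikhonov regularization on a small discretized linear inverse problem. The functional-sensitivity kernel is replaced here by an invented, nearly rank-deficient matrix; this is the generic mechanism, not the paper's specific implementation:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def tikhonov(A, b, lam):
    """Regularized least squares: minimize ||Ax - b||^2 + lam*||x||^2,
    i.e. solve the normal equations (A^T A + lam*I) x = A^T b."""
    m, n = len(A), len(A[0])
    AtA = [[sum(A[k][i] * A[k][j] for k in range(m)) + (lam if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    Atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    return solve(AtA, Atb)

# Nearly rank-deficient toy "sensitivity matrix" with consistent data:
A = [[1.0, 1.0], [1.0, 1.001], [1.0, 0.999]]
b = [2.0, 2.001, 1.999]
x = tikhonov(A, b, lam=1e-6)
```

The penalty term damps the directions to which the data are insensitive, exactly the role the Tikhonov-Miller method plays for the poorly determined radial regions of the potential.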

  7. 331 cases of clinically node-negative supraglottic carcinoma of the larynx: a study of a modest size fixed field radiotherapy approach

    International Nuclear Information System (INIS)

    Sykes, Andrew J.; Slevin, Nicholas J.; Gupta, Nirmal K.; Brewster, Allison E.

    2000-01-01

    Purpose: For node-negative supraglottic carcinoma of the larynx, radiotherapy with surgery in reserve commonly provides very good results in terms of both local control and survival, while preserving function. However, uncertainty exists over the treatment of the node-negative neck. Elective whole neck radiotherapy, while effective, may be associated with significant morbidity. The purpose of this study was to examine our practice of treating a modest size, fixed field to a high biologically effective dose and compare it with the patterns of recurrence from other centers that use different dose/volume approaches. Methods and Materials: Over a 10-year period, 331 patients with node-negative supraglottic carcinoma of the larynx were treated with radiotherapy at the Christie Hospital, Manchester. Patients were treated with doses of 50-55 Gy in 16 fractions over 3 weeks. Data were collected retrospectively for local and regional control, survival, and morbidity. Results: Overall local control, after surgical salvage in 17 cases, was 79% (T1-92%, T2-81%, T3-67%, T4-73%). Overall regional lymph node control, after surgical salvage in 13 cases, was 84% (T1-91%, T2-88%, T3-81%, T4-72%). Five-year crude survival was 50%, but after correcting for intercurrent deaths was 70% (T1-83%, T2-78%, T3-53%, T4-61%). Serious morbidity requiring surgery was seen in 7 cases (2.1%) and was related to prescribed dose (50 Gy-0%, 52.5 Gy-1.3%, 55 Gy-3.4%). Discussion: Our results confirm that treating a modest size, fixed field to a high biologically effective dose is highly effective. It enables preservation of the larynx in most cases, with acceptable regional control and no loss of survival compared to whole neck radiotherapy regimes.

  8. Characterizing fixed points

    Directory of Open Access Journals (Sweden)

    Sanjo Zlobec

    2017-04-01

    A set of sufficient conditions which guarantee the existence of a point x⋆ such that f(x⋆) = x⋆ is called a "fixed point theorem". Many such theorems are named after well-known mathematicians and economists. Fixed point theorems are among the most useful ones in applied mathematics, especially in economics and game theory. A particularly important theorem in these areas is Kakutani's fixed point theorem, which ensures the existence of a fixed point for point-to-set mappings, e.g., [2, 3, 4]. John Nash developed and applied Kakutani's ideas to prove the existence of what became known as "Nash equilibrium" for finite games with mixed strategies for any number of players. This work earned him a Nobel Prize in Economics that he shared with two mathematicians. Nash's life was dramatized in the movie "A Beautiful Mind" in 2001. In this paper, we approach the system f(x) = x differently. Instead of studying the existence of its solutions, our objective is to determine conditions which are both necessary and sufficient that an arbitrary point x⋆ is a fixed point, i.e., that it satisfies f(x⋆) = x⋆. The existence of solutions for a continuous function f of a single variable is easy to establish using the Intermediate Value Theorem of calculus. However, characterizing fixed points x⋆, i.e., providing answers to the question of finding both necessary and sufficient conditions for an arbitrary given x⋆ to satisfy f(x⋆) = x⋆, is not simple even for functions of a single variable. It is possible that constructive answers do not exist. Our objective is to find them. Our work may require some less familiar tools. One of these might be the "quadratic envelope characterization of a zero-derivative point" recalled in the next section. The results are taken from the author's current research project "Studying the Essence of Fixed Points". They are believed to be original. The author has received feedback on the preliminary report and on parts of the project.
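
    The existence argument mentioned in the abstract (apply the Intermediate Value Theorem to g(x) = f(x) − x) is easy to illustrate numerically. The bisection routine below is a generic sketch, not code from the paper; it locates a fixed point whenever g changes sign on the interval.

```python
import math

def fixed_point(f, lo, hi, tol=1e-10):
    """Locate x* with f(x*) = x* for continuous f on [lo, hi],
    assuming g(x) = f(x) - x changes sign on the interval
    (the Intermediate Value Theorem then guarantees a root of g)."""
    g = lambda x: f(x) - x
    if g(lo) * g(hi) > 0:
        raise ValueError("g must change sign on [lo, hi]")
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:   # root lies in the left half
            hi = mid
        else:                     # root lies in the right half
            lo = mid
    return (lo + hi) / 2

x_star = fixed_point(math.cos, 0.0, 1.0)   # cos(x) = x has a root near 0.739
print(round(x_star, 6))
```

    Note that this settles existence only; the paper's question, characterizing whether a given point is a fixed point, is a different and harder problem.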

  9. Quantitative approach to small-scale nonequilibrium systems

    DEFF Research Database (Denmark)

    Dreyer, Jakob K; Berg-Sørensen, Kirstine; Oddershede, Lene B

    2006-01-01

    In a nano-scale system out of thermodynamic equilibrium, it is important to account for thermal fluctuations. Typically, the thermal noise contributes fluctuations, e.g., of distances that are substantial in comparison to the size of the system and typical distances measured. If the thermal...... propose an approximate but quantitative way of dealing with such an out-of-equilibrium system. The limits of this approximate description of the escape process are determined through optical tweezers experiments and comparison to simulations. Also, this serves as a recipe for how to use the proposed...

  10. Transition from failing dentition to full-arch fixed implant-supported prosthesis with a staged approach using removable partial dentures: a case series.

    Science.gov (United States)

    Cortes, Arthur Rodriguez Gonzalez; Cortes, Djalma Nogueira; No-Cortes, Juliana; Arita, Emiko Saito

    2014-06-01

    The present retrospective case series is aimed at evaluating a staged approach using a removable partial denture (RPD) as an interim prosthesis in treatment to correct a failing dentition until such time as a full-arch fixed implant-supported prosthesis may be inserted. Eight patients, who had undergone maxillary full-arch rehabilitation with dental implants due to poor prognosis of their dentitions, were analyzed. All treatment included initial periodontal therapy and a strategic order of extraction of hopeless teeth. An RPD supported by selected teeth rehabilitated the compromised arch during implant osseointegration. These remaining teeth were extracted prior to definitive prosthesis delivery. Advantages and drawbacks of this technique were also recorded for the cases presented. Among the advantages provided by the staged approach are simplicity of fabrication, low cost, and ease of insertion. Additionally, RPD tooth support prevented contact between the interim prosthesis and healing abutments, promoting implant osseointegration. The main drawbacks were interference with speech and limited esthetic results. Implant survival rate was 100% within a follow-up of at least 1 year. The use of RPDs as interim prostheses allowed for the accomplishment of the analyzed rehabilitation treatments. It is a simple treatment alternative for patients with a low smile line. © 2013 by the American College of Prosthodontists.

  11. An approach to an acute emotional stress reference scale.

    Science.gov (United States)

    Garzon-Rey, J M; Arza, A; de-la-Camara, C; Lobo, A; Armario, A; Aguilo, J

    2017-06-16

    The clinical diagnosis aims to identify the degree of affectation of the psycho-physical state of the patient as a guide to therapeutic intervention. In stress, the lack of a measurement tool based on a reference makes it difficult to quantitatively assess this degree of affectation. Our aim is to define and perform a primary assessment of a standard reference in order to measure acute emotional stress from the markers identified as indicators of this degree. Psychometric tests and biochemical variables are, in general, the stress measurements most accepted by the scientific community. Each of them probably responds to different and complementary processes related to the reaction to a stress stimulus. The reference that is proposed is a weighted mean of these indicators, with relative weights assigned in accordance with a principal components analysis. An experimental study was conducted on 40 healthy young people subjected to the psychosocial stress stimulus of the Trier Social Stress Test in order to perform a primary assessment and consistency check of the proposed reference. The proposed scale clearly differentiates between the induced relaxed and stressed states. Accepting the subjectivity of the definition and the lack of a subsequent validation with new experimental data, the proposed standard differentiates between a relaxed state and an emotional stress state triggered by a moderate stress stimulus, such as the Trier Social Stress Test. The scale is robust: variations in the percentage composition slightly affect the score, but they do not affect the valid differentiation between states.
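
    The weighting scheme described above (a composite score whose relative weights come from a principal components analysis) can be sketched as follows. The marker data are invented, and the power-iteration PCA is a minimal stand-in for a full analysis; none of the numbers come from the study.

```python
def standardize(col):
    """Center and scale one marker column (population standard deviation)."""
    m = sum(col) / len(col)
    s = (sum((x - m) ** 2 for x in col) / len(col)) ** 0.5
    return [(x - m) / s for x in col]

def leading_component(cov, iters=200):
    """Power iteration for the first principal component of cov."""
    n = len(cov)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Invented example: three stress markers (e.g. a psychometric score and
# two biochemical variables) measured on five subjects -- not real data.
markers = [
    [1.0, 2.0, 3.0, 4.0, 5.0],
    [2.1, 2.9, 4.2, 4.8, 6.1],
    [0.9, 2.2, 2.8, 4.1, 5.2],
]
cols = [standardize(m) for m in markers]
n = len(cols[0])
cov = [[sum(a[k] * b[k] for k in range(n)) / n for b in cols] for a in cols]
pc1 = leading_component(cov)
weights = [abs(x) / sum(abs(y) for y in pc1) for x in pc1]  # normalized weights
score = [sum(weights[i] * cols[i][k] for i in range(3)) for k in range(n)]
print([round(s, 2) for s in score])
```

    Because the toy markers are strongly correlated, the first component weights them almost equally, and the composite score ranks the subjects consistently with every individual marker.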

  12. Truncated conformal space approach to scaling Lee-Yang model

    International Nuclear Information System (INIS)

    Yurov, V.P.; Zamolodchikov, Al.B.

    1989-01-01

    A numerical approach to 2D relativistic field theories is suggested. Considering a field theory model as an ultraviolet conformal field theory perturbed by a suitable relevant scalar operator, one studies it in finite volume (on a circle). The perturbed Hamiltonian acts in the conformal field theory space of states and its matrix elements can be extracted from the conformal field theory. Truncation of the space at a reasonable level results in a finite dimensional problem for numerical analysis. The nonunitary field theory with the ultraviolet region controlled by the minimal conformal theory M(2/5) is studied in detail. 9 refs.; 17 figs.

  13. Methanation of CO2 on Ni/Al2O3 in a Structured Fixed-Bed Reactor—A Scale-Up Study

    Directory of Open Access Journals (Sweden)

    Daniel Türks

    2017-05-01

    Due to the ongoing change of energy supply, the availability of a reliable high-capacity storage technology becomes increasingly important. While conventional large-scale facilities are either limited in capacity or supply time, or have little extension potential (e.g., pumped storage power stations), decentralized units could contribute to the energy transition. The concepts of PtX (power-to-X) storage technologies, and in particular PtG (power-to-gas), aim at the fixation of electric power in chemical compounds. CO2 hydrogenation (methanation) is the foundation of the PtG idea, as H2 (via electrolysis) and CO2 are easily accessible. Methane produced in this way, often called substitute natural gas (SNG), is a promising solution since it can be stored in the existing gas grid, tanks, or underground cavern storages. Methanation is characterized by a strongly exothermic heat of reaction which has to be handled safely. This work aims at eliminating extreme temperature hot-spots in a tube reactor by configuring the catalyst bed structure. Proof-of-concept studies began with a small tube reactor (V = 12.5 cm3) with a commercial 18 wt % Ni/Al2O3 catalyst. Later, a double-jacket tube reactor was built (V = 452 cm3), reaching a production rate of 50 L/h SNG. The proposed approach not only improves the heat management and process safety, but also remarkably increases the specific productivity and stability of the catalyst.

  14. PATTERN CLASSIFICATION APPROACHES TO MATCHING BUILDING POLYGONS AT MULTIPLE SCALES

    Directory of Open Access Journals (Sweden)

    X. Zhang

    2012-07-01

    Matching of building polygons with different levels of detail is crucial in the maintenance and quality assessment of multi-representation databases. Two general problems need to be addressed in the matching process: (1) Which criteria are suitable? (2) How to effectively combine different criteria to make decisions? This paper mainly focuses on the second issue and views data matching as a supervised pattern classification task. Several classifiers (i.e., decision trees, Naive Bayes, and support vector machines) are evaluated for the matching task. Four criteria (i.e., position, size, shape, and orientation) are used to extract information for these classifiers. Evidence shows that these classifiers outperformed the weighted average approach.
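
    One of the evaluated classifier families can be sketched in a few lines. The Gaussian Naive Bayes below is trained on invented feature vectors for the four criteria (position, size, shape, orientation), purely to illustrate the supervised-matching setup; the paper's features, data, and implementations differ.

```python
import math

# Toy training set: each row is (position_dist, size_ratio, shape_diff,
# orientation_diff) for a candidate polygon pair; labels: 1 = match.
# All values are invented for illustration.
X = [
    (0.10, 0.95, 0.05, 2.0), (0.20, 0.90, 0.10, 5.0),
    (0.15, 0.97, 0.08, 3.0), (0.90, 0.40, 0.60, 40.0),
    (1.20, 0.30, 0.70, 60.0), (0.80, 0.50, 0.55, 35.0),
]
y = [1, 1, 1, 0, 0, 0]

def fit_gnb(X, y):
    """Gaussian Naive Bayes: per-class priors, feature means, variances."""
    model = {}
    for c in set(y):
        rows = [x for x, lab in zip(X, y) if lab == c]
        means = [sum(col) / len(rows) for col in zip(*rows)]
        vars_ = [max(sum((v - m) ** 2 for v in col) / len(rows), 1e-6)
                 for col, m in zip(zip(*rows), means)]
        model[c] = (len(rows) / len(X), means, vars_)
    return model

def predict(model, x):
    """Return the class with the highest log-posterior for feature vector x."""
    def log_post(c):
        prior, means, vars_ = model[c]
        ll = math.log(prior)
        for v, m, s2 in zip(x, means, vars_):
            ll += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        return ll
    return max(model, key=log_post)

model = fit_gnb(X, y)
print(predict(model, (0.12, 0.93, 0.07, 4.0)))   # resembles the match class
print(predict(model, (1.00, 0.35, 0.65, 50.0)))  # resembles the non-match class
```

    The same feature vectors could equally feed a decision tree or an SVM, which is exactly the comparison the paper performs.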

  15. Facing the scaling problem: A multi-methodical approach to simulate soil erosion at hillslope and catchment scale

    Science.gov (United States)

    Schmengler, A. C.; Vlek, P. L. G.

    2012-04-01

    Modelling soil erosion requires a holistic understanding of the sediment dynamics in a complex environment. As most erosion models are scale-dependent and their parameterization is spatially limited, their application often requires special care, particularly in data-scarce environments. This study presents a hierarchical approach to overcome the limitations of a single model by using various quantitative methods and soil erosion models to cope with the issues of scale. At hillslope scale, the physically-based Water Erosion Prediction Project (WEPP)-model is used to simulate soil loss and deposition processes. Model simulations of soil loss vary between 5 to 50 t ha-1 yr-1 dependent on the spatial location on the hillslope and have only limited correspondence with the results of the 137Cs technique. These differences in absolute soil loss values could be either due to internal shortcomings of each approach or to external scale-related uncertainties. Pedo-geomorphological soil investigations along a catena confirm that estimations by the 137Cs technique are more appropriate in reflecting both the spatial extent and magnitude of soil erosion at hillslope scale. In order to account for sediment dynamics at a larger scale, the spatially-distributed WaTEM/SEDEM model is used to simulate soil erosion at catchment scale and to predict sediment delivery rates into a small water reservoir. Predicted sediment yield rates are compared with results gained from a bathymetric survey and sediment core analysis. Results show that specific sediment rates of 0.6 t ha-1 yr-1 by the model are in close agreement with observed sediment yield calculated from stratigraphical changes and downcore variations in 137Cs concentrations. Sediment erosion rates averaged over the entire catchment of 1 to 2 t ha-1 yr-1 are significantly lower than results obtained at hillslope scale confirming an inverse correlation between the magnitude of erosion rates and the spatial scale of the model. The

  16. OBJECT-ORIENTED CHANGE DETECTION BASED ON MULTI-SCALE APPROACH

    Directory of Open Access Journals (Sweden)

    Y. Jia

    2016-06-01

    The change detection of remote sensing images means analysing the change information quantitatively and recognizing the change types of the surface coverage data in different time phases. With the advent of high-resolution remote sensing imagery, object-oriented change detection methods have emerged. In this paper, we investigate a multi-scale approach for high-resolution images, which includes multi-scale segmentation, multi-scale feature selection, and multi-scale classification. Experimental results show that this method has a clear advantage over the traditional single-scale method of high-resolution remote sensing image change detection.

  17. Various approaches to the modelling of large scale 3-dimensional circulation in the Ocean

    Digital Repository Service at National Institute of Oceanography (India)

    Shaji, C.; Bahulayan, N.; Rao, A.D.; Dube, S.K.

    In this paper, the three different approaches to the modelling of large scale 3-dimensional flow in the ocean such as the diagnostic, semi-diagnostic (adaptation) and the prognostic are discussed in detail. Three-dimensional solutions are obtained...

  18. Approaches to recreational landscape scaling of mountain resorts

    Science.gov (United States)

    Chalaya, Elena; Efimenko, Natalia; Povolotskaia, Nina; Slepih, Vladimir

    2013-04-01

    19 Hz, gamma 19 … 25Hz by 9-17%; the increase in the adaptation layer of the organism by 21% and in a versatility indicator of health by 19%; the decrease in systolic (from 145 to 131 mm of mercury) and diastolic (from 96 to 82 mm of mercury) arterial pressure, the increase in indicators of carpal dynamometry (on the right hand from 27 to 36 kg, on the left hand from 25 to 34 kg), the increase in speed of thermogenesis (from 0.0633 to 0.0944 K/s) and quality of neurovascular reactivity (from 48% to 81%). On the whole, the patient's cenesthesia has improved. We have also studied the adaptive reactions of recipients under other RL options; research is still being carried out in this direction. Its results will be used as a basis for RL scaling of North Caucasus mountain territories. This problem is interdisciplinary, multidimensional and deals with both medical and geophysical issues. The studies were performed with the support of the Program "Basic Sciences for Medicine" and RFBR project No.10-05-01014_a.

  19. Revisiting the dilatation operator of the Wilson-Fisher fixed point

    Energy Technology Data Exchange (ETDEWEB)

    Liendo, Pedro [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Theory Group

    2017-01-15

    We revisit the order-ε dilatation operator of the Wilson-Fisher fixed point obtained by Kehrein, Pismak, and Wegner in light of recent results in conformal field theory. Our approach is algebraic and based only on symmetry principles. The starting point of our analysis is that the first correction to the dilatation operator is a conformal invariant, which implies that its form is fixed up to an infinite set of coefficients associated with the scaling dimensions of higher-spin currents. These coefficients can be fixed using well-known perturbative results; however, they were recently re-obtained using CFT arguments without relying on perturbation theory. Our analysis then implies that all order-ε scaling dimensions of the Wilson-Fisher fixed point can be fixed by symmetry.

  20. National-Scale Hydrologic Classification & Agricultural Decision Support: A Multi-Scale Approach

    Science.gov (United States)

    Coopersmith, E. J.; Minsker, B.; Sivapalan, M.

    2012-12-01

    Classification frameworks can help organize catchments exhibiting similarity in hydrologic and climatic terms. Focusing this assessment of "similarity" upon specific hydrologic signatures, in this case the annual regime curve, can facilitate the prediction of hydrologic responses. Agricultural decision-support over a diverse set of catchments throughout the United States depends upon successful modeling of the wetting/drying process without necessitating separate model calibration at every site where such insights are required. To this end, a holistic classification framework is developed to describe both climatic variability (humid vs. arid, winter rainfall vs. summer rainfall) and the draining, storing, and filtering behavior of any catchment, including ungauged or minimally gauged basins. At the national scale, over 400 catchments from the MOPEX database are analyzed to construct the classification system, with over 77% of these catchments ultimately falling into only six clusters. At individual locations, soil moisture models, receiving only rainfall as input, produce correlation values in excess of 0.9 with respect to observed soil moisture measurements. By deploying physical models for predicting soil moisture exclusively from precipitation that are calibrated at gauged locations, overlaying machine learning techniques to improve these estimates, then generalizing the calibration parameters for catchments in a given class, agronomic decision-support becomes available where it is needed rather than only where sensing data are located.
    [Figure: Classifications of 428 U.S. catchments on the basis of hydrologic regime data, Coopersmith et al., 2012.]
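
    The kind of parsimonious rainfall-driven soil moisture model referred to above can be sketched as a "leaky bucket". The parameters and rainfall series below are invented, and the calibrated models used in the study are more elaborate; this only illustrates the idea of predicting soil moisture from precipitation alone.

```python
def bucket_soil_moisture(precip, loss_rate=0.1, s0=0.3, s_max=1.0):
    """Minimal 'leaky bucket' soil-moisture model driven only by rainfall:
    each time step the store gains precipitation (capped at saturation)
    and loses a fixed fraction, standing in for drainage plus
    evapotranspiration. Illustrative only."""
    s, series = s0, []
    for p in precip:
        s = min(s + p, s_max)          # wetting, capped at saturation
        s -= loss_rate * s             # drying proportional to storage
        series.append(s)
    return series

# Invented daily rainfall series (arbitrary units).
rain = [0.0, 0.2, 0.0, 0.0, 0.5, 0.0, 0.0, 0.1, 0.0, 0.0]
sm = bucket_soil_moisture(rain)
print([round(v, 3) for v in sm])
```

    Calibrating `loss_rate` and `s_max` at gauged sites and then reusing the parameters across a hydrologic class is the generalization step the abstract describes.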

  1. Tuneable resolution as a systems biology approach for multi-scale, multi-compartment computational models.

    Science.gov (United States)

    Kirschner, Denise E; Hunt, C Anthony; Marino, Simeone; Fallahi-Sichani, Mohammad; Linderman, Jennifer J

    2014-01-01

    The use of multi-scale mathematical and computational models to study complex biological processes is becoming increasingly productive. Multi-scale models span a range of spatial and/or temporal scales and can encompass multi-compartment (e.g., multi-organ) models. Modeling advances are enabling virtual experiments to explore and answer questions that are problematic to address in the wet-lab. Wet-lab experimental technologies now allow scientists to observe, measure, record, and analyze experiments focusing on different system aspects at a variety of biological scales. We need the technical ability to mirror that same flexibility in virtual experiments using multi-scale models. Here we present a new approach, tuneable resolution, which can begin providing that flexibility. Tuneable resolution involves fine- or coarse-graining existing multi-scale models at the user's discretion, allowing adjustment of the level of resolution specific to a question, an experiment, or a scale of interest. Tuneable resolution expands options for revising and validating mechanistic multi-scale models, can extend the longevity of multi-scale models, and may increase computational efficiency. The tuneable resolution approach can be applied to many model types, including differential equation, agent-based, and hybrid models. We demonstrate our tuneable resolution ideas with examples relevant to infectious disease modeling, illustrating key principles at work. © 2014 The Authors. WIREs Systems Biology and Medicine published by Wiley Periodicals, Inc.

  2. A new approach to designing reduced scale thermal-hydraulic experiments

    International Nuclear Information System (INIS)

    Lapa, Celso M.F.; Sampaio, Paulo A.B. de; Pereira, Claudio M.N.A.

    2004-01-01

    Reduced scale experiments are often employed in engineering because they are much cheaper than real scale testing. Unfortunately, though, it is difficult to design a thermal-hydraulic circuit or equipment in reduced scale capable of reproducing, both accurately and simultaneously, all the physical phenomena that occur in real scale and operating conditions. This paper presents a methodology for designing thermal-hydraulic experiments in reduced scale based on setting up a constrained optimization problem that is solved using genetic algorithms (GAs). In order to demonstrate the application of the proposed methodology, we performed some investigations in the design of a heater aimed at simulating the transport of heat and momentum in the core of a pressurized water reactor (PWR) at 100% of nominal power and under non-accident operating conditions. The results obtained show that the proposed methodology is a promising approach for designing reduced scale experiments.
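
    The constrained-optimization-by-GA idea can be sketched with a toy design problem: choose two heater design variables so that two similarity numbers match their full-scale targets, with a penalty enforcing a constraint. Everything below (the similarity groups, targets, bounds, and GA settings) is invented for illustration and is not the paper's formulation.

```python
import random

random.seed(42)

# Invented toy problem: choose heater rod diameter d and power q so that
# two dimensionless similarity groups match full-scale targets; a penalty
# enforces an upper bound on q. Purely illustrative.
TARGETS = (1.0, 2.0)

def similarity(d, q):
    """Toy dimensionless groups of the reduced-scale design."""
    return (q / (100.0 * d), q * d)

def fitness(ind):
    """Squared mismatch of the similarity groups plus a constraint penalty."""
    g1, g2 = similarity(*ind)
    err = (g1 - TARGETS[0]) ** 2 + (g2 - TARGETS[1]) ** 2
    penalty = max(0.0, ind[1] - 20.0) ** 2     # constraint: q <= 20
    return err + penalty

def evolve(pop_size=60, gens=120):
    """Elitist GA with blend crossover and multiplicative mutation."""
    pop = [(random.uniform(0.01, 0.5), random.uniform(1.0, 30.0))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            w = random.random()
            child = tuple(w * x + (1 - w) * y for x, y in zip(a, b))
            if random.random() < 0.2:           # occasional mutation
                child = (child[0] * random.uniform(0.9, 1.1),
                         child[1] * random.uniform(0.9, 1.1))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print(tuple(round(v, 3) for v in best))
```

    For this toy problem the exact optimum is d = sqrt(0.02) ≈ 0.141, q ≈ 14.1, which satisfies the constraint; the GA's appeal in the paper's setting is that it handles such constrained, non-smooth design objectives without gradients.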

  3. Validation of the "Quality of Life related to function, aesthetics, socialization, and thoughts about health-behavioural habits (QoLFAST-10)" scale for wearers of implant-supported fixed partial dentures.

    Science.gov (United States)

    Castillo-Oyagüe, Raquel; Perea, Carmen; Suárez-García, María-Jesús; Río, Jaime Del; Lynch, Christopher D; Preciado, Arelis

    2016-12-01

    To validate the 'Quality of Life related to function, aesthetics, socialization, and thoughts about health-behavioural habits (QoLFAST-10)' questionnaire for assessing the whole concept of oral health-related quality of life (OHRQoL) of implant-supported fixed partial denture (FPD) wearers. 107 patients were assigned to: Group 1 (HP; n=37): fixed-detachable hybrid prostheses (control); Group 2 (C-PD, n=35): cemented partial dentures; and Group 3 (S-PD, n=35): screwed partial dentures. Patients answered the QoLFAST-10 and the Oral Health Impact Profile (OHIP-14sp) scales. Information on global oral satisfaction, socio-demographic, prosthetic, and clinical data was gathered. The psychometric capacity of the QoLFAST-10 was investigated. The correlations between both indices were explored by Spearman's rank test. The effect of the study variables on the OHRQoL was evaluated by descriptive and non-parametric tests (α=0.05). The QoLFAST-10 was reliable and valid for implant-supported FPD wearers, who attained comparable results regardless of whether the connection system was cement or screws. Both fixed partial groups demonstrated significantly better social, functional, and total satisfaction than did HP wearers with this index. All groups revealed similar aesthetic-related well-being and consciousness about the importance of health-behavioural habits. Several study variables modulated the QoLFAST-10 scores. Hybrid prostheses represent the least predictable treatment option, while cemented and screwed FPDs supplied equal OHRQoL as estimated by the QoLFAST-10 scale. The selection of cemented or screwed FPDs should mainly rely on clinical factors, since no differences in patient satisfaction may be expected between both types of implant rehabilitations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Planning Alternative Organizational Frameworks For a Large Scale Educational Telecommunications System Served by Fixed/Broadcast Satellites. Memorandum Number 73/3.

    Science.gov (United States)

    Walkmeyer, John

    Considerations relating to the design of organizational structures for development and control of large scale educational telecommunications systems using satellites are explored. The first part of the document deals with four issues of system-wide concern. The first is user accessibility to the system, including proximity to entry points, ability…

  5. Heat and mass transfer intensification and shape optimization a multi-scale approach

    CERN Document Server

    2013-01-01

    Is the heat and mass transfer intensification defined as a new paradigm of process engineering, or is it just a common and old idea, renamed and given the current taste? Where might intensification occur? How to achieve intensification? How does the shape optimization of thermal and fluidic devices lead to intensified heat and mass transfers? To answer these questions, Heat & Mass Transfer Intensification and Shape Optimization: A Multi-scale Approach clarifies the definition of intensification by highlighting the potential role of multi-scale structures, the specific interfacial area, the distribution of driving force, the modes of energy supply and the temporal aspects of processes. A reflection on the methods of process intensification or heat and mass transfer enhancement in multi-scale structures is provided, including porous media, heat exchangers, fluid distributors, mixers and reactors. A multi-scale approach to achieve intensification and shape optimization is developed and clearly expla...

  6. A Dynamical System Approach Explaining the Process of Development by Introducing Different Time-scales.

    Science.gov (United States)

    Hashemi Kamangar, Somayeh Sadat; Moradimanesh, Zahra; Mokhtari, Setareh; Bakouie, Fatemeh

    2018-06-11

    A developmental process can be described as changes through time within a complex dynamic system. The self-organized changes and emergent behaviour during development can be described and modeled as a dynamical system. We propose a dynamical system approach to answer the main question in human cognitive development, i.e., whether the changes during development happen continuously or in discontinuous stages. Within this approach there is a concept, the size of time-scales, which can be used to address the aforementioned question. We introduce a framework, by considering the concept of time-scale, in which "fast" and "slow" are defined by the size of time-scales. According to our suggested model, the overall pattern of development can be seen as one continuous function, with different time-scales in different time intervals.
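
    The role of time-scale separation can be illustrated with a minimal fast-slow system: a fast variable that relaxes toward a slowly drifting one. The equations and parameters below are generic textbook choices, not the authors' model; they merely show how "fast" and "slow" are defined by the ratio of time-scales.

```python
def simulate(eps=0.01, dt=0.001, steps=5000):
    """Explicit-Euler integration of a two-time-scale system:
    x relaxes toward y on the fast time-scale eps, while y drifts
    slowly toward 1 on a time-scale of order 1/0.2 = 5."""
    x, y = 1.0, 0.0
    for _ in range(steps):
        dx = (y - x) / eps          # fast relaxation toward y
        dy = 0.2 * (1.0 - y)        # slow drift of y toward 1
        x += dt * dx
        y += dt * dy
    return x, y

x, y = simulate()
print(round(x, 3), round(y, 3))
```

    After an initial transient of order eps, the fast variable is "enslaved" to the slow one (x ≈ y), so the long-run behaviour looks like one continuous function even though two very different time-scales are at work.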

  7. A Fixed Point Approach to the Stability of a General Mixed AQCQ-Functional Equation in Non-Archimedean Normed Spaces

    Directory of Open Access Journals (Sweden)

    Tian Zhou Xu

    2010-01-01

    Using the fixed point methods, we prove the generalized Hyers-Ulam stability of the general mixed additive-quadratic-cubic-quartic functional equation f(x+ky) + f(x−ky) = k²f(x+y) + k²f(x−y) + 2(1−k²)f(x) + ((k⁴−k²)/12)[f(2y) + f(−2y) − 4f(y) − 4f(−y)] for a fixed integer k with k ≠ 0, ±1 in non-Archimedean normed spaces.
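
    As a quick plausibility check of the reconstructed equation, the script below verifies numerically that a mixed additive-quadratic-cubic-quartic polynomial satisfies it for several k, while a quintic term does not. The sample points and polynomials are arbitrary choices, not from the paper.

```python
def check_aqcq(f, k, xs, ys, tol=1e-6):
    """Numerically check the mixed AQCQ functional equation
    f(x+ky) + f(x-ky) = k^2 f(x+y) + k^2 f(x-y) + 2(1-k^2) f(x)
                        + ((k^4 - k^2)/12) [f(2y)+f(-2y)-4f(y)-4f(-y)]
    on a grid of sample points."""
    for x in xs:
        for y in ys:
            lhs = f(x + k * y) + f(x - k * y)
            rhs = (k**2 * (f(x + y) + f(x - y)) + 2 * (1 - k**2) * f(x)
                   + (k**4 - k**2) / 12
                   * (f(2 * y) + f(-2 * y) - 4 * f(y) - 4 * f(-y)))
            if abs(lhs - rhs) > tol:
                return False
    return True

# A mixed additive-quadratic-cubic-quartic polynomial solves the equation...
f = lambda t: 2 * t**4 - t**3 + 3 * t**2 + 5 * t
g = lambda t: t**5              # ...while a quintic term does not.
pts = [-2.0, -0.5, 0.0, 1.0, 2.5]
print(check_aqcq(f, 2, pts, pts), check_aqcq(g, 2, pts, pts))
```

    Such a spot check is no substitute for the paper's fixed-point proof, but it confirms that the reconstructed equation has exactly the additive, quadratic, cubic, and quartic monomials among its solutions.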

  8. Approaches to large scale unsaturated flow in heterogeneous, stratified, and fractured geologic media

    International Nuclear Information System (INIS)

    Ababou, R.

    1991-08-01

    This report presents a broad review and assessment of quantitative modeling approaches and data requirements for large-scale subsurface flow in a radioactive waste geologic repository. The data review includes discussions of controlled field experiments, existing contamination sites, and site-specific hydrogeologic conditions at Yucca Mountain. Local-scale constitutive models for the unsaturated hydrodynamic properties of geologic media are analyzed, with particular emphasis on the effect of structural characteristics of the medium. The report further reviews and analyzes large-scale hydrogeologic spatial variability from aquifer data, unsaturated soil data, and fracture network data gathered from the literature. Finally, various modeling strategies toward large-scale flow simulations are assessed, including direct high-resolution simulation, and coarse-scale simulation based on auxiliary hydrodynamic models such as single equivalent continuum and dual-porosity continuum. The roles of anisotropy, fracturing, and broad-band spatial variability are emphasized. 252 refs

  9. The Multi-Scale Model Approach to Thermohydrology at Yucca Mountain

    International Nuclear Information System (INIS)

    Glascoe, L; Buscheck, T A; Gansemer, J; Sun, Y

    2002-01-01

    The Multi-Scale Thermo-Hydrologic (MSTH) process model is a modeling abstraction of the thermal hydrology (TH) of the potential Yucca Mountain repository at multiple spatial scales. The MSTH model as described herein was used for the Supplemental Science and Performance Analyses (BSC, 2001) and is documented in detail in CRWMS M and O (2000) and Glascoe et al. (2002). The model has been validated against a nested grid model in Buscheck et al. (In Review). The MSTH approach is necessary for modeling thermal hydrology at Yucca Mountain for two reasons: (1) varying levels of detail are necessary at different spatial scales to capture important TH processes and (2) a fully-coupled TH model of the repository which includes the necessary spatial detail is computationally prohibitive. The MSTH model consists of six "submodels" which are combined in a manner to reduce the complexity of modeling where appropriate. The coupling of these models allows for appropriate consideration of mountain-scale thermal hydrology along with the thermal hydrology of drift-scale discrete waste packages of varying heat load. Two stages are involved in the MSTH approach: first, the execution of submodels, and second, the assembly of submodels using the Multi-scale Thermohydrology Abstraction Code (MSTHAC). MSTHAC assembles the submodels in a five-step process culminating in the TH model output of discrete waste packages including a mountain-scale influence.

  10. A novel approach to the automatic control of scale model airplanes

    OpenAIRE

    Hua , Minh-Duc; Pucci , Daniele; Hamel , Tarek; Morin , Pascal; Samson , Claude

    2014-01-01

    This paper explores a new approach to the control of scale model airplanes as an extension of previous studies addressing the case of vehicles presenting a symmetry of revolution about the thrust axis. The approach is intrinsically nonlinear and, with respect to other contributions on aircraft nonlinear control, no small attack angle assumption is made in order to enlarge the controller's operating domain. Simulation results conducted on a simplified, but not overly ...

  11. A large-scale multi-objective flights conflict avoidance approach supporting 4D trajectory operation

    OpenAIRE

    Guan, Xiangmin; Zhang, Xuejun; Lv, Renli; Chen, Jun; Weiszer, Michal

    2017-01-01

    Recently, long-term conflict avoidance approaches based on large-scale flight scheduling have attracted much attention due to their ability to provide solutions from a global point of view. However, the current approaches, which focus only on a single objective with the aim of minimizing the total delay and the number of conflicts, cannot provide the controllers with a variety of optional solutions representing different trade-offs. Furthermore, the flight track error is often overlooked i...

  12. Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms

    KAUST Repository

    Hasanov, Khalid; Quintin, Jean-Noël; Lastovetsky, Alexey

    2014-01-01

    Many state-of-the-art parallel algorithms, which are widely used in scientific applications executed on high-end computing systems, were designed in the twentieth century with relatively small-scale parallelism in mind. Indeed, while in 1990s a system with few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to optimization of message-passing parallel algorithms for execution on large-scale distributed-memory systems.

  13. A multiscale analytical approach for bone remodeling simulations : linking scales from collagen to trabeculae

    NARCIS (Netherlands)

    Colloca, M.; Blanchard, R.; Hellmich, C.; Ito, K.; Rietbergen, van B.

    2014-01-01

    Bone is a dynamic and hierarchical porous material whose spatial and temporal mechanical properties can vary considerably due to differences in its microstructure and due to remodeling. Hence, a multiscale analytical approach, which combines bone structural information at multiple scales to the

  14. How efficient is sliding-scale insulin therapy? Problems with a 'cookbook' approach in hospitalized patients.

    Science.gov (United States)

    Katz, C M

    1991-04-01

    Sliding-scale insulin therapy is seldom the best way to treat hospitalized diabetic patients. In the few clinical situations in which it is appropriate, close attention to details and solidly based scientific principles is absolutely necessary. Well-organized alternative approaches to insulin therapy usually offer greater efficiency and effectiveness.

  15. Biocultural approaches to well-being and sustainability indicators across scales

    Science.gov (United States)

    Eleanor J. Sterling; Christopher Filardi; Anne Toomey; Amanda Sigouin; Erin Betley; Nadav Gazit; Jennifer Newell; Simon Albert; Diana Alvira; Nadia Bergamini; Mary Blair; David Boseto; Kate Burrows; Nora Bynum; Sophie Caillon; Jennifer E. Caselle; Joachim Claudet; Georgina Cullman; Rachel Dacks; Pablo B. Eyzaguirre; Steven Gray; James Herrera; Peter Kenilorea; Kealohanuiopuna Kinney; Natalie Kurashima; Suzanne Macey; Cynthia Malone; Senoveva Mauli; Joe McCarter; Heather McMillen; Pua’ala Pascua; Patrick Pikacha; Ana L. Porzecanski; Pascale de Robert; Matthieu Salpeteur; Myknee Sirikolo; Mark H. Stege; Kristina Stege; Tamara Ticktin; Ron Vave; Alaka Wali; Paige West; Kawika B. Winter; Stacy D. Jupiter

    2017-01-01

    Monitoring and evaluation are central to ensuring that innovative, multi-scale, and interdisciplinary approaches to sustainability are effective. The development of relevant indicators for local sustainable management outcomes, and the ability to link these to broader national and international policy targets, are key challenges for resource managers, policymakers, and...

  16. Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms

    KAUST Repository

    Hasanov, Khalid

    2014-03-04

    © 2014, Springer Science+Business Media New York. Many state-of-the-art parallel algorithms, which are widely used in scientific applications executed on high-end computing systems, were designed in the twentieth century with relatively small-scale parallelism in mind. Indeed, while in 1990s a system with few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to optimization of message-passing parallel algorithms for execution on large-scale distributed-memory systems. The idea is to reduce the communication cost by introducing hierarchy and hence more parallelism in the communication scheme. We apply this approach to SUMMA, the state-of-the-art parallel algorithm for matrix–matrix multiplication, and demonstrate both theoretically and experimentally that the modified Hierarchical SUMMA significantly improves the communication cost and the overall performance on large-scale platforms.
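The hierarchical idea in the abstract above can be illustrated without MPI: SUMMA accumulates C as a sum of block outer products, one per broadcast step, and the hierarchical variant groups those steps so that partial sums are first combined within a group (standing in for intra-group communication) and the group results are then reduced (inter-group communication). A minimal serial NumPy sketch of this grouping; the block sizes and grouping factor are illustrative choices, not the paper's:

```python
import numpy as np

def summa_outer_products(A, B, nb):
    """Serial illustration of SUMMA: C is accumulated as a sum of
    block outer products A[:, k:k+nb] @ B[k:k+nb, :], one per step."""
    C = np.zeros((A.shape[0], B.shape[1]))
    for k in range(0, A.shape[1], nb):
        C += A[:, k:k+nb] @ B[k:k+nb, :]
    return C

def hierarchical_summa(A, B, nb, group):
    """Two-level variant: the steps are processed in groups of `group`;
    each group forms a partial product first (mimicking intra-group
    communication), and partials are then combined (inter-group)."""
    steps = list(range(0, A.shape[1], nb))
    partials = []
    for g in range(0, len(steps), group):
        P = np.zeros((A.shape[0], B.shape[1]))
        for k in steps[g:g+group]:
            P += A[:, k:k+nb] @ B[k:k+nb, :]   # intra-group accumulation
        partials.append(P)
    return sum(partials)                        # inter-group reduction
```

Both variants compute exactly A @ B; the hierarchical grouping only changes the order in which partial sums would be communicated and reduced.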

  17. Approaching a universal scaling relationship between fracture stiffness and fluid flow

    Science.gov (United States)

    Pyrak-Nolte, Laura J.; Nolte, David D.

    2016-02-01

    A goal of subsurface geophysical monitoring is the detection and characterization of fracture alterations that affect the hydraulic integrity of a site. Achievement of this goal requires a link between the mechanical and hydraulic properties of a fracture. Here we present a scaling relationship between fluid flow and fracture-specific stiffness that approaches universality. Fracture-specific stiffness is a mechanical property dependent on fracture geometry that can be monitored remotely using seismic techniques. A Monte Carlo numerical approach demonstrates that a scaling relationship exists between flow and stiffness for fractures with strongly correlated aperture distributions, and continues to hold for fractures deformed by applied stress and by chemical erosion as well. This new scaling relationship provides a foundation for simulating changes in fracture behaviour as a function of stress or depth in the Earth and will aid risk assessment of the hydraulic integrity of subsurface sites.

  18. A Confirmatory Factor Analysis on the Attitude Scale of Constructivist Approach for Science Teachers

    Directory of Open Access Journals (Sweden)

    E. Evrekli

    2010-11-01

    Underlining the importance of teachers for the constructivist approach, the present study attempts to develop the “Attitude Scale of Constructivist Approach for Science Teachers (ASCAST)”. The pre-applications of the scale were administered to a total of 210 science teachers; however, the data obtained from 5 teachers were excluded from the analysis. As a result of the analysis of the data obtained from the pre-applications, it was found that the scale could have a single-factor structure, which was tested using confirmatory factor analysis. In the initial confirmatory factor analysis, the fit values were examined and found to be low. Subsequently, by examining the modification indices, error covariance was added between items 23 and 24 and the model was tested once again. The added error covariance led to a significant improvement in the model, producing fit values within acceptable limits. Thus, it was concluded that the scale could be employed with a single factor. The explained variance for the scale developed with a single-factor structure was calculated to be 50.43% and its reliability was found to be .93. The results obtained suggest that the scale is reliable and valid and could be used in further studies.
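A reliability of the kind reported above (.93 for a single-factor scale) is conventionally computed as Cronbach's alpha; that this was the coefficient used is an assumption. A minimal sketch of the computation, not the authors' code:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total score
    return k / (k - 1) * (1.0 - item_vars / total_var)
```

For perfectly inter-correlated items the coefficient equals 1; values above roughly .9, as reported here, indicate high internal consistency.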

  19. A feasible approach to implement a commercial scale CANDU fuel manufacturing plant in Egypt

    International Nuclear Information System (INIS)

    El-Shehawy, I.; El-Sharaky, M.; Yasso, K.; Selim, I.; Graham, N.; Newington, D.

    1995-01-01

    Many planning scenarios have been examined to assess and evaluate the economic estimates for implementing a commercial scale CANDU fuel manufacturing plant in Egypt. The cost estimates indicated a strong influence of the annual capital costs on the total fuel manufacturing cost; this is particularly evident in a small initial plant where the proposed design output is only sufficient to supply reload fuel for a single CANDU-6 reactor. A modular approach is investigated as a possible way to reduce the capital costs of a small initial fuel plant. In this approach the plant would perform fuel assembly operations only, and the remainder of the plant would be constructed and equipped in stages when higher production volumes can justify the capital expense. Such an approach seems economically feasible for implementing a small scale CANDU fuel manufacturing plant in developing countries such as Egypt, and further improvement could be achieved over the years of operation. (author)

  20. College students with Internet addiction decrease fewer Behavior Inhibition Scale and Behavior Approach Scale when getting online.

    Science.gov (United States)

    Ko, Chih-Hung; Wang, Peng-Wei; Liu, Tai-Ling; Yen, Cheng-Fang; Chen, Cheng-Sheng; Yen, Ju-Yu

    2015-09-01

    The aim of the study is to compare reinforcement sensitivity between online and offline interaction. The effects of gender, Internet addiction, depression, and online gaming on the difference in reinforcement sensitivity between online and offline interaction were also evaluated. The subjects were 2,258 college students (1,066 men and 1,192 women). They completed the Behavior Inhibition Scale and Behavior Approach Scale (BIS/BAS) according to their experience online or offline. Internet addiction, depression, and Internet activity type were evaluated simultaneously. The results showed that reinforcement sensitivity was lower when interacting online than when interacting offline. College students with Internet addiction showed smaller decreases in BIS and BAS scores after getting online than did others. Higher reward and aversion sensitivity were associated with the risk of Internet addiction. Fun seeking online might contribute to the maintenance of Internet addiction. This suggests that reinforcement sensitivity changes after getting online and may contribute to the risk and maintenance of Internet addiction. © 2014 Wiley Publishing Asia Pty Ltd.

  1. A multi-scale metrics approach to forest fragmentation for Strategic Environmental Impact Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Eunyoung, E-mail: eykim@kei.re.kr [Korea Environment Institute, 215 Jinheungno, Eunpyeong-gu, Seoul 122-706 (Korea, Republic of); Song, Wonkyong, E-mail: wksong79@gmail.com [Suwon Research Institute, 145 Gwanggyo-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do 443-270 (Korea, Republic of); Lee, Dongkun, E-mail: dklee7@snu.ac.kr [Department of Landscape Architecture and Rural System Engineering, Seoul National University, 599 Gwanakro, Gwanak-gu, Seoul 151-921 (Korea, Republic of); Research Institute for Agriculture and Life Sciences, Seoul National University, Seoul 151-921 (Korea, Republic of)

    2013-09-15

    Forests are becoming severely fragmented as a result of land development. South Korea has responded to changing community concerns about environmental issues. The nation has developed and is extending a broad range of tools for use in environmental management. Although legally mandated environmental compliance requirements in South Korea have been implemented to predict and evaluate the impacts of land-development projects, these legal instruments are often insufficient to assess the subsequent impact of development on the surrounding forests. It is especially difficult to examine impacts on multiple (e.g., regional and local) scales in detail. Forest configuration and size, including forest fragmentation by land development, are considered on a regional scale. Moreover, forest structure and composition, including biodiversity, are considered on a local scale in the Environmental Impact Assessment process. Recently, the government amended the Environmental Impact Assessment Act, including the SEA, EIA, and small-scale EIA, to require an integrated approach. Therefore, the purpose of this study was to establish an impact assessment system that minimizes the impacts of land development using an approach that is integrated across multiple scales. This study focused on forest fragmentation due to residential development and road construction sites in selected Congestion Restraint Zones (CRZs) in the Greater Seoul Area of South Korea. Based on a review of multiple-scale impacts, this paper integrates models that assess the impacts of land development on forest ecosystems. The applicability of the integrated model for assessing impacts on forest ecosystems through the SEIA process is considered. On a regional scale, it is possible to evaluate the location and size of a land-development project by considering aspects of forest fragmentation, such as the stability of the forest structure and the degree of fragmentation. On a local scale, land-development projects should

  2. A multi-scale approach of fluvial biogeomorphic dynamics using photogrammetry.

    Science.gov (United States)

    Hortobágyi, Borbála; Corenblit, Dov; Vautier, Franck; Steiger, Johannes; Roussel, Erwan; Burkart, Andreas; Peiry, Jean-Luc

    2017-11-01

    Over the last twenty years, significant technical advances have turned photogrammetry into a relevant tool for the integrated analysis of biogeomorphic cross-scale interactions within vegetated fluvial corridors, which will largely contribute to the development and improvement of self-sustainable river restoration efforts. Here, we propose a cost-effective, easily reproducible approach based on stereophotogrammetry and the Structure from Motion (SfM) technique to study feedbacks between fluvial geomorphology and riparian vegetation at different nested spatiotemporal scales. We combined different photogrammetric methods and thus were able to investigate biogeomorphic feedbacks at all three spatial scales (i.e., corridor, alluvial bar and micro-site) and at three different temporal scales, i.e., present, recent past and long-term evolution, on a diversified riparian landscape mosaic. We evaluate the performance and the limits of photogrammetric methods by targeting a set of fundamental parameters necessary to study biogeomorphic feedbacks at each of the three nested spatial scales and, when possible, propose appropriate solutions. The RMSE varies between 0.01 and 2 m depending on spatial scale and photogrammetric method. Despite some remaining difficulties in applying them properly with current technologies under all circumstances in fluvial biogeomorphic studies, e.g. the detection of vegetation density or landform topography under a dense vegetation canopy, we suggest that photogrammetry is a promising instrument for the quantification of biogeomorphic feedbacks at nested spatial scales within river systems and for developing appropriate river management tools and strategies. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. A new scaling approach for the mesoscale simulation of magnetic domain structures using Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Radhakrishnan, B., E-mail: radhakrishnb@ornl.gov; Eisenbach, M.; Burress, T.A.

    2017-06-15

    Highlights: • Developed new scaling technique for dipole–dipole interaction energy. • Developed new scaling technique for exchange interaction energy. • Used scaling laws to extend atomistic simulations to micrometer length scale. • Demonstrated transition from mono-domain to vortex magnetic structure. • Simulated domain wall width and transition length scale agree with experiments. - Abstract: A new scaling approach has been proposed for the spin exchange and the dipole–dipole interaction energy as a function of the system size. The computed scaling laws are used in atomistic Monte Carlo simulations of magnetic moment evolution to predict the transition from a single domain to a vortex structure as the system size increases. The width of a 180° domain wall extracted from the simulated structures is in close agreement with experimentally measured values for an Fe–Si alloy. The transition size from a single domain to a vortex structure is also in close agreement with theoretically predicted and experimentally measured values for Fe.
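The moment-evolution step described above is a Metropolis Monte Carlo update whose energies carry the size-dependent scaling factors. A toy sketch with nearest-neighbour Ising spins and a single coupling J standing in for the scaled exchange energy; the actual scaling laws, Heisenberg moments, and dipole term are the paper's, so everything here is a simplified stand-in:

```python
import numpy as np

def metropolis_ising(L=16, J=1.0, T=0.5, sweeps=200, seed=1):
    """Toy Metropolis evolution of an L x L Ising lattice with periodic
    boundaries.  In the spirit of the abstract, J would carry a
    system-size-dependent scaling factor (not reproduced here).
    Returns (initial energy, final energy)."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(L, L))

    def energy(s):
        # each bond counted once via right and down neighbours
        return -J * np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1)))

    E0 = energy(s)
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(L, size=2)
            nn = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
            dE = 2 * J * s[i, j] * nn          # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j] = -s[i, j]             # accept the flip
    return E0, energy(s)
```

At low temperature the lattice relaxes from a random configuration toward an ordered, low-energy state, which is the single-domain regime the abstract contrasts with the vortex structure at larger sizes.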

  4. Impact of fixed-mobile convergence

    DEFF Research Database (Denmark)

    Pachnicke, Stephan; Andrus, Bogdan-Mihai; Autenrieth, Achim

    2016-01-01

    Fixed-Mobile Convergence (FMC) is a very trendy concept as it promises integration of the previously separated fixed access network and the mobile network. From this novel approach telecommunication operators expect significant cost savings and performance improvements. FMC can be separated...

  5. A multi-scale relevance vector regression approach for daily urban water demand forecasting

    Science.gov (United States)

    Bai, Yun; Wang, Pu; Li, Chuan; Xie, Jingjing; Wang, Yin

    2014-09-01

    Water is one of the most important resources for economic and social developments. Daily water demand forecasting is an effective measure for scheduling urban water facilities. This work proposes a multi-scale relevance vector regression (MSRVR) approach to forecast daily urban water demand. The approach uses the stationary wavelet transform to decompose historical time series of daily water supplies into different scales. At each scale, the wavelet coefficients are used to train a machine-learning model using the relevance vector regression (RVR) method. The estimated coefficients of the RVR outputs for all of the scales are employed to reconstruct the forecasting result through the inverse wavelet transform. To better facilitate the MSRVR forecasting, the chaos features of the daily water supply series are analyzed to determine the input variables of the RVR model. In addition, an adaptive chaos particle swarm optimization algorithm is used to find the optimal combination of the RVR model parameters. The MSRVR approach is evaluated using real data collected from two waterworks and is compared with recently reported methods. The results show that the proposed MSRVR method can forecast daily urban water demand much more precisely in terms of the normalized root-mean-square error, correlation coefficient, and mean absolute percentage error criteria.
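The MSRVR pipeline above (decompose into scales, model each scale, recombine) can be sketched compactly. In this stand-in the stationary wavelet transform is replaced by a crude moving-average split and the relevance vector regression by a least-squares AR model, so only the multi-scale structure of the method is illustrated; both substitutions are mine, not the paper's:

```python
import numpy as np

def two_scale_split(x, w=7):
    """Crude stand-in for a wavelet decomposition: a moving-average
    'approximation' plus the residual 'detail'.  trend + detail == x."""
    trend = np.convolve(x, np.ones(w) / w, mode="same")
    return trend, x - trend

def ar_forecast(x, p=3):
    """One-step-ahead forecast from a least-squares AR(p) fit."""
    X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return x[-p:] @ coef

def multiscale_forecast(x):
    """Forecast each scale separately, then recombine the forecasts
    (playing the role of the inverse wavelet transform)."""
    trend, detail = two_scale_split(x)
    return ar_forecast(trend) + ar_forecast(detail)
```

The decomposition is exactly invertible by construction, so recombining the per-scale forecasts mirrors the reconstruction step of the MSRVR approach.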

  6. Practice-oriented optical thin film growth simulation via multiple scale approach

    Energy Technology Data Exchange (ETDEWEB)

    Turowski, Marcus, E-mail: m.turowski@lzh.de [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); Jupé, Marco [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); QUEST: Centre of Quantum Engineering and Space-Time Research, Leibniz Universität Hannover (Germany); Melzig, Thomas [Fraunhofer Institute for Surface Engineering and Thin Films IST, Bienroder Weg 54e, Braunschweig 30108 (Germany); Moskovkin, Pavel [Research Centre for Physics of Matter and Radiation (PMR-LARN), University of Namur (FUNDP), 61 rue de Bruxelles, Namur 5000 (Belgium); Daniel, Alain [Centre for Research in Metallurgy, CRM, 21 Avenue du bois Saint Jean, Liège 4000 (Belgium); Pflug, Andreas [Fraunhofer Institute for Surface Engineering and Thin Films IST, Bienroder Weg 54e, Braunschweig 30108 (Germany); Lucas, Stéphane [Research Centre for Physics of Matter and Radiation (PMR-LARN), University of Namur (FUNDP), 61 rue de Bruxelles, Namur 5000 (Belgium); Ristau, Detlev [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); QUEST: Centre of Quantum Engineering and Space-Time Research, Leibniz Universität Hannover (Germany)

    2015-10-01

    Simulation of the coating process is a very promising approach for understanding thin film formation. Nevertheless, this complex matter cannot be covered by a single simulation technique. To consider all mechanisms and processes influencing the optical properties of the growing thin films, various common theoretical methods have been combined into a multi-scale model approach. The simulation techniques have been selected in order to describe all processes in the coating chamber, especially the various mechanisms of thin film growth, and to enable the analysis of the resulting structural as well as optical and electronic layer properties. All methods are merged with adapted communication interfaces to achieve optimum compatibility of the different approaches and to generate physically meaningful results. The present contribution offers an approach for the full simulation of an Ion Beam Sputtering (IBS) coating process combining direct simulation Monte Carlo, classical molecular dynamics, kinetic Monte Carlo, and density functional theory. The simulation is performed, by way of example, for an existing IBS coating plant in order to validate the developed multi-scale approach. Finally, the modeled results are compared to experimental data. - Highlights: • A model approach for simulating an Ion Beam Sputtering (IBS) process is presented. • In order to combine the different techniques, optimized interfaces are developed. • The transport of atomic species in the coating chamber is calculated. • We modeled structural and optical film properties based on simulated IBS parameters. • The modeled and the experimental refractive index data fit very well.

  7. A new approach for modeling and analysis of molten salt reactors using SCALE

    Energy Technology Data Exchange (ETDEWEB)

    Powers, J. J.; Harrison, T. J.; Gehin, J. C. [Oak Ridge National Laboratory, PO Box 2008, Oak Ridge, TN 37831-6172 (United States)

    2013-07-01

    The Office of Fuel Cycle Technologies (FCT) of the DOE Office of Nuclear Energy is performing an evaluation and screening of potential fuel cycle options to provide information that can support future research and development decisions based on the more promising fuel cycle options. [1] A comprehensive set of fuel cycle options are put into evaluation groups based on physics and fuel cycle characteristics. Representative options for each group are then evaluated to provide the quantitative information needed to support the valuation of criteria and metrics used for the study. Included in this set of representative options are Molten Salt Reactors (MSRs), the analysis of which requires several capabilities that are not adequately supported by the current version of SCALE or other neutronics depletion software packages (e.g., continuous online feed and removal of materials). A new analysis approach was developed for MSR analysis using SCALE by taking user-specified MSR parameters and performing a series of SCALE/TRITON calculations to determine the resulting equilibrium operating conditions. This paper provides a detailed description of the new analysis approach, including the modeling equations and radiation transport models used. Results for an MSR fuel cycle option of interest are also provided to demonstrate the application to a relevant problem. The current implementation is through a utility code that uses the two-dimensional (2D) TRITON depletion sequence in SCALE 6.1 but could be readily adapted to three-dimensional (3D) TRITON depletion sequences or other versions of SCALE. (authors)
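The iterate-to-equilibrium strategy can be pictured with a toy single-nuclide balance under continuous feed and first-order removal, dN/dt = feed − (λ + r)N: repeated "depletion steps" are run until the inventory stops changing, analogous to the series of SCALE/TRITON calculations described above. This is a toy model for illustration only; the one-nuclide physics and all names are my assumptions, not SCALE itself:

```python
import math

def equilibrium_inventory(feed, lam, removal, n0=0.0, dt=10.0, tol=1e-9):
    """Toy fixed-point iteration toward the equilibrium inventory of a
    single nuclide with continuous feed and first-order loss:
        dN/dt = feed - (lam + removal) * N
    Each loop iteration plays the role of one depletion step; iteration
    stops when the inventory no longer changes between steps."""
    k = lam + removal
    n_inf = feed / k                 # analytic equilibrium the iteration approaches
    n = n0
    while True:
        # exact solution of the ODE over one step of length dt
        n_next = n_inf + (n - n_inf) * math.exp(-k * dt)
        if abs(n_next - n) < tol:
            return n_next
        n = n_next
```

With feed = 2.0 and λ + r = 0.5, the iteration converges to the analytic equilibrium feed/(λ + r) = 4.0 in a handful of steps.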

  8. A new approach for modeling and analysis of molten salt reactors using SCALE

    International Nuclear Information System (INIS)

    Powers, J. J.; Harrison, T. J.; Gehin, J. C.

    2013-01-01

    The Office of Fuel Cycle Technologies (FCT) of the DOE Office of Nuclear Energy is performing an evaluation and screening of potential fuel cycle options to provide information that can support future research and development decisions based on the more promising fuel cycle options. [1] A comprehensive set of fuel cycle options are put into evaluation groups based on physics and fuel cycle characteristics. Representative options for each group are then evaluated to provide the quantitative information needed to support the valuation of criteria and metrics used for the study. Included in this set of representative options are Molten Salt Reactors (MSRs), the analysis of which requires several capabilities that are not adequately supported by the current version of SCALE or other neutronics depletion software packages (e.g., continuous online feed and removal of materials). A new analysis approach was developed for MSR analysis using SCALE by taking user-specified MSR parameters and performing a series of SCALE/TRITON calculations to determine the resulting equilibrium operating conditions. This paper provides a detailed description of the new analysis approach, including the modeling equations and radiation transport models used. Results for an MSR fuel cycle option of interest are also provided to demonstrate the application to a relevant problem. The current implementation is through a utility code that uses the two-dimensional (2D) TRITON depletion sequence in SCALE 6.1 but could be readily adapted to three-dimensional (3D) TRITON depletion sequences or other versions of SCALE. (authors)

  9. Scale-Dependence of Processes Structuring Dung Beetle Metacommunities Using Functional Diversity and Community Deconstruction Approaches

    Science.gov (United States)

    da Silva, Pedro Giovâni; Hernández, Malva Isabel Medina

    2015-01-01

    Community structure is driven by mechanisms linked to environmental, spatial and temporal processes, which have been successfully addressed using metacommunity framework. The relative importance of processes shaping community structure can be identified using several different approaches. Two approaches that are increasingly being used are functional diversity and community deconstruction. Functional diversity is measured using various indices that incorporate distinct community attributes. Community deconstruction is a way to disentangle species responses to ecological processes by grouping species with similar traits. We used these two approaches to determine whether they are improvements over traditional measures (e.g., species composition, abundance, biomass) for identification of the main processes driving dung beetle (Scarabaeinae) community structure in a fragmented mainland-island landscape in southern Brazilian Atlantic Forest. We sampled five sites in each of four large forest areas, two on the mainland and two on the island. Sampling was performed in 2012 and 2013. We collected abundance and biomass data from 100 sampling points distributed over 20 sampling sites. We studied environmental, spatial and temporal effects on dung beetle community across three spatial scales, i.e., between sites, between areas and mainland-island. The γ-diversity based on species abundance was mainly attributed to β-diversity as a consequence of the increase in mean α- and β-diversity between areas. Variation partitioning on abundance, biomass and functional diversity showed scale-dependence of processes structuring dung beetle metacommunities. We identified two major groups of responses among 17 functional groups. In general, environmental filters were important at both local and regional scales. Spatial factors were important at the intermediate scale. Our study supports the notion of scale-dependence of environmental, spatial and temporal processes in the distribution

  10. Scales

    Science.gov (United States)

    Scales are a visible peeling or flaking of outer skin layers. These layers are called the stratum ... Scales may be caused by dry skin, certain inflammatory skin conditions, or infections. Examples of disorders that ...

  11. Modeling Impact-induced Failure of Polysilicon MEMS: A Multi-scale Approach.

    Science.gov (United States)

    Mariani, Stefano; Ghisi, Aldo; Corigliano, Alberto; Zerbini, Sarah

    2009-01-01

    Failure of packaged polysilicon micro-electro-mechanical systems (MEMS) subjected to impacts involves phenomena occurring at several length-scales. In this paper we present a multi-scale finite element approach to properly allow for: (i) the propagation of stress waves inside the package; (ii) the dynamics of the whole MEMS; (iii) the spreading of micro-cracking in the failing part(s) of the sensor. Through Monte Carlo simulations, some effects of polysilicon micro-structure on the failure mode are elucidated.

  12. Time-dependent approach to collisional ionization using exterior complex scaling

    International Nuclear Information System (INIS)

    McCurdy, C. William; Horner, Daniel A.; Rescigno, Thomas N.

    2002-01-01

    We present a time-dependent formulation of the exterior complex scaling method that has previously been used to treat electron-impact ionization of the hydrogen atom accurately at low energies. The time-dependent approach solves a driven Schroedinger equation, and scales more favorably with the number of electrons than the original formulation. The method is demonstrated in calculations for breakup processes in two dimensions (2D) and three dimensions for systems involving short-range potentials and in 2D for electron-impact ionization in the Temkin-Poet model for electron-hydrogen atom collisions

  13. A probabilistic approach to quantifying spatial patterns of flow regimes and network-scale connectivity

    Science.gov (United States)

    Garbin, Silvia; Alessi Celegon, Elisa; Fanton, Pietro; Botter, Gianluca

    2017-04-01

    The temporal variability of river flow regime is a key feature structuring and controlling fluvial ecological communities and ecosystem processes. In particular, streamflow variability induced by climate/landscape heterogeneities or other anthropogenic factors significantly affects the connectivity between streams with notable implication for river fragmentation. Hydrologic connectivity is a fundamental property that guarantees species persistence and ecosystem integrity in riverine systems. In riverine landscapes, most ecological transitions are flow-dependent and the structure of flow regimes may affect ecological functions of endemic biota (i.e., fish spawning or grazing of invertebrate species). Therefore, minimum flow thresholds must be guaranteed to support specific ecosystem services, like fish migration, aquatic biodiversity and habitat suitability. In this contribution, we present a probabilistic approach aiming at a spatially-explicit, quantitative assessment of hydrologic connectivity at the network-scale as derived from river flow variability. Dynamics of daily streamflows are estimated based on catchment-scale climatic and morphological features, integrating a stochastic, physically based approach that accounts for the stochasticity of rainfall with a water balance model and a geomorphic recession flow model. The non-exceedance probability of ecologically meaningful flow thresholds is used to evaluate the fragmentation of individual stream reaches, and the ensuing network-scale connectivity metrics. A multi-dimensional Poisson Process for the stochastic generation of rainfall is used to evaluate the impact of climate signature on reach-scale and catchment-scale connectivity. The analysis shows that streamflow patterns and network-scale connectivity are influenced by the topology of the river network and the spatial variability of climatic properties (rainfall, evapotranspiration). The framework offers a robust basis for the prediction of the impact of
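The central quantity above, the non-exceedance probability of an ecologically meaningful flow threshold, and a network-style connectivity index built from it can be sketched as follows. The per-reach independence assumed in `path_connectivity` is my simplification; the paper works on the full river-network topology:

```python
import numpy as np

def non_exceedance(flows, q_min):
    """Empirical probability that daily flow falls below the ecological
    threshold q_min (the reach is then considered fragmented)."""
    flows = np.asarray(flows, dtype=float)
    return np.mean(flows < q_min)

def path_connectivity(reach_flows, q_min):
    """Toy connectivity index for a chain of reaches: the product of
    each reach's probability of exceeding the threshold (assumes
    independence between reaches, which is a simplification)."""
    return float(np.prod([1.0 - non_exceedance(f, q_min) for f in reach_flows]))
```

For a reach whose daily flows fall below the threshold half the time, the non-exceedance probability is 0.5, and two such reaches in series give a chain connectivity of 0.25 under the independence assumption.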

  14. An Axiomatic Analysis Approach for Large-Scale Disaster-Tolerant Systems Modeling

    Directory of Open Access Journals (Sweden)

    Theodore W. Manikas

    2011-02-01

    Disaster tolerance in computing and communications systems refers to the ability to maintain a degree of functionality throughout the occurrence of a disaster. We accomplish the incorporation of disaster tolerance within a system by simulating various threats to the system operation and identifying areas for system redesign. Unfortunately, extremely large systems are not amenable to comprehensive simulation studies due to the large computational complexity requirements. To address this limitation, an axiomatic approach that decomposes a large-scale system into smaller subsystems is developed that allows the subsystems to be independently modeled. This approach is implemented using a data communications network system example. The results indicate that the decomposition approach produces simulation responses that are similar to the full system approach, but with greatly reduced simulation time.

  15. Dissolution profiles of perindopril and indapamide in their fixed-dose formulations by a new HPLC method and different mathematical approaches

    Directory of Open Access Journals (Sweden)

    Gumieniczek Anna

    2015-09-01

    A new HPLC method was introduced and validated for the simultaneous determination of perindopril and indapamide. The validation procedure included specificity, sensitivity, robustness, stability, linearity, precision and accuracy. The method was used for the dissolution testing of perindopril and indapamide in three fixed-dose formulations. The dissolution procedure was optimized using different media, different buffer pH values, surfactants, paddle speeds and temperatures. Similarity of dissolution profiles was estimated using different model-independent and model-dependent methods and, additionally, by principal component analysis (PCA). Also, some kinetic models were checked for the dissolved amounts of drugs as a function of time.
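Among model-independent methods for comparing dissolution profiles, the difference factor f1 and similarity factor f2 are the standard choices; that these specific metrics were among those used here is an assumption, but the formulas themselves are the conventional ones:

```python
import math

def f2_similarity(R, T):
    """Similarity factor f2 between reference (R) and test (T)
    percent-dissolved profiles at matched time points:
    f2 = 50 * log10(100 / sqrt(1 + mean squared difference))."""
    n = len(R)
    msd = sum((r - t) ** 2 for r, t in zip(R, T)) / n
    return 50.0 * math.log10(100.0 / math.sqrt(1.0 + msd))

def f1_difference(R, T):
    """Difference factor f1: summed absolute differences relative to
    the summed reference values, as a percentage."""
    return 100.0 * sum(abs(r - t) for r, t in zip(R, T)) / sum(R)
```

Identical profiles give f2 = 100 and f1 = 0; f2 ≥ 50 (roughly a mean point-wise difference below 10%) is the usual acceptance criterion for declaring two profiles similar.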

  16. Environmental Remediation Full-Scale Implementation: Back to Simple Microbial Massive Culture Approaches

    Directory of Open Access Journals (Sweden)

    Agung Syakti

    2010-10-01

Full Text Available Bioaugmentation and biostimulation approaches to the bioremediation of contaminated soil were investigated and implemented at field scale. We combined the two approaches by massively culturing petrophilic indigenous microorganisms from chronically contaminated soil enriched with mixed manure. With these methods, bioremediation performance showed promising results in removing petroleum hydrocarbons, compared with approaches based on metabolic by-products such as biosurfactants, specific enzymes and other extracellular products, which are considered difficult to apply and add to costs.

  17. The Universal Patient Centredness Questionnaire: scaling approaches to reduce positive skew

    Directory of Open Access Journals (Sweden)

    Bjertnaes O

    2016-11-01

Full Text Available Oyvind Bjertnaes, Hilde Hestad Iversen, Andrew M Garratt Unit for Patient-Reported Quality, Norwegian Institute of Public Health, Oslo, Norway Purpose: Surveys of patients’ experiences typically show results that are indicative of positive experiences. Unbalanced response scales have reduced positive skew for responses to items within the Universal Patient Centeredness Questionnaire (UPC-Q). The objective of this study was to compare the unbalanced response scale with another unbalanced approach to scaling to assess whether the positive skew might be further reduced. Patients and methods: The UPC-Q was included in a patient experience survey conducted at the ward level at six hospitals in Norway in 2015. The postal survey included two reminders to nonrespondents. For patients in the first month of inclusion, UPC-Q items had standard scaling: poor, fairly good, good, very good, and excellent. For patients in the second month, the scaling was more positive: poor, good, very good, exceptionally good, and excellent. The effect of scaling on UPC-Q scores was tested with independent samples t-tests and multilevel linear regression analysis, the latter controlling for the hierarchical structure of the data and known predictors of patient-reported experiences. Results: The response rate was 54.6% (n=4,970). Significantly lower scores were found for all items of the more positively worded scale: the UPC-Q total score difference was 7.9 (P<0.001), on a scale from 0 to 100 where 100 is the best possible score. Differences between the four items of the UPC-Q ranged from 7.1 (P<0.001) to 10.4 (P<0.001). Multivariate multilevel regression analysis confirmed the difference between the response groups after controlling for other background variables; the UPC-Q total score difference estimate was 8.3 (P<0.001). Conclusion: The more positively worded scaling significantly lowered the mean scores, potentially increasing the sensitivity of the UPC-Q to identify differences over
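The first analysis step, an independent samples t-test between the two scaling groups, can be sketched minimally; the study does not state which t-test variant or software it used, so the following Welch t statistic with made-up scores on the 0-100 scale is purely illustrative:

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples with possibly
    unequal variances; positive when mean(a) > mean(b)."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)  # sample variances
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    return (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)

# Hypothetical UPC-Q total scores (0-100) for the two scaling groups:
standard = [70, 75, 80, 85, 90]
positive = [62, 68, 73, 79, 83]
print(welch_t(standard, positive))
```

A positive statistic corresponds to the reported pattern of lower mean scores under the more positively worded scale; assessing significance would additionally require the t distribution with Welch's degrees of freedom.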

  18. Approaches to 30 Percent Energy Savings at the Community Scale in the Hot-Humid Climate

    Energy Technology Data Exchange (ETDEWEB)

    Thomas-Rees, S. [Building America Partnership for Improved Residential Construction (BA-PIRC), Cocoa, FL (United States); Beal, D. [Building America Partnership for Improved Residential Construction (BA-PIRC), Cocoa, FL (United States); Martin, E. [Building America Partnership for Improved Residential Construction (BA-PIRC), Cocoa, FL (United States)

    2013-03-01

BA-PIRC has worked with several community-scale builders within the hot-humid climate zone to improve the performance of production, or community-scale, housing. Tommy Williams Homes (Gainesville, FL), Lifestyle Homes (Melbourne, FL), and Habitat for Humanity (various locations, FL) have all been continuous partners of the Building America program and are the subjects of this report, which documents achievement of the Building America goal of adopting 30% whole-house energy savings packages at the community scale. Key aspects of this research include determining how to evolve existing energy efficiency packages to produce replicable target savings, identifying builders' technical assistance needs for implementation and working with them to create sustainable quality assurance mechanisms, and documenting commercial viability through neutral cost analysis and market acceptance. This report documents certain barriers builders overcame and the approaches they implemented to accomplish Building America (BA) Program goals that have not already been documented in previous reports.

  19. FEM × DEM: a new efficient multi-scale approach for geotechnical problems with strain localization

    Directory of Open Access Journals (Sweden)

    Nguyen Trung Kien

    2017-01-01

Full Text Available The paper presents a multi-scale Boundary Value Problem (BVP) modeling approach for cohesive-frictional granular materials in the FEM × DEM multi-scale framework. On the DEM side, a 3D model is defined based on the interactions of spherical particles. This DEM model is built through a numerical homogenization process applied to a Volume Element (VE). It is then paired with a Finite Element code. Using this numerical tool that combines two scales within the same framework, we conducted simulations of biaxial and pressuremeter tests on a cohesive-frictional granular medium. In these cases, it is known that strain localization occurs at the macroscopic level, but since FEMs suffer from severe mesh dependency as soon as a shear band starts to develop, the second-gradient regularization technique has been used. As a consequence, the objectivity of the computation with respect to mesh dependency is restored.

  20. Fuel rod fixing system

    International Nuclear Information System (INIS)

    Christiansen, D.W.

    1982-01-01

This is a reusable system for fixing a nuclear reactor fuel rod to a support. An interlock cap is fixed to the fuel rod and an interlock strip is fixed to the support. The interlock cap has two opposed fingers, shaped so that they form a base with a body part. The interlock strip has an extension, shaped so that it can be rigidly fixed to the body part of the base. The fingers of the interlock cap are elastic in bending. To fix the rod, the interlock cap is pushed longitudinally onto the interlock strip, which causes the extension to bend the fingers open and engage the body part of the base. To remove it, the procedure is reversed. (orig.) [de]

  1. “HABITAT MAPPING” GEODATABASE, AN INTEGRATED INTERDISCIPLINARY AND MULTI-SCALE APPROACH FOR DATA MANAGEMENT

    OpenAIRE

    Grande, Valentina; Angeletti, Lorenzo; Campiani, Elisabetta; Conese, Ilaria; Foglini, Federica; Leidi, Elisa; Mercorella, Alessandra; Taviani, Marco

    2016-01-01

Historically, a number of different key concepts and methods dealing with marine habitat classification and mapping have been developed. The EU CoCoNET project provides a new attempt at establishing an integrated approach to the definition of habitats. This scheme combines multi-scale geological and biological data; it consists of three levels (Geomorphological level, Substrate level and Biological level), which in turn are divided into several h...

  2. FIXING HEALTH SYSTEMS / Executive Summary (2008 update ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    2010-12-14

Dec 14, 2010 ... FIXING HEALTH SYSTEMS / Executive Summary (2008 update) ... In several cases, specific approaches recommended by the TEHIP team have been acted upon regionally and internationally, including the ...

  3. A mixed integer programming model and relax and fix heuristics for the production scheduling of small scale soft drink plants

    Directory of Open Access Journals (Sweden)

    Deisemara Ferreira

    2008-01-01

Full Text Available In this paper we propose a mixed integer programming model for the lot sizing and sequencing problem of a soft drink plant with sequence-dependent setup costs and times. The model considers that the bottling stage is the production bottleneck, which is common in small plants with only one production line, and includes minimum lot size constraints for the syrup stage. Variations of the relax and fix heuristic are proposed and compared. A computational study with instances generated from real data of a plant located in the State of São Paulo, Brazil, is also presented. The results show that the approaches are capable of producing better solutions than the ones used by the company.

  4. SU-F-T-208: An Efficient Planning Approach to Posterior Fossa Tumor Bed Boosts Using Proton Pencil Beam Scanning in Fixed-Beam Room

    International Nuclear Information System (INIS)

    Ju, N; Chen, C; Gans, S; Hug, E; Cahlon, O; Chon, B; Tsai, H; Sine, K; Mah, D; Wolden, S; Yeh, B

    2016-01-01

Purpose: A fixed-beam room can be underutilized in a multi-room proton center. We investigated the use of proton pencil beam scanning (PBS) in a fixed-beam room as an alternative for posterior fossa tumor bed (PF-TB) boost treatments, which were usually delivered on a gantry with uniform scanning. Methods: Five patients were treated with craniospinal irradiation (CSI, 23.4 or 36.0 Gy(RBE)) followed by a PF-TB boost to 54 Gy(RBE) with proton beams. Three PF-TB boost plans were generated for each patient: (1) a uniform scanning (US) gantry plan with 4–7 posterior fields shaped with apertures and compensators; (2) a PBS plan using bi-lateral and vertex fields with a 3-mm planning organ-at-risk volume (PRV) expansion around the brainstem; and (3) a PBS plan using the same beam arrangement but replacing the PRV with robust optimization considering a 3-mm setup uncertainty. Results: A concave 54-Gy(RBE) isodose line surrounding the brainstem could be achieved with all three techniques. The mean V95% of the PTV was 99.7% (range: 97.6% to 100%), while the V100% of the PTV ranged from 56.3% to 93.1% depending on the involvement of the brainstem with the PTV. The mean doses received by 0.05 cm³ of the brainstem were effectively identical: 54.0 Gy(RBE), 53.4 Gy(RBE) and 53.3 Gy(RBE) for the US, PBS-with-PRV and robustly optimized PBS plans, respectively. The cochlea mean dose increased by 23% of the prescribed boost dose on average due to the bi-lateral fields used in the PBS plans. Planning time for the PBS plan with PRV was 5–10 times less than for the US plan and the robustly optimized PBS plan. Conclusion: We have demonstrated that a fixed beam with PBS can deliver a dose distribution comparable to a gantry plan using uniform scanning. Planning time can be reduced substantially by using a PRV around the brainstem instead of robust optimization.
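Coverage numbers such as V95% and V100% are standard dose-volume metrics; a minimal sketch of the computation follows (the abstract does not describe the planning system's implementation, and the voxel doses below are invented):

```python
def v_metric(voxel_doses_gy, prescription_gy, threshold_pct):
    """Percentage of a structure's voxels receiving at least
    threshold_pct of the prescription dose, e.g. V95% of the PTV."""
    cutoff = prescription_gy * threshold_pct / 100.0
    hit = sum(1 for d in voxel_doses_gy if d >= cutoff)
    return 100.0 * hit / len(voxel_doses_gy)

# Four hypothetical PTV voxel doses against a 54 Gy(RBE) prescription:
print(v_metric([54.0, 53.0, 51.0, 52.0], 54.0, 95))
```

Real evaluation would run over the full voxelized PTV; the idea is the same.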

  5. SU-F-T-208: An Efficient Planning Approach to Posterior Fossa Tumor Bed Boosts Using Proton Pencil Beam Scanning in Fixed-Beam Room

    Energy Technology Data Exchange (ETDEWEB)

    Ju, N; Chen, C; Gans, S; Hug, E; Cahlon, O; Chon, B; Tsai, H; Sine, K; Mah, D [Procure Treatment Center, Somerset, New Jersey (United States); Wolden, S [Memorial Sloan Kettering Cancer Center, New York, NY (United States); Yeh, B [Mount Sinai Hospital, New York, NY (United States)

    2016-06-15

Purpose: A fixed-beam room can be underutilized in a multi-room proton center. We investigated the use of proton pencil beam scanning (PBS) in a fixed-beam room as an alternative for posterior fossa tumor bed (PF-TB) boost treatments, which were usually delivered on a gantry with uniform scanning. Methods: Five patients were treated with craniospinal irradiation (CSI, 23.4 or 36.0 Gy(RBE)) followed by a PF-TB boost to 54 Gy(RBE) with proton beams. Three PF-TB boost plans were generated for each patient: (1) a uniform scanning (US) gantry plan with 4–7 posterior fields shaped with apertures and compensators; (2) a PBS plan using bi-lateral and vertex fields with a 3-mm planning organ-at-risk volume (PRV) expansion around the brainstem; and (3) a PBS plan using the same beam arrangement but replacing the PRV with robust optimization considering a 3-mm setup uncertainty. Results: A concave 54-Gy(RBE) isodose line surrounding the brainstem could be achieved with all three techniques. The mean V95% of the PTV was 99.7% (range: 97.6% to 100%), while the V100% of the PTV ranged from 56.3% to 93.1% depending on the involvement of the brainstem with the PTV. The mean doses received by 0.05 cm³ of the brainstem were effectively identical: 54.0 Gy(RBE), 53.4 Gy(RBE) and 53.3 Gy(RBE) for the US, PBS-with-PRV and robustly optimized PBS plans, respectively. The cochlea mean dose increased by 23% of the prescribed boost dose on average due to the bi-lateral fields used in the PBS plans. Planning time for the PBS plan with PRV was 5–10 times less than for the US plan and the robustly optimized PBS plan. Conclusion: We have demonstrated that a fixed beam with PBS can deliver a dose distribution comparable to a gantry plan using uniform scanning. Planning time can be reduced substantially by using a PRV around the brainstem instead of robust optimization.

  6. Approaches to 30% Energy Savings at the Community Scale in the Hot-Humid Climate

    Energy Technology Data Exchange (ETDEWEB)

    Thomas-Rees, S.; Beal, D.; Martin, E.; Fonorow, K.

    2013-03-01

BA-PIRC has worked with several community-scale builders within the hot-humid climate zone to improve the performance of production, or community-scale, housing. Tommy Williams Homes (Gainesville, FL), Lifestyle Homes (Melbourne, FL), and Habitat for Humanity (various locations, FL) have all been continuous partners of the BA Program and are the subjects of this report, which documents achievement of the Building America goal of adopting 30% whole-house energy savings packages at the community scale. The scope of this report is to demonstrate achievement of these goals through the documentation of production-scale homes built cost-effectively at the community scale and modeled to reduce whole-house energy use by 30% in the hot-humid climate region. Key aspects of this research include determining how to evolve existing energy efficiency packages to produce replicable target savings, identifying builders' technical assistance needs for implementation and working with them to create sustainable quality assurance mechanisms, and documenting commercial viability through neutral cost analysis and market acceptance. This report documents certain barriers builders overcame and the approaches they implemented to accomplish Building America (BA) Program goals that have not already been documented in previous reports.

  7. Exploring Multi-Scale Spatiotemporal Twitter User Mobility Patterns with a Visual-Analytics Approach

    Directory of Open Access Journals (Sweden)

    Junjun Yin

    2016-10-01

Full Text Available Understanding human mobility patterns is of great importance for urban planning, traffic management, and even marketing campaigns. However, the capability of capturing detailed human movements with fine-grained spatial and temporal granularity is still limited. In this study, we extracted high-resolution mobility data from a collection of over 1.3 billion geo-located Twitter messages. Unlike mobile phone call records, which raise concerns about infringement on individual privacy and have restricted access, the dataset was collected from publicly accessible Twitter data streams. In this paper, we employed a visual-analytics approach to studying multi-scale spatiotemporal Twitter user mobility patterns in the contiguous United States during the year 2014. Our approach included a scalable visual-analytics framework to deliver efficiency and scalability in filtering the large volume of geo-located tweets, modeling and extracting Twitter user movements, generating space-time user trajectories, and summarizing multi-scale spatiotemporal user mobility patterns. We performed a set of statistical analyses to understand Twitter user mobility patterns across multiple spatial scales and temporal ranges. In particular, mobility measured by the displacements and radii of gyration of individuals revealed multi-scale or multi-modal Twitter user mobility patterns. By further studying such patterns in different temporal ranges, we identified both consistency and seasonal fluctuations in the distance-decay effects of the corresponding mobility patterns. Our approach also provides a geo-visualization unit with an interactive 3D virtual-globe web mapping interface for exploratory geo-visual analytics of multi-level spatiotemporal Twitter user movements.
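The displacement and radius-of-gyration measures mentioned above have standard definitions; a minimal planar sketch follows (real longitude/latitude pairs would require geodesic distances, and the helper names are ours):

```python
import math

def radius_of_gyration(points):
    """Radius of gyration of one user's visited locations:
    root-mean-square distance of the points from their centroid."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2
                         for x, y in points) / n)

def displacements(points):
    """Straight-line distances between consecutive positions."""
    return [math.dist(p, q) for p, q in zip(points, points[1:])]

print(radius_of_gyration([(0, 0), (2, 0), (2, 2), (0, 2)]))
```

Distributions of these two quantities over many users are what reveal the multi-scale, multi-modal patterns described in the abstract.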

  8. Linking biogeomorphic feedbacks from ecosystem engineer to landscape scale: a panarchy approach

    Science.gov (United States)

    Eichel, Jana

    2017-04-01

Scale is a fundamental concept in both ecology and geomorphology. Therefore, scale-based approaches are a valuable tool to bridge the disciplines and improve the understanding of feedbacks between geomorphic processes, landforms, material and organisms and ecological processes in biogeomorphology. Yet, linkages between biogeomorphic feedbacks on different scales, e.g. between ecosystem engineering and landscape-scale patterns and dynamics, are not well understood. A panarchy approach sensu Holling et al. (2002) can help to close this research gap and explain how structure and function are created in biogeomorphic ecosystems. Based on results from previous biogeomorphic research in the Turtmann glacier foreland (Switzerland; Eichel, 2017; Eichel et al. 2013, 2016), a panarchy concept is presented for lateral moraine slope biogeomorphic ecosystems. It depicts biogeomorphic feedbacks on different spatiotemporal scales as a set of nested adaptive cycles and links them by 'remember' and 'revolt' connections. On a small scale (cm² - m²; seconds to years), the life cycle of the ecosystem engineer Dryas octopetala L. is considered as an adaptive cycle. Biogeomorphic succession within patches created by geomorphic processes represents an intermediate-scale adaptive cycle (m² - ha, years to decades), while geomorphic and ecologic pattern development at a landscape scale (ha - km², decades to centuries) can be illustrated by an adaptive cycle of 'biogeomorphic patch dynamics' (Eichel, 2017). In the panarchy, revolt connections link the smaller-scale adaptive cycles to larger-scale cycles: on lateral moraine slopes, the development of ecosystem engineer biomass and cover controls the engineering threshold of the biogeomorphic feedback window (Eichel et al., 2016) and therefore the onset of the biogeomorphic phase during biogeomorphic succession. In this phase, engineer patches and biogeomorphic structures can be created in the patch mosaic of the landscape. Remember connections

  9. Properties of small-scale interfacial turbulence from a novel thermography based approach

    Science.gov (United States)

    Schnieders, Jana; Garbe, Christoph

    2013-04-01

Oceans cover nearly two thirds of the earth's surface, and exchange processes between the atmosphere and the ocean are of fundamental environmental importance. At the air-sea interface, complex interaction processes take place on a multitude of scales. Turbulence plays a key role in the coupling of momentum, heat and mass transfer [2]. Here we use high-resolution infrared imagery to visualize near-surface aqueous turbulence. Thermographic data is analyzed from a range of laboratory facilities and experimental conditions, with wind speeds ranging from 1 m s⁻¹ to 7 m s⁻¹ and various surface conditions. The surface heat pattern is formed by distinct structures on two scales: small-scale, short-lived structures termed fish scales, and larger-scale cold streaks that are consistent with the footprints of Langmuir circulations. There are two key characteristics of the observed surface heat patterns: (1) the patterns show characteristic spatial scales, and (2) the structure of the patterns changes with increasing wind stress and surface conditions. We present a new image-processing-based approach to the analysis of the spacing of cold streaks, based on a machine learning approach [4, 1] to classify the thermal footprints of near-surface turbulence. Our random forest classifier is based on classical image-processing features such as gray-value gradients and edge-detecting features. The result is a pixel-wise classification of the surface heat pattern with a subsequent analysis of the streak spacing. This approach has been presented in [3] and can be applied to a wide range of experimental data. In spite of entirely different boundary conditions, the spacing of turbulent cells near the air-water interface seems to match the expected turbulent cell size for flow near a no-slip wall. The analysis of the spacing of cold streaks shows consistent behavior in a range of laboratory facilities when expressed as a function of the water-sided friction velocity, u*. The scales

  10. Disordering scaling and generalized nearest-neighbor approach in the thermodynamics of Lennard-Jones systems

    International Nuclear Information System (INIS)

    Vorob'ev, V.S.

    2003-01-01

We suggest a concept of multiple disordering scaling of the crystalline state. Such a scaling procedure applied to a crystal leads to the liquid and (in the low-density limit) gas states. This approach provides an explanation for the high value of the configurational (common) entropy of liquefied noble gases, which can be deduced from experimental data. We use the generalized nearest-neighbor approach to calculate the free energy and pressure of Lennard-Jones systems after performing this scaling procedure. These thermodynamic functions depend on a single parameter characterizing the disordering. Condensed states of the system (liquid and solid) correspond to small values of this parameter. When the parameter tends to unity, we get an asymptotically exact equation of state for a gas involving the second virial coefficient. A reasonable choice of values for the disordering parameter (ranging between zero and unity) allows us to find the lines of coexistence between different phase states of the Lennard-Jones systems, which are in good agreement with the available experimental data.
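The low-density limit mentioned at the end can be made concrete: the second virial coefficient of the Lennard-Jones fluid follows from the Mayer integral over the pair potential. A quick numerical sketch in reduced units (σ = ε = k_B = 1; the quadrature settings are arbitrary choices of ours):

```python
import math

def lj_b2(t_reduced, r_max=12.0, n=6000):
    """Second virial coefficient of the Lennard-Jones fluid in reduced
    units, B2 = -2*pi * integral_0^inf (exp(-u(r)/T) - 1) r^2 dr,
    by the trapezoidal rule (the r=0 endpoint contributes 0, since
    the Mayer function tends to -1 there)."""
    def integrand(r):
        u = 4.0 * (r ** -12 - r ** -6)        # LJ pair potential
        return (math.exp(-u / t_reduced) - 1.0) * r * r
    h = r_max / n
    total = 0.5 * integrand(r_max)            # right endpoint weight
    for k in range(1, n):
        total += integrand(k * h)
    return -2.0 * math.pi * h * total

# Negative (attraction-dominated) at low T, positive above the Boyle point:
print(lj_b2(1.0), lj_b2(5.0))
```

The sign change with temperature reproduces the familiar behavior of the gas-phase equation of state that the paper's disordering limit recovers.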

  11. The Stokes number approach to support scale-up and technology transfer of a mixing process.

    Science.gov (United States)

    Willemsz, Tofan A; Hooijmaijers, Ricardo; Rubingh, Carina M; Frijlink, Henderik W; Vromans, Herman; van der Voort Maarschalk, Kees

    2012-09-01

Transferring processes between different scales and types of mixers is a common operation in industry. Challenges within this operation include considerable differences in blending conditions between mixer scales and types. Obtaining the correct blending conditions is crucial for the ability to break up agglomerates and achieve the desired blend uniformity. Agglomerate break-up is often an abrasion process. In this study, the abrasion rate potential of agglomerates is described by the Stokes abrasion (St(Abr)) number of the system. The St(Abr) number equals the ratio between the kinetic energy density of the moving powder bed and the work of fracture of the agglomerate. The St(Abr) approach proves to be a useful tool to predict the abrasion of agglomerates during blending when technology is transferred between mixer scales/types. Applying the St(Abr) approach revealed a transition point between parameters that determined agglomerate abrasion. This study gave evidence that (1) below this transition point, agglomerate abrasion is determined by a combination of impeller effects and the kinetic energy density of the powder blend, whereas (2) above this transition point, agglomerate abrasion is mainly determined by the kinetic energy density of the powder blend.
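The defining ratio can be written down directly; the abstract gives only the verbal definition, so the parameterisation below (kinetic energy density as 0.5·ρ·v², work of fracture expressed per unit volume so the ratio is dimensionless) is our assumption:

```python
def stokes_abrasion(bulk_density, bed_velocity, work_of_fracture_density):
    """St(Abr): kinetic energy density of the moving powder bed
    (0.5 * rho * v^2, in J/m^3) divided by the agglomerate's work of
    fracture per unit volume (J/m^3). Symbols and units are assumed,
    not taken from the paper."""
    return 0.5 * bulk_density * bed_velocity ** 2 / work_of_fracture_density

# Hypothetical values: 500 kg/m^3 bed, 2 m/s bed velocity, 1000 J/m^3 fracture work
print(stokes_abrasion(500.0, 2.0, 1000.0))
```

Matching St(Abr) between source and target mixers is then the scale-up criterion the abstract describes.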

  12. Fixed automated spray technology.

    Science.gov (United States)

    2011-04-19

This research project evaluated the construction and performance of Boschung's Fixed Automated Spray Technology (FAST) system. The FAST system automatically sprays de-icing material on the bridge when icing conditions are about to occur. The FA...

  13. Technical and scale efficiency in public and private Irish nursing homes - a bootstrap DEA approach.

    Science.gov (United States)

    Ni Luasa, Shiovan; Dineen, Declan; Zieba, Marta

    2016-10-27

This article provides methodological and empirical insights into the estimation of technical efficiency in the nursing home sector. Focusing on long-stay care and using primary data, we examine technical and scale efficiency in 39 public and 73 private Irish nursing homes by applying input-oriented data envelopment analysis (DEA). We employ robust bootstrap methods to validate our nonparametric DEA scores and to integrate the effects of potential determinants in estimating the efficiencies. Both the homogeneous and two-stage double bootstrap procedures are used to obtain confidence intervals for the bias-corrected DEA scores. Importantly, the application of the double bootstrap approach affords true DEA technical efficiency scores after adjusting for the effects of ownership, size, case-mix, and other determinants such as location and quality. Based on our DEA results for variable-returns-to-scale technology, the average technical efficiency score is 62%, and the mean scale efficiency is 88%, with nearly all units operating on the increasing-returns-to-scale part of the production frontier. Moreover, based on the double bootstrap results, Irish nursing homes are less technically efficient, and more scale efficient, than the conventional DEA estimates suggest. Regarding the efficiency determinants, in terms of ownership we find that private facilities are less efficient than public units. Furthermore, the size of the nursing home has a positive effect, which reinforces our finding that Irish homes produce at increasing returns to scale. Notably, we also find that a tendency towards quality improvements can lead to poorer technical efficiency performance.
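Input-oriented DEA requires solving one linear program per nursing home, but in the single-input, single-output special case the constant-returns technical efficiency score reduces to each unit's productivity relative to the best observed unit. The toy sketch below (with invented data) shows only that special case, not the study's multi-input model or its bootstrap corrections:

```python
def dea_efficiency(inputs, outputs):
    """Technical efficiency under constant returns to scale for the
    single-input, single-output case: each unit's output/input ratio
    divided by the best observed ratio. Scores lie in (0, 1]."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Three hypothetical homes: staffed beds (input) and residents cared for (output)
print(dea_efficiency([10.0, 20.0, 20.0], [5.0, 10.0, 5.0]))
```

A unit scoring 1.0 lies on the empirical production frontier; lower scores measure the proportional input reduction that would bring the unit onto it.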

  14. Fixed mobile convergence handbook

    CERN Document Server

    Ahson, Syed A

    2010-01-01

    From basic concepts to future directions, this handbook provides technical information on all aspects of fixed-mobile convergence (FMC). The book examines such topics as integrated management architecture, business trends and strategic implications for service providers, personal area networks, mobile controlled handover methods, SIP-based session mobility, and supervisory and notification aggregator service. Case studies are used to illustrate technical and systematic implementation of unified and rationalized internet access by fixed-mobile network convergence. The text examines the technolo

  15. A mixed-integer linear programming approach to the reduction of genome-scale metabolic networks.

    Science.gov (United States)

    Röhl, Annika; Bockmayr, Alexander

    2017-01-03

Constraint-based analysis has become a widely used method to study metabolic networks. While some of the associated algorithms can be applied to genome-scale network reconstructions with several thousand reactions, others are limited to small or medium-sized models. In 2015, Erdrich et al. introduced a method called NetworkReducer, which reduces large metabolic networks to smaller subnetworks while preserving a set of biological requirements that can be specified by the user. Already in 2001, Burgard et al. developed a mixed-integer linear programming (MILP) approach for computing minimal reaction sets under a given growth requirement. Here we present an MILP approach for computing minimum subnetworks with the given properties. Minimality (with respect to the number of active reactions) is not guaranteed by NetworkReducer, while the method by Burgard et al. does not allow specifying the different biological requirements. Our procedure is about 5-10 times faster than NetworkReducer and can enumerate all minimum subnetworks when several exist. This allows identifying common reactions that are present in all subnetworks, as well as reactions appearing in alternative pathways. Applying complex analysis methods to genome-scale metabolic networks is often not possible in practice. Thus it may become necessary to reduce the size of the network while keeping important functionalities. We propose an MILP solution to this problem. Compared to previous work, our approach is more efficient and allows computing not only one, but all minimum subnetworks satisfying the required properties.
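What "all minimum subnetworks" means can be shown on a toy network. The MILP performs this enumeration efficiently on genome-scale models, whereas the sketch below brute-forces reaction subsets of increasing size; the four-reaction network and the reachability-style requirement are invented stand-ins for flux-based constraints:

```python
from itertools import combinations

# Toy network: reaction name -> (consumed metabolites, produced metabolites).
# Hypothetical example, not taken from the paper.
NETWORK = {
    "r1": ({"glc"}, {"a"}),
    "r2": ({"a"}, {"b"}),
    "r3": ({"glc"}, {"b"}),
    "r4": ({"b"}, {"biomass"}),
}

def produces_biomass(active):
    """Requirement check: can 'biomass' be reached from 'glc' using
    only the active reactions? (Stand-in for a flux requirement.)"""
    have = {"glc"}
    changed = True
    while changed:
        changed = False
        for r in active:
            ins, outs = NETWORK[r]
            if ins <= have and not outs <= have:
                have |= outs
                changed = True
    return "biomass" in have

def minimum_subnetworks():
    """All minimum-cardinality reaction subsets preserving the
    requirement -- brute-forced here by subset size."""
    rxns = sorted(NETWORK)
    for size in range(1, len(rxns) + 1):
        hits = [set(sub) for sub in combinations(rxns, size)
                if produces_biomass(sub)]
        if hits:
            return hits
    return []

print(minimum_subnetworks())
```

Here {r3, r4} is the unique minimum subnetwork; the larger functional set {r1, r2, r4} is feasible but not minimum, which is exactly the distinction between NetworkReducer's reduced networks and the minimum subnetworks the MILP guarantees.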

  16. A multi-scale spatial approach to address environmental effects of small hydropower development.

    Science.gov (United States)

    McManamay, Ryan A; Samu, Nicole; Kao, Shih-Chieh; Bevelhimer, Mark S; Hetrick, Shelaine C

    2015-01-01

Hydropower development continues to grow worldwide in developed and developing countries. While the ecological and physical responses to dam construction have been well documented, translating this information into planning for hydropower development is extremely difficult. Very few studies have conducted environmental assessments to guide site-specific or widespread hydropower development. Herein, we propose a spatial approach for estimating the environmental effects of hydropower development at multiple scales, as opposed to individual site-by-site assessments (e.g., environmental impact assessment). Because the complex, process-driven effects of future hydropower development may be uncertain or, at best, limited by available information, we invested considerable effort in describing novel approaches to represent environmental concerns using spatial data and in developing the spatial footprint of hydropower infrastructure. We then use two case studies in the US, one at the scale of the conterminous US and another within two adjoining river basins, to examine how environmental concerns can be identified and related to areas of varying energy capacity. We use combinations of reserve-design planning and multi-metric ranking to visualize tradeoffs among environmental concerns and potential energy capacity. Spatial frameworks like the one presented are not meant to replace more in-depth environmental assessments, but to identify information gaps and measure the sustainability of multi-development scenarios to inform policy decisions at the basin or national level. Most importantly, the approach should foster discussions among environmental scientists and stakeholders regarding solutions to optimize energy development and environmental sustainability.

  17. Pesticide fate at regional scale: Development of an integrated model approach and application

    Science.gov (United States)

    Herbst, M.; Hardelauf, H.; Harms, R.; Vanderborght, J.; Vereecken, H.

    As a result of agricultural practice many soils and aquifers are contaminated with pesticides. In order to quantify the side-effects of these anthropogenic impacts on groundwater quality at regional scale, a process-based, integrated model approach was developed. The Richards’ equation based numerical model TRACE calculates the three-dimensional saturated/unsaturated water flow. For the modeling of regional scale pesticide transport we linked TRACE with the plant module SUCROS and with 3DLEWASTE, a hybrid Lagrangian/Eulerian approach to solve the convection/dispersion equation. We used measurements, standard methods like pedotransfer-functions or parameters from literature to derive the model input for the process model. A first-step application of TRACE/3DLEWASTE to the 20 km² test area ‘Zwischenscholle’ for the period 1983-1993 reveals the behaviour of the pesticide isoproturon. The selected test area is characterised by an intense agricultural use and shallow groundwater, resulting in a high vulnerability of the groundwater to pesticide contamination. The model results stress the importance of the unsaturated zone for the occurrence of pesticides in groundwater. Remarkable isoproturon concentrations in groundwater are predicted for locations with thin layered and permeable soils. For four selected locations we used measured piezometric heads to validate predicted groundwater levels. In general, the model results are consistent and reasonable. Thus the developed integrated model approach is seen as a promising tool for the quantification of the agricultural practice impact on groundwater quality.
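
The convection/dispersion transport that 3DLEWASTE solves in three dimensions can be illustrated in one dimension with a simple explicit finite-difference sketch; all parameter values below are hypothetical placeholders, not taken from the study:

```python
import numpy as np

def advect_disperse(c0, v=0.1, D=0.01, dx=0.1, dt=0.01, steps=100):
    """Explicit 1-D solver for dc/dt = D d2c/dx2 - v dc/dx.

    Upwind advection (valid for v > 0) and zero-gradient boundary cells.
    Stability requires D*dt/dx**2 <= 0.5 and v*dt/dx <= 1 (both ~0.01 here).
    """
    c = c0.astype(float).copy()
    for _ in range(steps):
        disp = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
        adv = -v * (c - np.roll(c, 1)) / dx          # first-order upwind
        c_new = c + dt * (disp + adv)
        c_new[0], c_new[-1] = c_new[1], c_new[-2]    # zero-gradient ends
        c = c_new
    return c

c0 = np.zeros(50)
c0[5] = 1.0                 # solute pulse near the inflow end
c = advect_disperse(c0)     # plume spreads and drifts downstream
```

The same scheme, extended to three dimensions and coupled to a flow field from the Richards equation, is the Eulerian half of the hybrid approach the abstract describes.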

  18. A multi-scale approach for high cycle anisotropic fatigue resistance: Application to forged components

    International Nuclear Information System (INIS)

    Milesi, M.; Chastel, Y.; Hachem, E.; Bernacki, M.; Loge, R.E.; Bouchard, P.O.

    2010-01-01

    Forged components exhibit good mechanical strength, particularly in terms of high cycle fatigue properties. This is due to the specific microstructure resulting from large plastic deformation as in a forging process. The goal of this study is to account for critical phenomena such as the anisotropy of the fatigue resistance in order to perform high cycle fatigue simulations on industrial forged components. Standard high cycle fatigue criteria usually give good results for isotropic behaviors but are not suitable for components with anisotropic features. The aim is to represent this anisotropy explicitly at a scale below the process scale and to determine the local coefficients needed to simulate a real case. We developed a multi-scale approach by considering the statistical morphology and mechanical characteristics of the microstructure to represent each element explicitly. From stochastic experimental data, realistic microstructures were reconstructed in order to perform high cycle fatigue simulations on them with different orientations. The meshing was improved by a local refinement of each interface and simulations were performed on each representative elementary volume. The local mechanical anisotropy is taken into account through the distribution of particles. Fatigue parameters identified at the microscale can then be used at the macroscale on the forged component. The link between these data and the process scale is the fiber vector and the deformation state, used to calculate the global mechanical anisotropy. Numerical results show the expected behavior compared with experimental tendencies. We demonstrated numerically how the anisotropy direction and the deformation state govern the evolution of the endurance limit.

  19. A multi-scaled approach to evaluating the fish assemblage structure within southern Appalachian streams USA.

    Science.gov (United States)

    Kirsch, Joseph; Peterson, James T.

    2014-01-01

    There is considerable uncertainty about the relative roles of stream habitat and landscape characteristics in structuring stream-fish assemblages. We evaluated the relative importance of environmental characteristics to fish occupancy at the local and landscape scales within the upper Little Tennessee River basin of Georgia and North Carolina. Fishes were sampled using a quadrat sample design at 525 channel units within 48 study reaches during two consecutive years. We evaluated species–habitat relationships (local and landscape factors) by developing hierarchical, multispecies occupancy models. Modeling results suggested that fish occupancy within the Little Tennessee River basin was primarily influenced by stream topology and topography, urban land coverage, and channel unit types. Landscape scale factors (e.g., urban land coverage and elevation) largely controlled the fish assemblage structure at a stream-reach level, and local-scale factors (i.e., channel unit types) influenced fish distribution within stream reaches. Our study demonstrates the utility of a multi-scaled approach and the need to account for hierarchy and the interscale interactions of factors influencing assemblage structure prior to monitoring fish assemblages, developing biological management plans, or allocating management resources throughout a stream system.

  20. LIDAR-based urban metabolism approach to neighbourhood scale energy and carbon emissions modelling

    Energy Technology Data Exchange (ETDEWEB)

    Christen, A. [British Columbia Univ., Vancouver, BC (Canada). Dept. of Geography; Coops, N. [British Columbia Univ., Vancouver, BC (Canada). Dept. of Forest Sciences; Canada Research Chairs, Ottawa, ON (Canada); Kellet, R. [British Columbia Univ., Vancouver, BC (Canada). School of Architecture and Landscape Architecture

    2010-07-01

    A remote sensing technology was used to model neighbourhood scale energy and carbon emissions in a case study set in Vancouver, British Columbia (BC). The study was used to compile and aggregate atmospheric carbon flux, urban form, and energy and emissions data in a replicable neighbourhood-scale approach. The study illustrated methods of integrating diverse emission and uptake processes on a range of scales and resolutions, and benchmarked comparisons of modelled estimates with measured energy consumption data obtained over a 2-year period from a research tower located in the study area. The study evaluated carbon imports, carbon exports and sequestration, and relevant emissions processes. Fossil fuel emissions produced in the neighbourhood were also estimated. The study demonstrated that remote sensing technologies such as LIDAR and multispectral satellite imagery can be an effective means of generating and extracting urban form and land cover data at fine scales. Data from the study were used to develop several emissions reduction and energy conservation scenarios. 6 refs.

  1. FOREWORD: Heterogenous nucleation and microstructure formation—a scale- and system-bridging approach Heterogenous nucleation and microstructure formation—a scale- and system-bridging approach

    Science.gov (United States)

    Emmerich, H.

    2009-11-01

    Scope and aim of this volume. Nucleation and initial microstructure formation play an important role in almost all aspects of materials science [1-5]. The relevance of the prediction and control of nucleation and the subsequent microstructure formation is fully accepted across many areas of modern surface and materials science and technology. One reason is that a large range of material properties, from mechanical ones such as ductility and hardness to electrical and magnetic ones such as electric conductivity and magnetic hardness, depend largely on the specific crystalline structure that forms during nucleation and the subsequent initial microstructure growth. A demonstrative example of the latter is the so-called bamboo structure of an integrated circuit, in which grain boundaries are aligned perpendicular to the direction of current flow, the arrangement most favorable for resistance against electromigration [6]. Despite the great relevance of predicting and controlling nucleation and the subsequent microstructure formation, and despite significant progress in the experimental analysis of the later stages of crystal growth in line with new theoretical computer simulation concepts [7], details of the initial stages of solidification are still far from being satisfactorily understood. This is particularly true when the nucleation event occurs as heterogeneous nucleation. The Priority Program SPP 1296 'Heterogenous Nucleation and Microstructure Formation—a Scale- and System-Bridging Approach' [8], sponsored by the German Research Foundation, DFG, intends to contribute to this open issue via a six-year research program that enables approximately twenty research groups in Germany to work together across disciplines towards this goal. Moreover, it enables the participants to embed themselves in the international community focused on this issue via internationally open joint workshops, conferences and summer schools. An outline of such activities can be found

  2. Fixed drug eruption induced by an iodinated non-ionic X-ray contrast medium: a practical approach to identify the causative agent and to prevent its recurrence

    Energy Technology Data Exchange (ETDEWEB)

    Boehm, Ingrid; Block, Wolfgang; Schild, Hans H. [University of Bonn, Department of Radiology, Bonn (Germany); Medina, Jesus; Prieto, Pilar [JUSTESA IMAGEN SA, Biological R and D Department, Madrid (Spain)

    2007-02-15

    We describe the case of a 61-year-old physician who developed a fixed drug eruption (FDE) after i.v. administration of a non-ionic monomeric iodinated X-ray contrast medium (CM) (iopromide). During CM injection, a sensation of heat occurred, which was most intense in the right inguinal region. Four hours later, the FDE arose with a red macule of approximately 2 cm in diameter covering a dermal infiltration in the right inguinal region, and enlarged up to a final size of 15 x 8 cm, accompanied by a burning sensation. The patient's history revealed a similar reaction in the same localization and of the same clinical appearance after CM injection 1 year before. Patch testing 4 months later revealed positive reactions to iomeprol and iohexol. Iopamidol injection for another CT examination 23 months later was well tolerated. Based on these results, we suggest patch testing after CM-induced FDE, which could help to select a CM for future CT examinations. Late onset of adverse CM reactions may manifest as FDE. Patch testing within the previous skin reaction area is the diagnostic tool that should be used to confirm the suspected agent, possible cross-reacting agents and well-tolerated agents. (orig.)

  3. Fixed drug eruption induced by an iodinated non-ionic X-ray contrast medium: a practical approach to identify the causative agent and to prevent its recurrence

    International Nuclear Information System (INIS)

    Boehm, Ingrid; Block, Wolfgang; Schild, Hans H.; Medina, Jesus; Prieto, Pilar

    2007-01-01

    We describe the case of a 61-year-old physician who developed a fixed drug eruption (FDE) after i.v. administration of a non-ionic monomeric iodinated X-ray contrast medium (CM) (iopromide). During CM injection, a sensation of heat occurred, which was most intense in the right inguinal region. Four hours later, the FDE arose with a red macule of approximately 2 cm in diameter covering a dermal infiltration in the right inguinal region, and enlarged up to a final size of 15 x 8 cm, accompanied by a burning sensation. The patient's history revealed a similar reaction in the same localization and of the same clinical appearance after CM injection 1 year before. Patch testing 4 months later revealed positive reactions to iomeprol and iohexol. Iopamidol injection for another CT examination 23 months later was well tolerated. Based on these results, we suggest patch testing after CM-induced FDE, which could help to select a CM for future CT examinations. Late onset of adverse CM reactions may manifest as FDE. Patch testing within the previous skin reaction area is the diagnostic tool that should be used to confirm the suspected agent, possible cross-reacting agents and well-tolerated agents. (orig.)

  4. Large-scaled biomonitoring of trace-element air pollution: goals and approaches

    International Nuclear Information System (INIS)

    Wolterbeek, H.T.

    2000-01-01

    Biomonitoring is often used in multi-parameter approaches, especially in larger-scaled surveys. The information obtained may consist of thousands of data points, which can be processed in a variety of mathematical routines to permit a condensed and strongly-smoothed presentation of results and conclusions. Although reports on larger-scaled biomonitoring surveys are 'easy-to-read' and often include far-reaching interpretations, it is not possible to obtain an insight into the real meaningfulness or quality of the survey performed. In any set-up, the aims of the survey should be put forward as clearly as possible. Is the survey to provide information on atmospheric element levels, or on total, wet and dry deposition? What should be the time- or geographical scale and resolution of the survey? Which elements should be determined? Is the survey to give information on emission or immission characteristics? Answers to all these questions are of paramount importance, not only regarding the choice of the biomonitoring species or necessary handling/analysis techniques, but also with respect to planning and personnel, and, not to forget, the expected/available means of data interpretation. In considering a survey set-up, rough survey dimensions may follow directly from the goals; in practice, however, they will be governed by other aspects such as available personnel, handling means/capacity, costs, etc. In what sense and to what extent these factors may cause the survey to drift away from the pre-set goals should receive ample attention: in extreme cases the survey should not be carried out. Bearing in mind the above considerations, the present paper focuses on goals, quality and approaches of larger-scaled biomonitoring surveys on trace element air pollution. The discussion comprises practical problems, options, decisions, analytical means, quality measures, and eventual survey results. (author)

  5. Multi-scale approach for predicting fish species distributions across coral reef seascapes.

    Directory of Open Access Journals (Sweden)

    Simon J Pittman

    Full Text Available Two of the major limitations to effective management of coral reef ecosystems are a lack of information on the spatial distribution of marine species and a paucity of data on the interacting environmental variables that drive distributional patterns. Advances in marine remote sensing, together with the novel integration of landscape ecology and advanced niche modelling techniques provide an unprecedented opportunity to reliably model and map marine species distributions across many kilometres of coral reef ecosystems. We developed a multi-scale approach using three-dimensional seafloor morphology and across-shelf location to predict spatial distributions for five common Caribbean fish species. Seascape topography was quantified from high resolution bathymetry at five spatial scales (5-300 m radii surrounding fish survey sites. Model performance and map accuracy was assessed for two high performing machine-learning algorithms: Boosted Regression Trees (BRT and Maximum Entropy Species Distribution Modelling (MaxEnt. The three most important predictors were geographical location across the shelf, followed by a measure of topographic complexity. Predictor contribution differed among species, yet rarely changed across spatial scales. BRT provided 'outstanding' model predictions (AUC = >0.9 for three of five fish species. MaxEnt provided 'outstanding' model predictions for two of five species, with the remaining three models considered 'excellent' (AUC = 0.8-0.9. In contrast, MaxEnt spatial predictions were markedly more accurate (92% map accuracy than BRT (68% map accuracy. We demonstrate that reliable spatial predictions for a range of key fish species can be achieved by modelling the interaction between the geographical location across the shelf and the topographic heterogeneity of seafloor structure. This multi-scale, analytic approach is an important new cost-effective tool to accurately delineate essential fish habitat and support
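
AUC, the metric used above to rate model performance, can be computed with a rank-based (Mann-Whitney) estimator. The toy habitat data and the simple suitability score below are invented for illustration only; they are not the study's models or data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
shelf_pos = rng.uniform(0, 10, n)    # cross-shelf location (km, hypothetical)
complexity = rng.uniform(0, 1, n)    # topographic complexity index

# Synthetic "truth": the species favours mid-shelf, topographically complex sites.
p = 1 / (1 + np.exp(-(3 * complexity - 0.5 * np.abs(shelf_pos - 5))))
present = rng.random(n) < p          # boolean presence/absence observations

# A candidate suitability score playing the role of a model prediction.
score = 2.5 * complexity - 0.4 * np.abs(shelf_pos - 5)

def auc(y, s):
    """Rank-based AUC: probability a random presence outscores a random absence."""
    m = len(y)
    order = np.argsort(s)
    ranks = np.empty(m)
    ranks[order] = np.arange(1, m + 1)
    pos = y.sum()
    return (ranks[y].sum() - pos * (pos + 1) / 2) / (pos * (m - pos))

print(round(float(auc(present, score)), 2))
```

An AUC above 0.9 would fall in the 'outstanding' band the abstract cites; 0.8-0.9 would be 'excellent'.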

  6. Serbian translation of the 20-item toronto alexithymia scale: Psychometric properties and the new methodological approach in translating scales

    Directory of Open Access Journals (Sweden)

    Trajanović Nikola N.

    2013-01-01

    Full Text Available Introduction. Since the inception of the alexithymia construct in the 1970s, there has been a continuous effort to improve both its theoretical postulates and the clinical utility through development, standardization and validation of assessment scales. Objective. The aim of this study was to validate the Serbian translation of the 20-item Toronto Alexithymia Scale (TAS-20 and to propose a new method of translation of scales with a property of temporal stability. Methods. The scale was expertly translated by bilingual medical professionals and a linguist, and given to a sample of bilingual participants from the general population who completed both the English and the Serbian version of the scale one week apart. Results. The findings showed that the Serbian version of the TAS-20 had a good internal consistency reliability regarding total scale (α=0.86, and acceptable reliability of the three factors (α=0.71-0.79. Conclusion. The analysis confirmed the validity and consistency of the Serbian translation of the scale, with observed weakness of the factorial structure consistent with studies in other languages. The results also showed that the method of utilizing a self-control bilingual subject is a useful alternative to the back-translation method, particularly in cases of linguistically and structurally sensitive scales, or in cases where a larger sample is not available. This method, dubbed as ‘forth-translation’, could be used to translate psychometric scales measuring properties which have temporal stability over the period of at least several weeks.

  7. Hybrid approaches to nanometer-scale patterning: Exploiting tailored intermolecular interactions

    International Nuclear Information System (INIS)

    Mullen, Thomas J.; Srinivasan, Charan; Shuster, Mitchell J.; Horn, Mark W.; Andrews, Anne M.; Weiss, Paul S.

    2008-01-01

    In this perspective, we explore hybrid approaches to nanometer-scale patterning, where the precision of molecular self-assembly is combined with the sophistication and fidelity of lithography. Two areas - improving existing lithographic techniques through self-assembly and fabricating chemically patterned surfaces - will be discussed in terms of their advantages, limitations, applications, and future outlook. The creation of such chemical patterns enables new capabilities, including the assembly of biospecific surfaces to be recognized by, and to capture analytes from, complex mixtures. Finally, we speculate on the potential impact and upcoming challenges of these hybrid strategies.

  8. Gravitation and Special Relativity from Compton Wave Interactions at the Planck Scale: An Algorithmic Approach

    Science.gov (United States)

    Blackwell, William C., Jr.

    2004-01-01

    In this paper space is modeled as a lattice of Compton wave oscillators (CWOs) of near-Planck size. It is shown that gravitation and special relativity emerge from the interaction between particles' Compton waves. To develop this CWO model an algorithmic approach was taken, incorporating simple rules of interaction at the Planck scale developed using well known physical laws. This technique naturally leads to Newton's law of gravitation and a new form of doubly special relativity. The model is in apparent agreement with the holographic principle, and it predicts a cutoff energy for ultrahigh-energy cosmic rays that is consistent with observational data.

  9. Scaling strength distributions in quasi-brittle materials from micro-to macro-scales: A computational approach to modeling Nature-inspired structural ceramics

    International Nuclear Information System (INIS)

    Genet, Martin; Couegnat, Guillaume; Tomsia, Antoni P.; Ritchie, Robert O.

    2014-01-01

    This paper presents an approach to predict the strength distribution of quasi-brittle materials across multiple length-scales, with emphasis on Nature-inspired ceramic structures. It permits the computation of the failure probability of any structure under any mechanical load, solely based on considerations of the microstructure and its failure properties by naturally incorporating the statistical and size-dependent aspects of failure. We overcome the intrinsic limitations of single periodic unit-based approaches by computing the successive failures of the material components and associated stress redistributions on arbitrary numbers of periodic units. For large size samples, the microscopic cells are replaced by a homogenized continuum with equivalent stochastic and damaged constitutive behavior. After establishing the predictive capabilities of the method, and illustrating its potential relevance to several engineering problems, we employ it in the study of the shape and scaling of strength distributions across differing length-scales for a particular quasi-brittle system. We find that the strength distributions display a Weibull form for samples of size approaching the periodic unit; however, these distributions become closer to normal with further increase in sample size before finally reverting to a Weibull form for macroscopic sized samples. In terms of scaling, we find that the weakest link scaling applies only to microscopic, and not macroscopic scale, samples. These findings are discussed in relation to failure patterns computed at different size-scales. (authors)
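
The weakest-link scaling discussed above has a simple closed form for Weibull-distributed unit strengths: a sample of k units fails at the minimum of k draws, which is again Weibull with the same modulus but a characteristic strength scaled by k**(-1/m). A quick Monte Carlo check reproduces this (modulus, scale and sample counts here are arbitrary, not the paper's values):

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(1)
m, s0 = 5.0, 1.0      # Weibull modulus and unit-scale characteristic strength
k = 8                 # number of periodic units in series (weakest link)

# Strength of each k-unit sample = minimum over its k unit strengths.
unit = s0 * rng.weibull(m, size=(200_000, k))
chain = unit.min(axis=1)

# Theory: min of k iid Weibull(m, s0) is Weibull(m, s0 * k**(-1/m)),
# whose mean is the scale times Gamma(1 + 1/m).
pred = s0 * k ** (-1 / m) * gamma(1 + 1 / m)
print(round(float(chain.mean()), 3), round(pred, 3))
```

The paper's finding is precisely that this clean scaling holds near the periodic-unit scale but breaks down at intermediate, macroscopic sizes.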

  10. Estimating heterotrophic respiration at large scales: Challenges, approaches, and next steps

    Science.gov (United States)

    Bond-Lamberty, Ben; Epron, Daniel; Harden, Jennifer W.; Harmon, Mark E.; Hoffman, Forrest; Kumar, Jitendra; McGuire, Anthony David; Vargas, Rodrigo

    2016-01-01

    Heterotrophic respiration (HR), the aerobic and anaerobic processes mineralizing organic matter, is a key carbon flux but one impossible to measure at scales significantly larger than small experimental plots. This impedes our ability to understand carbon and nutrient cycles, benchmark models, or reliably upscale point measurements. Given that a new generation of highly mechanistic, genomic-specific global models is not imminent, we suggest that a useful step to improve this situation would be the development of “Decomposition Functional Types” (DFTs). Analogous to plant functional types (PFTs), DFTs would abstract and capture important differences in HR metabolism and flux dynamics, allowing modelers and experimentalists to efficiently group and vary these characteristics across space and time. We argue that DFTs should be initially informed by top-down expert opinion, but ultimately developed using bottom-up, data-driven analyses, and provide specific examples of potential dependent and independent variables that could be used. We present an example clustering analysis to show how annual HR can be broken into distinct groups associated with global variability in biotic and abiotic factors, and demonstrate that these groups are distinct from (but complementary to) already-existing PFTs. A similar analysis incorporating observational data could form the basis for future DFTs. Finally, we suggest next steps and critical priorities: collection and synthesis of existing data; more in-depth analyses combining open data with rigorous testing of analytical results; using point measurements and realistic forcing variables to constrain process-based models; and planning by the global modeling community for decoupling decomposition from fixed site data. These are all critical steps to build a foundation for DFTs in global models, thus providing the ecological and climate change communities with robust, scalable estimates of HR.
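
The clustering step behind candidate DFTs can be sketched with a minimal k-means on synthetic site-level drivers. The features, the two generated groups and the cluster count are hypothetical placeholders, not the paper's data; a real analysis would also standardize features first:

```python
import numpy as np

rng = np.random.default_rng(2)
# Columns: mean annual temperature (degC), precipitation (mm), litter input (g C m-2).
sites = np.vstack([
    rng.normal([5, 400, 150], [2, 80, 30], size=(40, 3)),     # cold/dry sites
    rng.normal([25, 2000, 600], [2, 150, 60], size=(40, 3)),  # warm/wet sites
])

def kmeans(X, k=2, iters=50, seed=0):
    """Lloyd's algorithm: alternate nearest-center assignment and mean update."""
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels

labels = kmeans(sites)   # groups recovered from the drivers alone
```

Grouping sites this way, then assigning each group its own decomposition parameters, is the bottom-up construction of DFTs the authors propose.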

  11. The ESI scale, an ethical approach to the evaluation of seismic hazards

    Science.gov (United States)

    Porfido, Sabina; Nappi, Rosa; De Lucia, Maddalena; Gaudiosi, Germana; Alessio, Giuliana; Guerrieri, Luca

    2015-04-01

    The dissemination of correct information about seismic hazard is an ethical duty of the scientific community worldwide. A proper assessment of an earthquake's severity and impact should not ignore the evaluation of its intensity, taking into account both the effects on humans and man-made structures, as well as on the natural environment. We illustrate the new macroseismic scale that measures the intensity taking into account the effects of earthquakes on the environment: the ESI 2007 (Environmental Seismic Intensity) scale (Michetti et al., 2007), ratified by the INQUA (International Union for Quaternary Research) during the XVII Congress in Cairns (Australia). The ESI scale integrates and completes the traditional macroseismic scales, of which it represents the evolution, allowing to assess the intensity parameter also where buildings are absent or damage-based diagnostic elements saturate. Each degree reflects the corresponding strength of an earthquake and the role of ground effects, evaluating the Intensity on the basis of the characteristics and size of primary (e.g. surface faulting and tectonic uplift/subsidence) and secondary effects (e.g. ground cracks, slope movements, liquefaction phenomena, hydrological changes, anomalous waves, tsunamis, trees shaking, dust clouds and jumping stones). This approach can be considered "ethical" because it helps to define the real scenario of an earthquake, regardless of the country's socio-economic conditions and level of development. Here lies the value and the relevance of macroseismic scales even today, one hundred years after the death of Giuseppe Mercalli, who conceived the homonymous scale for the evaluation of earthquake intensity. For an appropriate mitigation strategy in seismic areas, it is fundamental to consider the role played by seismically induced ground effects, such as active faults (size in length and displacement) and secondary effects (the total area affected). With these perspectives two different cases

  12. Topological fixed point theory of multivalued mappings

    CERN Document Server

    Górniewicz, Lech

    1999-01-01

    This volume presents a broad introduction to the topological fixed point theory of multivalued (set-valued) mappings, treating both classical concepts as well as modern techniques. A variety of up-to-date results is described within a unified framework. Topics covered include the basic theory of set-valued mappings with both convex and nonconvex values, approximation and homological methods in the fixed point theory together with a thorough discussion of various index theories for mappings with a topologically complex structure of values, applications to many fields of mathematics, mathematical economics and related subjects, and the fixed point approach to the theory of ordinary differential inclusions. The work emphasises the topological aspect of the theory, and gives special attention to the Lefschetz and Nielsen fixed point theory for acyclic valued mappings with diverse compactness assumptions via graph approximation and the homological approach. Audience: This work will be of interest to researchers an...

  13. Multi-scale approach in numerical reservoir simulation; Uma abordagem multiescala na simulacao numerica de reservatorios

    Energy Technology Data Exchange (ETDEWEB)

    Guedes, Solange da Silva

    1998-07-01

    Advances in petroleum reservoir descriptions have provided an amount of data that cannot be handled directly during numerical simulations. This detailed geological information must be incorporated into a coarser model during multiphase fluid flow simulations by means of some upscaling technique. The most common approach is the use of pseudo relative permeabilities, the most widely used method being that of Kyte and Berry (1975). In this work, a multi-scale computational model for multiphase flow is proposed that treats the upscaling implicitly, without using pseudo functions. By solving a sequence of local problems on subdomains of the refined scale it is possible to achieve results with a coarser grid without the expensive computations of a fine grid model. The main advantage of this new procedure is that it treats the upscaling step implicitly in the solution process, overcoming some practical difficulties related to the use of traditional pseudo functions. Results of two-dimensional two-phase flow simulations considering homogeneous porous media are presented. Some examples compare the results of this approach and the commercial upscaling program PSEUDO, a module of the reservoir simulation software ECLIPSE. (author)
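
For single-phase flow, the upscaling problem the abstract refers to has classical analytic bounds: the effective permeability of a coarse cell lies between the harmonic mean (flow across layers) and the arithmetic mean (flow along layers) of its fine-grid values. Pseudo-function methods such as Kyte and Berry's extend the idea to multiphase flow. The values below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(3)
k_fine = rng.lognormal(mean=3.0, sigma=1.0, size=1000)  # fine-scale permeability (mD)

k_arith = k_fine.mean()                      # upper bound: flow parallel to layers
k_harm = k_fine.size / (1.0 / k_fine).sum()  # lower bound: flow across layers
k_geom = np.exp(np.log(k_fine).mean())       # common estimate for random 2-D media

print(bool(k_harm < k_geom < k_arith))       # prints True (HM <= GM <= AM)
```

The ordering is guaranteed by the harmonic-geometric-arithmetic mean inequality, strict whenever the fine-scale values differ.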

  14. Evaluation of low impact development approach for mitigating flood inundation at a watershed scale in China.

    Science.gov (United States)

    Hu, Maochuan; Sayama, Takahiro; Zhang, Xingqi; Tanaka, Kenji; Takara, Kaoru; Yang, Hong

    2017-05-15

    Low impact development (LID) has attracted growing attention as an important approach for urban flood mitigation. Most studies evaluating LID performance for mitigating floods focus on the changes of peak flow and runoff volume. This paper assessed the performance of LID practices for mitigating flood inundation hazards as retrofitting technologies in an urbanized watershed in Nanjing, China. The findings indicate that LID practices are effective for flood inundation mitigation at the watershed scale, and especially for reducing inundated areas with a high flood hazard risk. Various scenarios of LID implementation levels can reduce total inundated areas by 2%-17% and areas with a high flood hazard level by 6%-80%. Permeable pavement shows better performance than rainwater harvesting against mitigating urban waterlogging. The most efficient scenario is combined rainwater harvesting on rooftops with a cistern capacity of 78.5 mm and permeable pavement installed on 75% of non-busy roads and other impervious surfaces. Inundation modeling is an effective approach to obtaining the information necessary to guide decision-making for designing LID practices at watershed scales.

  15. Hybrid fixed point in CAT(0) spaces

    Directory of Open Access Journals (Sweden)

    Hemant Kumar Pathak

    2018-02-01

    Full Text Available In this paper, we introduce an ultrapower approach to prove fixed point theorems for $H^{+}$-nonexpansive multi-valued mappings in the setting of CAT(0) spaces and prove several hybrid fixed point results in CAT(0) spaces for families of single-valued nonexpansive or quasinonexpansive mappings and multi-valued upper semicontinuous, almost lower semicontinuous or $H^{+}$-nonexpansive mappings which are weakly commuting. We also establish a result about the structure of the set of fixed points of an $H^{+}$-quasinonexpansive mapping on a CAT(0) space.
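
In Euclidean space, the simplest flat CAT(0) space, the averaged iterations underlying such fixed point results reduce to the classical Krasnoselskii-Mann scheme for a nonexpansive (1-Lipschitz) map. The map T below, a shrink-rotate plus translation, is an invented single-valued example, not one from the paper:

```python
import numpy as np

# Shrink-rotate matrix: factor 0.9 makes T a contraction, hence nonexpansive.
A = 0.9 * np.array([[np.cos(1.0), -np.sin(1.0)],
                    [np.sin(1.0),  np.cos(1.0)]])

def T(x):
    """A nonexpansive self-map of the plane with a unique fixed point."""
    return A @ x + np.array([1.0, 0.0])

x = np.array([5.0, -3.0])
for _ in range(500):
    x = 0.5 * x + 0.5 * T(x)   # Mann step: move halfway toward T(x)

print(bool(np.allclose(x, T(x), atol=1e-8)))   # prints True at the fixed point
```

In a general CAT(0) space the midpoint 0.5*x + 0.5*T(x) is replaced by the geodesic midpoint between x and T(x), which is exactly the setting the paper's results generalize to.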

  16. Modeling and control of a large nuclear reactor. A three-time-scale approach

    Energy Technology Data Exchange (ETDEWEB)

    Shimjith, S.R. [Indian Institute of Technology Bombay, Mumbai (India); Bhabha Atomic Research Centre, Mumbai (India); Tiwari, A.P. [Bhabha Atomic Research Centre, Mumbai (India); Bandyopadhyay, B. [Indian Institute of Technology Bombay, Mumbai (India). IDP in Systems and Control Engineering

    2013-07-01

    Presents recent research on modeling and control of a large nuclear reactor via a three-time-scale approach, written by leading experts in the field. Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed would be of prohibitively large order, non-linear and of complex structure not readily amenable for control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller order model in standard state space form, thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting multi-time-scale property, with emphasis on three-time-scale systems.

  17. A multi-objective constraint-based approach for modeling genome-scale microbial ecosystems.

    Science.gov (United States)

    Budinich, Marko; Bourdon, Jérémie; Larhlimi, Abdelhalim; Eveillard, Damien

    2017-01-01

    Interplay within microbial communities impacts ecosystems on several scales, and elucidation of the consequent effects is a difficult task in ecology. In particular, the integration of genome-scale data within quantitative models of microbial ecosystems remains elusive. This study advocates the use of constraint-based modeling to build predictive models from recent high-resolution -omics datasets. Following recent studies that have demonstrated the accuracy of constraint-based models (CBMs) for simulating single-strain metabolic networks, we sought to study microbial ecosystems as a combination of single-strain metabolic networks that exchange nutrients. This study presents two multi-objective extensions of CBMs for modeling communities: multi-objective flux balance analysis (MO-FBA) and multi-objective flux variability analysis (MO-FVA). Both methods were applied to a hot spring mat model ecosystem. As a result, multiple trade-offs between nutrients and growth rates, as well as thermodynamically favorable relative abundances at community level, were emphasized. We expect this approach to be used for integrating genomic information in microbial ecosystems. The resulting models will provide insights about behaviors (including diversity) that take place at the ecosystem scale.
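
The core of a multi-objective extension like MO-FBA is tracing the Pareto front between the growth objectives of the member strains. The toy sketch below does this by weighted-sum scalarization over a brute-force flux grid; the two-strain "network", yields, uptake caps, and shared nutrient supply are invented for illustration and are not taken from the paper (a real MO-FBA solves genome-scale linear programs):

```python
# Trace the growth-rate trade-off between two strains sharing one nutrient
# pool, by scanning the weight of a scalarized objective (toy MO-FBA idea).

def growth(u1, u2):
    # Growth rates proportional to nutrient uptake, with different yields.
    return 1.0 * u1, 0.8 * u2

def feasible(u1, u2, shared_supply=10.0, cap=8.0):
    # Both strains draw on one shared nutrient pool; each has an uptake cap.
    return 0 <= u1 <= cap and 0 <= u2 <= cap and u1 + u2 <= shared_supply

def mo_fba(steps=20):
    grid = [i * 0.1 for i in range(101)]           # candidate uptake fluxes
    front = set()
    for k in range(steps + 1):
        lam = k / steps                            # weight on strain 1's growth
        best = max(((u1, u2) for u1 in grid for u2 in grid if feasible(u1, u2)),
                   key=lambda p: lam * growth(*p)[0] + (1 - lam) * growth(*p)[1])
        front.add(growth(*best))
    return sorted(front)

if __name__ == "__main__":
    for g1, g2 in mo_fba():
        print(f"growth1={g1:.1f}  growth2={g2:.1f}")
```

Scanning the weight from 0 to 1 recovers the extreme points (one strain starved, the other at its cap) as well as intermediate trade-offs, which is exactly the kind of output the abstract describes as "multiple trade-offs between nutrients and growth rates".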

  18. An Integrated Assessment Approach to Address Artisanal and Small-Scale Gold Mining in Ghana

    Directory of Open Access Journals (Sweden)

    Niladri Basu

    2015-09-01

    Full Text Available Artisanal and small-scale gold mining (ASGM) is growing in many regions of the world including Ghana. The problems in these communities are complex and multi-faceted. To help increase understanding of such problems, and to enable consensus-building and effective translation of scientific findings to stakeholders, help inform policies, and ultimately improve decision making, we utilized an Integrated Assessment approach to study artisanal and small-scale gold mining activities in Ghana. Though Integrated Assessments have been used in the fields of environmental science and sustainable development, their use in addressing specific matters in public health, and in particular, environmental and occupational health, is quite limited despite their many benefits. The aim of the current paper was to describe specific activities undertaken and how they were organized, and the outputs and outcomes of our activity. In brief, three disciplinary workgroups (Natural Sciences, Human Health, Social Sciences and Economics) were formed, with 26 researchers from a range of Ghanaian institutions plus international experts. The workgroups conducted activities in order to address the following question: What are the causes, consequences and correctives of small-scale gold mining in Ghana? More specifically: What alternatives are available in resource-limited settings in Ghana that allow for gold-mining to occur in a manner that maintains ecological health and human health without hindering near- and long-term economic prosperity? Several response options were identified and evaluated, and are currently being disseminated to various stakeholders within Ghana and internationally.

  19. A Ranking Approach on Large-Scale Graph With Multidimensional Heterogeneous Information.

    Science.gov (United States)

    Wei, Wei; Gao, Bin; Liu, Tie-Yan; Wang, Taifeng; Li, Guohui; Li, Hang

    2016-04-01

    Graph-based ranking has been extensively studied and frequently applied in many applications, such as webpage ranking. It aims at mining potentially valuable information from the raw graph-structured data. Recently, with the proliferation of rich heterogeneous information (e.g., node/edge features and prior knowledge) available in many real-world graphs, how to effectively and efficiently leverage all this information to improve the ranking performance has become a new and challenging problem. Previous methods utilize only part of such information and attempt to rank graph nodes according to link-based methods, whose ranking performance is severely affected by several well-known issues, e.g., over-fitting or high computational complexity, especially when the scale of the graph is very large. In this paper, we address the large-scale graph-based ranking problem and focus on how to effectively exploit the rich heterogeneous information of the graph to improve the ranking performance. Specifically, we propose an innovative and effective semi-supervised PageRank (SSP) approach to parameterize the derived information within a unified semi-supervised learning framework (SSLF-GR), and then simultaneously optimize the parameters and the ranking scores of the graph nodes. Experiments on real-world large-scale graphs demonstrate that our method significantly outperforms algorithms that consider such graph information only partially.
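
The link-based scores that SSP generalizes obey the standard PageRank fixed-point equation. The sketch below is only that baseline power iteration in pure Python; the SSP method described above goes further by parameterizing the transition and teleport probabilities with node/edge features and learning those parameters jointly with the scores:

```python
# Baseline PageRank via power iteration over an adjacency list.

def pagerank(edges, n, damping=0.85, iters=100):
    out = [[] for _ in range(n)]
    for u, v in edges:
        out[u].append(v)
    rank = [1.0 / n] * n
    for _ in range(iters):
        nxt = [(1.0 - damping) / n] * n            # uniform teleport mass
        for u in range(n):
            if out[u]:
                share = damping * rank[u] / len(out[u])
                for v in out[u]:
                    nxt[v] += share
            else:                                  # dangling node: spread uniformly
                for v in range(n):
                    nxt[v] += damping * rank[u] / n
        rank = nxt
    return rank

if __name__ == "__main__":
    # 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0: node 2 collects the most rank.
    print(pagerank([(0, 1), (0, 2), (1, 2), (2, 0)], 3))
```

In a semi-supervised variant, the uniform teleport vector and the per-edge transition weights would become functions of the heterogeneous features, fitted against the labeled node pairs.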

  20. A multi-objective constraint-based approach for modeling genome-scale microbial ecosystems.

    Directory of Open Access Journals (Sweden)

    Marko Budinich

    Full Text Available Interplay within microbial communities impacts ecosystems on several scales, and elucidation of the consequent effects is a difficult task in ecology. In particular, the integration of genome-scale data within quantitative models of microbial ecosystems remains elusive. This study advocates the use of constraint-based modeling to build predictive models from recent high-resolution -omics datasets. Following recent studies that have demonstrated the accuracy of constraint-based models (CBMs) for simulating single-strain metabolic networks, we sought to study microbial ecosystems as a combination of single-strain metabolic networks that exchange nutrients. This study presents two multi-objective extensions of CBMs for modeling communities: multi-objective flux balance analysis (MO-FBA) and multi-objective flux variability analysis (MO-FVA). Both methods were applied to a hot spring mat model ecosystem. As a result, multiple trade-offs between nutrients and growth rates, as well as thermodynamically favorable relative abundances at community level, were emphasized. We expect this approach to be used for integrating genomic information in microbial ecosystems. The resulting models will provide insights about behaviors (including diversity) that take place at the ecosystem scale.

  1. An Integrated Assessment Approach to Address Artisanal and Small-Scale Gold Mining in Ghana.

    Science.gov (United States)

    Basu, Niladri; Renne, Elisha P; Long, Rachel N

    2015-09-17

    Artisanal and small-scale gold mining (ASGM) is growing in many regions of the world including Ghana. The problems in these communities are complex and multi-faceted. To help increase understanding of such problems, and to enable consensus-building and effective translation of scientific findings to stakeholders, help inform policies, and ultimately improve decision making, we utilized an Integrated Assessment approach to study artisanal and small-scale gold mining activities in Ghana. Though Integrated Assessments have been used in the fields of environmental science and sustainable development, their use in addressing specific matters in public health, and in particular, environmental and occupational health, is quite limited despite their many benefits. The aim of the current paper was to describe specific activities undertaken and how they were organized, and the outputs and outcomes of our activity. In brief, three disciplinary workgroups (Natural Sciences, Human Health, Social Sciences and Economics) were formed, with 26 researchers from a range of Ghanaian institutions plus international experts. The workgroups conducted activities in order to address the following question: What are the causes, consequences and correctives of small-scale gold mining in Ghana? More specifically: What alternatives are available in resource-limited settings in Ghana that allow for gold-mining to occur in a manner that maintains ecological health and human health without hindering near- and long-term economic prosperity? Several response options were identified and evaluated, and are currently being disseminated to various stakeholders within Ghana and internationally.

  2. Wine consumers’ preferences in Spain: an analysis using the best-worst scaling approach

    Directory of Open Access Journals (Sweden)

    Tiziana de-Magistris

    2014-06-01

    Full Text Available Research on wine consumers’ preferences has largely been explored in the academic literature, and the importance of wine attributes has been measured by rating or ranking scales. However, the most recent literature on wine preferences has applied the best-worst scaling approach to avoid the biased outcomes derived from using rating or ranking scales in surveys. This study investigates premium red wine consumers’ preferences in Spain by applying best-worst scaling. To achieve this goal, a random parameter logit model is applied to assess the impacts of wine attributes on the probability of choosing premium quality red wine, using data from an ad-hoc survey conducted in a medium-sized Spanish city. The results suggest that some wine attributes related to past experience (i.e. it matches food), followed by some related to personal knowledge (i.e. the designation of origin), are valued as the most important, whereas other attributes related to the image of the New World (i.e. label or brand name) are perceived as the least important or indifferent.
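
Before fitting a random parameter logit, best-worst data are usually summarized with the standard best-minus-worst count score: each time an attribute is picked as best it gains a point, each time it is picked as worst it loses one. A minimal sketch of that descriptive step; the attribute names echo the abstract, but the mock responses are invented, not survey data from the paper:

```python
# Best-worst scaling (BWS) count analysis: score = (#times best) - (#times worst).
from collections import Counter

def bw_scores(responses):
    # responses: iterable of (best_attribute, worst_attribute) picks.
    best, worst = Counter(), Counter()
    for b, w in responses:
        best[b] += 1
        worst[w] += 1
    attrs = set(best) | set(worst)
    return {a: best[a] - worst[a] for a in attrs}

if __name__ == "__main__":
    mock = [("matches food", "brand name"), ("designation of origin", "brand name"),
            ("matches food", "label"), ("designation of origin", "label")]
    for attr, score in sorted(bw_scores(mock).items(), key=lambda kv: -kv[1]):
        print(f"{attr:25s} {score:+d}")
```

The random parameter logit then models the same best/worst picks probabilistically, letting each attribute's importance vary across respondents rather than being a single count.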

  3. A watershed-scale goals approach to assessing and funding wastewater infrastructure.

    Science.gov (United States)

    Rahm, Brian G; Vedachalam, Sridhar; Shen, Jerry; Woodbury, Peter B; Riha, Susan J

    2013-11-15

    Capital needs during the next twenty years for public wastewater treatment, piping, combined sewer overflow correction, and storm-water management are estimated to be approximately $300 billion for the USA. Financing these needs is a significant challenge, as Federal funding for the Clean Water Act has been reduced by 70% during the last twenty years. There is an urgent need for new approaches to assist states and other decision makers to prioritize wastewater maintenance and improvements. We present a methodology for performing an integrated quantitative watershed-scale goals assessment for sustaining wastewater infrastructure. We applied this methodology to ten watersheds of the Hudson-Mohawk basin in New York State, USA that together are home to more than 2.7 million people, cover 3.5 million hectares, and contain more than 36,000 km of streams. We assembled data on 183 POTWs treating approximately 1.5 million m(3) of wastewater per day. For each watershed, we analyzed eight metrics: Growth Capacity, Capacity Density, Soil Suitability, Violations, Tributary Length Impacted, Tributary Capital Cost, Volume Capital Cost, and Population Capital Cost. These metrics were integrated into three goals for watershed-scale management: Tributary Protection, Urban Development, and Urban-Rural Integration. Our results demonstrate that the methodology can be implemented using widely available data, although some verification of data is required. Furthermore, we demonstrate substantial differences in character, need, and the appropriateness of different management strategies among the ten watersheds. These results suggest that it is feasible to perform watershed-scale goals assessment to augment existing approaches to wastewater infrastructure analysis and planning. Copyright © 2013 Elsevier Ltd. All rights reserved.
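
The aggregation step described above (eight metrics folded into three watershed-scale goals) can be sketched as a normalize-then-average computation. The metric names follow the abstract, but the sample values, the min-max normalization, and the equal-weight grouping are illustrative assumptions, not the paper's actual scheme:

```python
# Normalize each metric to [0, 1] across watersheds, then average the metrics
# assigned to each management goal to produce per-watershed goal scores.

def minmax(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def goal_scores(metrics, goals):
    # metrics: {metric_name: [value per watershed]}; goals: {goal: [metric names]}
    norm = {m: minmax(vals) for m, vals in metrics.items()}
    n = len(next(iter(metrics.values())))
    return {g: [sum(norm[m][i] for m in ms) / len(ms) for i in range(n)]
            for g, ms in goals.items()}

if __name__ == "__main__":
    metrics = {"Growth Capacity": [10.0, 40.0, 25.0],
               "Violations": [3.0, 1.0, 7.0],
               "Tributary Length Impacted": [120.0, 80.0, 200.0]}
    goals = {"Tributary Protection": ["Violations", "Tributary Length Impacted"],
             "Urban Development": ["Growth Capacity"]}
    print(goal_scores(metrics, goals))
```

Ranking watersheds by each goal score is what lets the assessment say, as the abstract does, that different watersheds call for different management strategies.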

  4. Gene prediction in metagenomic fragments: A large scale machine learning approach

    Directory of Open Access Journals (Sweden)

    Morgenstern Burkhard

    2008-04-01

    Full Text Available Abstract Background Metagenomics is an approach to the characterization of microbial genomes via the direct isolation of genomic sequences from the environment without prior cultivation. The amount of metagenomic sequence data is growing fast, while computational methods for metagenome analysis are still in their infancy. In contrast to genomic sequences of single species, which can usually be assembled and analyzed by many available methods, a large proportion of metagenome data remains as unassembled anonymous sequencing reads. One of the aims of all metagenomic sequencing projects is the identification of novel genes. The short length of the fragments (Sanger sequencing, for example, yields fragments of 700 bp on average) and the unknown phylogenetic origin of most of them require approaches to gene prediction that are different from the currently available methods for genomes of single species. In particular, the large size of metagenomic samples requires fast and accurate methods with small numbers of false positive predictions. Results We introduce a novel gene prediction algorithm for metagenomic fragments based on a two-stage machine learning approach. In the first stage, we use linear discriminants for monocodon usage, dicodon usage and translation initiation sites to extract features from DNA sequences. In the second stage, an artificial neural network combines these features with open reading frame length and fragment GC-content to compute the probability that this open reading frame encodes a protein. This probability is used for the classification and scoring of gene candidates. With large scale training, our method provides fast single fragment predictions with good sensitivity and specificity on artificially fragmented genomic DNA. Additionally, this method is able to predict translation initiation sites accurately and distinguishes complete from incomplete genes with high reliability. Conclusion Large scale machine learning methods are well-suited for gene
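
The two-stage structure (feature extraction, then a learned combiner producing a coding probability) can be sketched on toy features. The real pipeline uses linear discriminants over codon/dicodon usage and a trained neural network; the features and hand-set weights below are illustrative stand-ins:

```python
# Stage 1: simple features of a candidate open reading frame (ORF).
# Stage 2: a logistic unit combining them into a coding probability.
import math

def features(orf_seq):
    gc = sum(b in "GC" for b in orf_seq) / len(orf_seq)   # fragment GC-content
    length = len(orf_seq)                                  # ORF length
    starts_atg = 1.0 if orf_seq.startswith("ATG") else 0.0
    return gc, length, starts_atg

def coding_probability(orf_seq):
    gc, length, starts_atg = features(orf_seq)
    # Hand-set weights standing in for the trained second-stage network.
    score = 4.0 * (gc - 0.5) + 0.01 * (length - 300) + 1.5 * starts_atg - 0.5
    return 1.0 / (1.0 + math.exp(-score))

if __name__ == "__main__":
    long_gc_rich = "ATG" + "GCC" * 200 + "TAA"
    short_at_rich = "ATT" + "ATA" * 20 + "TAA"
    print(coding_probability(long_gc_rich))   # high: long, GC-rich, ATG start
    print(coding_probability(short_at_rich))  # low: short, AT-rich
```

Thresholding this probability is the classification step; in the published method the analogous score also feeds the ranking of overlapping gene candidates within a fragment.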

  5. Traffic sign recognition based on a context-aware scale-invariant feature transform approach

    Science.gov (United States)

    Yuan, Xue; Hao, Xiaoli; Chen, Houjin; Wei, Xueye

    2013-10-01

    A new context-aware scale-invariant feature transform (CASIFT) approach is proposed, which is designed for the use in traffic sign recognition (TSR) systems. The following issues remain in previous works in which SIFT is used for matching or recognition: (1) SIFT is unable to provide color information; (2) SIFT only focuses on local features while ignoring the distribution of global shapes; (3) the template with the maximum number of matching points selected as the final result is unstable, especially for images with simple patterns; and (4) SIFT is liable to result in errors when different images share the same local features. In order to resolve these problems, a new CASIFT approach is proposed. The contributions of the work are as follows: (1) color angular patterns are used to provide the color distinguishing information; (2) a CASIFT which effectively combines local and global information is proposed; and (3) a method for computing the similarity between two images is proposed, which focuses on the distribution of the matching points, rather than using the traditional SIFT approach of selecting the template with the maximum number of matching points as the final result. The proposed approach is particularly effective in dealing with traffic signs which have rich colors and varied global shape distribution. Experiments are performed to validate the effectiveness of the proposed approach in TSR systems, and the experimental results are satisfying even for images containing traffic signs that have been rotated, damaged, altered in color, have undergone affine transformations, or images which were photographed under different weather or illumination conditions.
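
Contribution (3), scoring templates by the spatial distribution of matched keypoints rather than their raw count, can be illustrated with a simple distribution-aware similarity. This is an illustrative stand-in, not the paper's exact formula: each matched point set is centered and scale-normalized, and the mean residual distance between corresponding points is turned into a similarity in (0, 1]:

```python
# Distribution-aware match score: compare how matched keypoints are spread
# in the query image versus the template, invariant to translation and scale.
import math

def normalize(points):
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    centered = [(x - cx, y - cy) for x, y in points]
    scale = math.sqrt(sum(x * x + y * y for x, y in centered) / len(points)) or 1.0
    return [(x / scale, y / scale) for x, y in centered]

def distribution_similarity(query_pts, template_pts):
    # query_pts[i] and template_pts[i] form one matched keypoint pair.
    q, t = normalize(query_pts), normalize(template_pts)
    mean_residual = sum(math.dist(a, b) for a, b in zip(q, t)) / len(q)
    return 1.0 / (1.0 + mean_residual)

if __name__ == "__main__":
    sign = [(0, 0), (10, 0), (10, 10), (0, 10)]
    same_layout_scaled = [(0, 0), (20, 0), (20, 20), (0, 20)]   # scaled copy
    scrambled = [(0, 0), (10, 10), (0, 10), (10, 0)]            # wrong pairing
    print(distribution_similarity(sign, same_layout_scaled))    # high
    print(distribution_similarity(sign, scrambled))             # lower
```

Two templates with the same number of matches but differently arranged match locations thus receive different scores, which is what makes such a measure more stable than a pure match count on simple patterns.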

  6. A Disciplined Architectural Approach to Scaling Data Analysis for Massive, Scientific Data

    Science.gov (United States)

    Crichton, D. J.; Braverman, A. J.; Cinquini, L.; Turmon, M.; Lee, H.; Law, E.

    2014-12-01

    Data collections across remote sensing and ground-based instruments in astronomy, Earth science, and planetary science are outpacing scientists' ability to analyze them. Furthermore, the distribution, structure, and heterogeneity of the measurements themselves pose challenges that limit the scalability of data analysis using traditional approaches. Methods for developing science data processing pipelines, distribution of scientific datasets, and performing analysis will require innovative approaches that integrate cyber-infrastructure, algorithms, and data into more systematic approaches that can more efficiently compute and reduce data, particularly distributed data. This requires the integration of computer science, machine learning, statistics and domain expertise to identify scalable architectures for data analysis. The size of data returned from Earth Science observing satellites and the magnitude of data from climate model output is predicted to grow into the tens of petabytes, challenging current data analysis paradigms. This same kind of growth is present in astronomy and planetary science data. One of the major challenges in data science and related disciplines is defining new approaches to scaling systems and analysis in order to increase scientific productivity and yield. Specific needs include: 1) identification of optimized system architectures for analyzing massive, distributed data sets; 2) algorithms for systematic analysis of massive data sets in distributed environments; and 3) the development of software infrastructures that are capable of performing massive, distributed data analysis across a comprehensive data science framework. NASA/JPL has begun an initiative in data science to address these challenges. Our goal is to evaluate how scientific productivity can be improved through optimized architectural topologies that identify how to deploy and manage the access, distribution, computation, and reduction of massive, distributed data, while

  7. Integrating macro and micro scale approaches in the agent-based modeling of residential dynamics

    Science.gov (United States)

    Saeedi, Sara

    2018-06-01

    With the advancement of computational modeling and simulation (M&S) methods as well as data collection technologies, urban dynamics modeling substantially improved over the last several decades. The complex urban dynamics processes are most effectively modeled not at the macro-scale, but following a bottom-up approach, by simulating the decisions of individual entities, or residents. Agent-based modeling (ABM) provides the key to a dynamic M&S framework that is able to integrate socioeconomic with environmental models, and to operate at both micro and macro geographical scales. In this study, a multi-agent system is proposed to simulate residential dynamics by considering spatiotemporal land use changes. In the proposed ABM, macro-scale land use change prediction is modeled by an Artificial Neural Network (ANN) and deployed as the agent environment, while micro-scale residential dynamics behaviors are autonomously implemented by household agents. These two levels of simulation interacted and jointly promoted the urbanization process in an urban area of Tehran, Iran. The model simulates the behavior of individual households in finding ideal locations to dwell. The household agents are divided into three main groups based on their income rank and they are further classified into different categories based on a number of attributes. These attributes determine the households' preferences for finding new dwellings and change with time. The ABM environment is represented by a land-use map in which the properties of the land parcels change dynamically over the simulation time. The outputs of this model are a set of maps showing the pattern of different groups of households in the city. These patterns can be used by city planners to find optimum locations for building new residential units or adding new services to the city. The simulation results show that combining macro- and micro-level simulation can give full play to the potential of the ABM to understand the driving
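
The micro level of such a model reduces, at its core, to each household agent choosing the parcel that maximizes a utility under a budget constraint. A minimal sketch of that decision rule; the grid, the price and service values, and the utility weights are invented for illustration, not the Tehran model's actual parameters:

```python
# Household relocation choice: pick the affordable free cell with the best
# trade-off between local services and residual income after the land price.

def choose_cell(income, free_cells, price, service):
    def utility(cell):
        residual = income - price[cell]
        if residual < 0:
            return float("-inf")          # cannot afford this parcel
        return 0.6 * service[cell] + 0.4 * residual
    return max(free_cells, key=utility)

if __name__ == "__main__":
    cells = [(0, 0), (0, 1), (1, 0)]
    price = {(0, 0): 9.0, (0, 1): 2.0, (1, 0): 3.0}
    service = {(0, 0): 10.0, (0, 1): 1.0, (1, 0): 8.0}
    print("low-income household settles at:", choose_cell(5.0, cells, price, service))
    print("high-income household settles at:", choose_cell(20.0, cells, price, service))
```

In the full model the environment side of this loop is dynamic: the ANN updates the land-use map each step, which changes prices and services and hence the next round of agent decisions.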

  8. Data-Driven Approach for Analyzing Hydrogeology and Groundwater Quality Across Multiple Scales.

    Science.gov (United States)

    Curtis, Zachary K; Li, Shu-Guang; Liao, Hua-Sheng; Lusch, David

    2017-08-29

    Recent trends of assimilating water well records into statewide databases provide a new opportunity for evaluating the spatial dynamics of groundwater quality and quantity. However, these datasets are rarely analyzed rigorously to address larger scientific problems because they are massive and of relatively low quality. We develop an approach for utilizing well databases to analyze physical and geochemical aspects of groundwater systems, and apply it to a multiscale investigation of the sources and dynamics of chloride (Cl-) in the near-surface groundwater of the Lower Peninsula of Michigan. Nearly 500,000 static water levels (SWLs) were critically evaluated, extracted, and analyzed to delineate long-term, average groundwater flow patterns using a nonstationary kriging technique at the basin scale (i.e., across the entire peninsula). Two regions identified as major basin-scale discharge zones, the Michigan and Saginaw Lowlands, were further analyzed with regional- and local-scale SWL models. Groundwater valleys ("discharge" zones) and mounds ("recharge" zones) were identified for all models, and the proportions of wells with elevated Cl- concentrations in each zone were calculated, visualized, and compared. Concentrations in discharge zones, where groundwater is expected to flow primarily upwards, are consistently and significantly higher than those in recharge zones. A synoptic sampling campaign in the Michigan Lowlands revealed that concentrations generally increase with depth, a trend noted in previous studies of the Saginaw Lowlands. These strong, consistent SWL and Cl- distribution patterns across multiple scales suggest that a deep source (i.e., Michigan brines) is the primary cause of the elevated chloride concentrations observed in discharge areas across the peninsula. © 2017, National Ground Water Association.
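
The zone-comparison step can be sketched with a crude stand-in for the interpolation: classify each well as sitting in a local water-level valley ("discharge") or mound ("recharge") relative to its neighbors, then compare the share of elevated-chloride wells in each zone. The neighbor rule, the 250 mg/L threshold, and the sample data are illustrative assumptions; the study itself uses nonstationary kriging over roughly 500,000 records:

```python
# Classify wells into discharge/recharge zones from static water levels (SWLs),
# then compute the proportion of elevated-chloride wells per zone.
import math

def classify_wells(wells, radius=2.0):
    # wells: list of (x, y, swl, chloride_mg_per_L); swl as water-table elevation.
    zones = []
    for i, (x, y, swl, _) in enumerate(wells):
        nbr = [w[2] for j, w in enumerate(wells)
               if j != i and math.dist((x, y), (w[0], w[1])) <= radius]
        if not nbr:
            zones.append("unclassified")
        else:
            zones.append("discharge" if swl < sum(nbr) / len(nbr) else "recharge")
    return zones

def elevated_share(wells, zones, zone, threshold=250.0):
    sel = [w for w, z in zip(wells, zones) if z == zone]
    return sum(w[3] > threshold for w in sel) / len(sel)

if __name__ == "__main__":
    wells = [(0, 0, 10, 50), (1, 0, 9, 60), (2, 0, 5, 400), (3, 0, 9, 80), (4, 0, 10, 30)]
    zones = classify_wells(wells)
    print("discharge share elevated:", elevated_share(wells, zones, "discharge"))
    print("recharge share elevated:", elevated_share(wells, zones, "recharge"))
```

A higher elevated share in discharge zones than in recharge zones is the pattern the abstract reports, consistent with upward flow carrying a deep brine signal.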

  9. Mean-cluster approach indicates cell sorting time scales are determined by collective dynamics

    Science.gov (United States)

    Beatrici, Carine P.; de Almeida, Rita M. C.; Brunnet, Leonardo G.

    2017-03-01

    Cell migration is essential to cell segregation, playing a central role in tissue formation, wound healing, and tumor evolution. Considering random mixtures of two cell types, it is still not clear which cell characteristics define clustering time scales. The mass of diffusing clusters merging with one another is expected to grow as $t^{d/(d+2)}$ when the diffusion constant scales with the inverse of the cluster mass. Cell segregation experiments deviate from that behavior. Explanations for that could arise from specific microscopic mechanisms or from collective effects, typical of active matter. Here we consider a power law connecting diffusion constant and cluster mass to propose an analytic approach to model cell segregation where we explicitly take into account finite-size corrections. The results are compared with active matter model simulations and experiments available in the literature. To investigate the role played by different mechanisms we considered different hypotheses describing cell-cell interaction: differential adhesion hypothesis and different velocities hypothesis. We find that the simulations yield normal diffusion for long time intervals. Analytic and simulation results show that (i) cluster evolution clearly tends to a scaling regime, disrupted only at finite-size limits; (ii) cluster diffusion is greatly enhanced by cell collective behavior, such that for high enough tendency to follow the neighbors, cluster diffusion may become independent of cluster size; (iii) the scaling exponent for cluster growth depends only on the mass-diffusion relation, not on the detailed local segregation mechanism. These results apply for active matter systems in general and, in particular, the mechanisms found underlying the increase in cell sorting speed certainly have deep implications in biological evolution as a selection mechanism.
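
The quoted growth law follows from a standard coalescence scaling argument; a sketch under the abstract's power-law assumption $D(M) \propto M^{-\gamma}$, not the paper's full finite-size treatment. With total density $\rho$ fixed, clusters of mass $M$ are a distance $\ell \propto (M/\rho)^{1/d}$ apart, so:

```latex
t_{\mathrm{merge}} \sim \frac{\ell^{2}}{D(M)} \propto M^{2/d}\,M^{\gamma}
\quad\Longrightarrow\quad
M(t) \propto t^{\,d/(2+\gamma d)} .
```

For $\gamma = 1$ (diffusion constant inverse to cluster mass) this reduces to $M(t) \propto t^{d/(d+2)}$, while $\gamma \to 0$ (size-independent cluster diffusion, the collective-motion limit of point (ii)) gives the faster growth $M(t) \propto t^{d/2}$, consistent with point (iii) that the exponent depends only on the mass-diffusion relation.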

  10. Solution approach for a large scale personnel transport system for a large company in Latin America

    Energy Technology Data Exchange (ETDEWEB)

    Garzón-Garnica, Eduardo-Arturo; Caballero-Morales, Santiago-Omar; Martínez-Flores, José-Luis

    2017-07-01

    The present paper focuses on the modelling and solution of a large-scale personnel transportation system in Mexico where many routes and vehicles are currently used to service 525 points. The routing system proposed can be applied to many cities in the Latin-American region. Design/methodology/approach: This system was modelled as a VRP model considering the use of real-world transit times, and the fact that routes start at the farthest point from the destination center. Experiments were performed on different sized sets of service points. As the size of the instances was increased, the performance of the heuristic method was assessed in comparison with the results of an exact algorithm, the results remaining very close between both. When the size of the instance was full-scale and the exact algorithm took too much time to solve the problem, the heuristic algorithm provided a feasible solution. Supported by the validation with smaller-scale instances, where the difference between both solutions was close to 6%, the full-scale solution obtained with the heuristic algorithm was considered to be within that same range. Findings: The proposed modelling and solving method provided a solution that would produce significant savings in the daily operation of the routes. Originality/value: The urban distribution of the cities in Latin America is unique compared to other regions in the world. The general layout of the large cities in this region includes a small town center, usually antique, and a somewhat disordered outer region. The lack of vehicle-centered urban planning poses distinct challenges for vehicle routing problems in the region. The use of a heuristic VRP combined with the results of an exact VRP allowed an improved routing plan specific to the requirements of the region to be obtained.
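
A constructive heuristic of the kind described, where each route is seeded at the unserved point farthest from the destination center and then extended greedily, can be sketched as follows. The coordinates, vehicle capacity, and Euclidean distances are illustrative assumptions; the study uses real-world transit times for 525 service points:

```python
# Farthest-seed, nearest-neighbor route construction for a capacitated VRP.
import math

def build_routes(points, center, capacity):
    unserved = set(points)
    routes = []
    while unserved:
        # Seed each route at the unserved point farthest from the center.
        start = max(unserved, key=lambda p: math.dist(p, center))
        route, current = [start], start
        unserved.remove(start)
        # Extend greedily to the nearest unserved point until the vehicle is full.
        while unserved and len(route) < capacity:
            nxt = min(unserved, key=lambda p: math.dist(p, current))
            route.append(nxt)
            unserved.remove(nxt)
            current = nxt
        route.append(center)              # every route ends at the center
        routes.append(route)
    return routes

if __name__ == "__main__":
    pts = [(9, 9), (8, 8), (1, 2), (2, 1), (0, 5)]
    for r in build_routes(pts, center=(0, 0), capacity=3):
        print(r)
```

A solution built this way is feasible by construction, which matches the paper's use of the heuristic as the fallback when the exact algorithm becomes too slow at full scale.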

  11. Solution approach for a large scale personnel transport system for a large company in Latin America

    International Nuclear Information System (INIS)

    Garzón-Garnica, Eduardo-Arturo; Caballero-Morales, Santiago-Omar; Martínez-Flores, José-Luis

    2017-01-01

    The present paper focuses on the modelling and solution of a large-scale personnel transportation system in Mexico where many routes and vehicles are currently used to service 525 points. The routing system proposed can be applied to many cities in the Latin-American region. Design/methodology/approach: This system was modelled as a VRP model considering the use of real-world transit times, and the fact that routes start at the farthest point from the destination center. Experiments were performed on different sized sets of service points. As the size of the instances was increased, the performance of the heuristic method was assessed in comparison with the results of an exact algorithm, the results remaining very close between both. When the size of the instance was full-scale and the exact algorithm took too much time to solve the problem, the heuristic algorithm provided a feasible solution. Supported by the validation with smaller-scale instances, where the difference between both solutions was close to 6%, the full-scale solution obtained with the heuristic algorithm was considered to be within that same range. Findings: The proposed modelling and solving method provided a solution that would produce significant savings in the daily operation of the routes. Originality/value: The urban distribution of the cities in Latin America is unique compared to other regions in the world. The general layout of the large cities in this region includes a small town center, usually antique, and a somewhat disordered outer region. The lack of vehicle-centered urban planning poses distinct challenges for vehicle routing problems in the region. The use of a heuristic VRP combined with the results of an exact VRP allowed an improved routing plan specific to the requirements of the region to be obtained.

  12. Solution approach for a large scale personnel transport system for a large company in Latin America

    Directory of Open Access Journals (Sweden)

    Eduardo-Arturo Garzón-Garnica

    2017-10-01

    Full Text Available Purpose: The present paper focuses on the modelling and solution of a large-scale personnel transportation system in Mexico where many routes and vehicles are currently used to service 525 points. The routing system proposed can be applied to many cities in the Latin-American region. Design/methodology/approach: This system was modelled as a VRP model considering the use of real-world transit times, and the fact that routes start at the farthest point from the destination center. Experiments were performed on different sized sets of service points. As the size of the instances was increased, the performance of the heuristic method was assessed in comparison with the results of an exact algorithm, the results remaining very close between both. When the size of the instance was full-scale and the exact algorithm took too much time to solve the problem, the heuristic algorithm provided a feasible solution. Supported by the validation with smaller-scale instances, where the difference between both solutions was close to 6%, the full-scale solution obtained with the heuristic algorithm was considered to be within that same range. Findings: The proposed modelling and solving method provided a solution that would produce significant savings in the daily operation of the routes. Originality/value: The urban distribution of the cities in Latin America is unique compared to other regions in the world. The general layout of the large cities in this region includes a small town center, usually antique, and a somewhat disordered outer region. The lack of vehicle-centered urban planning poses distinct challenges for vehicle routing problems in the region. The use of a heuristic VRP combined with the results of an exact VRP allowed an improved routing plan specific to the requirements of the region to be obtained.

  13. Fixed target flammable gas upgrades

    International Nuclear Information System (INIS)

    Schmitt, R.; Squires, B.; Gasteyer, T.; Richardson, R.

    1996-12-01

    In the past, fixed target flammable gas systems were not supported in an organized fashion. The Research Division, Mechanical Support Department began to support these gas systems for the 1995 run. This technical memo describes the new approach being used to supply chamber gases to fixed target experiments at Fermilab. It describes the engineering design features, system safety, system documentation and performance results. Gas mixtures provide the medium for electron detection in proportional and drift chambers. Usually a mixture of a noble gas and a polyatomic quenching gas is used. Sometimes a small amount of electronegative gas is added as well. The mixture required is a function of the specific chamber design, including working voltage, gain requirements, high rate capability, aging and others. For the 1995 fixed target run all the experiments requested once-through gas systems. We obtained a summary of problems from the 1990 fixed target run and made a summary of the operations logbook entries from the 1991 run. These summaries primarily include problems involving flammable gas alarms, but also include incidents where Operations was involved or informed. Usually contamination issues were dealt with by the experimenters. The summaries are attached. We discussed past operational issues with the experimenters involved. There were numerous incidents of drift chamber failure where contaminated gas was suspected. However, analyses of the gas at the time usually did not show any particular problems. This could have been because the analysis did not look for the troublesome component, because the contaminant was concentrated in the gas over the liquid and vented before the sample was taken, or because contaminants were drawn into the chambers directly through leaks or sub-atmospheric pressures. After some study we were unable to determine specific causes of past contamination problems, although in argon-ethane systems the problems were due to the ethane only

  14. Setting the renormalization scale in pQCD: Comparisons of the principle of maximum conformality with the sequential extended Brodsky-Lepage-Mackenzie approach

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Hong -Hao [Chongqing Univ., Chongqing (People' s Republic of China); Wu, Xing -Gang [Chongqing Univ., Chongqing (People' s Republic of China); Ma, Yang [Chongqing Univ., Chongqing (People' s Republic of China); Brodsky, Stanley J. [Stanford Univ., Stanford, CA (United States); Mojaza, Matin [KTH Royal Inst. of Technology and Stockholm Univ., Stockholm (Sweden)

    2015-05-26

A key problem in making precise perturbative QCD (pQCD) predictions is how to set the renormalization scale of the running coupling unambiguously at each finite order. Eliminating the uncertainty in setting the renormalization scale in pQCD would greatly increase the precision of collider tests of the Standard Model and the sensitivity to new phenomena. Renormalization group invariance requires that predictions for observables must also be independent of the choice of the renormalization scheme. The well-known Brodsky-Lepage-Mackenzie (BLM) approach cannot be easily extended beyond next-to-next-to-leading order of pQCD. Several suggestions have been proposed to extend the BLM approach to all orders. In this paper we discuss two distinct methods. One is based on the “Principle of Maximum Conformality” (PMC), which provides a systematic all-orders method to eliminate the scale and scheme ambiguities of pQCD. The PMC extends the BLM procedure to all orders using renormalization group methods; as an outcome, it significantly improves the pQCD convergence by eliminating renormalon divergences. An alternative method is the “sequential extended BLM” (seBLM) approach, which was primarily designed to improve the convergence of pQCD series. The seBLM, as originally proposed, introduces auxiliary fields and follows the pattern of the β0-expansion to fix the renormalization scale. However, the seBLM requires a recomputation of pQCD amplitudes including the auxiliary fields; due to the limited availability of calculations using these auxiliary fields, the seBLM has only been applied to a few processes at low orders. To avoid the complications of adding extra fields, we propose a modified version of seBLM which allows us to apply the method to higher orders. We then perform detailed numerical comparisons of the two alternative scale-setting approaches by investigating their predictions for the annihilation cross section ratio R
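The scale-fixing step that both PMC and seBLM generalize can be stated compactly. The following is a schematic illustration, not taken from the record above (normalizations and sign conventions vary between papers): write a next-to-leading-order prediction with the nf-dependent part of the NLO coefficient collected into a term proportional to β0, then absorb that term into the running coupling.

```latex
% NLO prediction, with a_s(\mu) = \alpha_s(\mu)/(4\pi); A\beta_0 collects the n_f terms
\rho \;=\; r_0\,\alpha_s(\mu)\bigl[\,1 + (A\beta_0 + B)\,a_s(\mu)\,\bigr],
\qquad \beta_0 = 11 - \tfrac{2}{3}\,n_f .
% One-loop running: \alpha_s(\mu) = \alpha_s(Q^*)\bigl[1 + \beta_0\,a_s\ln(Q^{*2}/\mu^2)\bigr] + O(a_s^2).
% Choosing the BLM scale so that the \beta_0-dependent term is absorbed:
\ln\frac{Q^{*2}}{\mu^2} = -A
\quad\Longrightarrow\quad
\rho \;=\; r_0\,\alpha_s(Q^*)\bigl[\,1 + B\,a_s(Q^*)\,\bigr].
```

All dependence on the number of quark flavors (i.e., on the quark loops that drive the running) is moved into the scale Q*, leaving the conformal coefficient B; the PMC repeats this absorption systematically order by order.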

  15. An Interdisciplinary Approach to Developing Renewable Energy Mixes at the Community Scale

    Science.gov (United States)

    Gormally, Alexandra M.; Whyatt, James D.; Timmis, Roger J.; Pooley, Colin G.

    2013-04-01

Renewable energy has risen on the global political agenda due to concerns over climate change and energy security. The European Union (EU) currently has a target of 20% renewable energy by the year 2020 and there is increasing focus on the ways in which these targets can be achieved. Here we focus on the UK context, which could be considered to be lagging behind other EU countries in terms of targets and implementation. The UK has a lower overall target of 15% renewable energy by 2020 and in 2011 reached only 3.8% (DUKES, 2012), one of the lowest progressions compared to other EU Member States (European Commission, 2012). The reticence of the UK to reach such targets could in part be due to its dependence on its current energy mix and a highly centralised electricity grid system, which does not lend itself easily to the adoption of renewable technologies. Additionally, increasing levels of demand and the need to raise energy awareness are key concerns in terms of achieving energy security in the UK. There is also growing concern from the public about increasing fuel and energy bills. One possible solution to some of these problems could be the adoption of small-scale distributed renewable schemes implemented at the community scale with local ownership or involvement, for example through energy co-operatives. The notion of the energy co-operative is well understood elsewhere in Europe but unfamiliar to many UK residents due to the country's centralised approach to energy provision. There are many benefits associated with engaging in distributed renewable energy systems. In addition to financial benefits, participation may raise energy awareness and can lead to positive responses towards renewable technologies. Here we briefly explore how a mix of small-scale renewables, including wind, hydro-power and solar PV, have been implemented and managed by a small island community in the Scottish Hebrides to achieve over 90% of their electricity needs from renewable

  16. A Self-Organizing Spatial Clustering Approach to Support Large-Scale Network RTK Systems.

    Science.gov (United States)

    Shen, Lili; Guo, Jiming; Wang, Lei

    2018-06-06

The network real-time kinematic (RTK) technique can provide centimeter-level real time positioning solutions and play a key role in geo-spatial infrastructure. With ever-increasing popularity, network RTK systems will face issues in the support of large numbers of concurrent users. In the past, high-precision positioning services were oriented towards professionals and only supported a few concurrent users. Currently, precise positioning provides a spatial foundation for artificial intelligence (AI), and countless smart devices (autonomous cars, unmanned aerial vehicles (UAVs), robotic equipment, etc.) require precise positioning services. Therefore, the development of approaches to support large-scale network RTK systems is urgent. In this study, we proposed a self-organizing spatial clustering (SOSC) approach which automatically clusters online users to reduce the computational load on the network RTK system server side. The experimental results indicate that both the SOSC algorithm and the grid algorithm can reduce the computational load efficiently, while the SOSC algorithm gives a more elastic and adaptive clustering solution with different datasets. The SOSC algorithm determines the cluster number and the mean distance to cluster center (MDTCC) according to the data set, while the grid approaches are all predefined. The side-effects of clustering algorithms on the user side are analyzed with real global navigation satellite system (GNSS) data sets. The experimental results indicate that 10 km can be safely used as the cluster radius threshold for the SOSC algorithm without significantly reducing the positioning precision and reliability on the user side.
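The server-side saving described above comes from grouping nearby online users so that one network-RTK computation serves a whole cluster. A minimal greedy radius-threshold clustering in planar kilometre coordinates, using the 10 km threshold quoted in the abstract, sketches the idea; the actual SOSC algorithm is not reproduced here, and all function names and coordinates below are illustrative assumptions.

```python
import math

def radius_cluster(points, radius_km=10.0):
    """Greedy radius-threshold clustering (illustrative stand-in for SOSC).

    A point joins the nearest existing cluster whose center lies within
    radius_km; otherwise it seeds a new cluster. Centers are running means."""
    centers, members = [], []
    for p in points:
        dists = [math.dist(p, c) for c in centers]
        best = min(range(len(centers)), key=dists.__getitem__, default=None)
        if best is not None and dists[best] <= radius_km:
            members[best].append(p)
            n = len(members[best])
            centers[best] = (sum(q[0] for q in members[best]) / n,
                             sum(q[1] for q in members[best]) / n)
        else:
            centers.append(p)
            members.append([p])
    return centers, members

def mean_dist_to_center(centers, members):
    """MDTCC: mean distance from each user to its cluster center."""
    d = [math.dist(p, c) for c, ms in zip(centers, members) for p in ms]
    return sum(d) / len(d)

# Two well-separated user groups (planar km coordinates) -> two clusters.
users = [(0, 0), (1, 1), (2, 0), (50, 50), (51, 49)]
centers, members = radius_cluster(users)
print(len(centers))  # 2
```

A production system would work on geodetic coordinates and re-cluster as users come and go; the greedy pass above only illustrates how a radius threshold bounds the MDTCC.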

  17. A Large-Scale Design Integration Approach Developed in Conjunction with the Ares Launch Vehicle Program

    Science.gov (United States)

    Redmon, John W.; Shirley, Michael C.; Kinard, Paul S.

    2012-01-01

This paper presents a method for performing large-scale design integration, taking a classical 2D drawing envelope and interface approach and applying it to modern three dimensional computer aided design (3D CAD) systems. Today, the paradigm often used when performing design integration with 3D models involves a digital mockup of an overall vehicle, in the form of a massive, fully detailed CAD assembly, thereby adding unnecessary burden and overhead to design and product data management processes. While fully detailed data may yield a broad depth of design detail, pertinent integration features are often obscured under the excessive amounts of information, making them difficult to discern. In contrast, the envelope and interface method reduces both the amount and complexity of information necessary for design integration while yielding significant savings in time and effort when applied to today's complex design integration projects. This approach, combining classical and modern methods, proved advantageous during the complex design integration activities of the Ares I vehicle. Downstream processes that benefited from this approach through reduced development and design cycle time include: creation of analysis models for the aerodynamic discipline; vehicle-to-ground interface development; and documentation development for the vehicle assembly.

  18. An objective and parsimonious approach for classifying natural flow regimes at a continental scale

    Science.gov (United States)

    Archfield, S. A.; Kennen, J.; Carlisle, D.; Wolock, D.

    2013-12-01

    Hydroecological stream classification--the process of grouping streams by similar hydrologic responses and, thereby, similar aquatic habitat--has been widely accepted and is often one of the first steps towards developing ecological flow targets. Despite its importance, the last national classification of streamgauges was completed about 20 years ago. A new classification of 1,534 streamgauges in the contiguous United States is presented using a novel and parsimonious approach to understand similarity in ecological streamflow response. This new classification approach uses seven fundamental daily streamflow statistics (FDSS) rather than winnowing down an uncorrelated subset from 200 or more ecologically relevant streamflow statistics (ERSS) commonly used in hydroecological classification studies. The results of this investigation demonstrate that the distributions of 33 tested ERSS are consistently different among the classes derived from the seven FDSS. It is further shown that classification based solely on the 33 ERSS generally does a poorer job in grouping similar streamgauges than the classification based on the seven FDSS. This new classification approach has the additional advantages of overcoming some of the subjectivity associated with the selection of the classification variables and provides a set of robust continental-scale classes of US streamgauges.
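The classification idea above — summarize each gauge's daily hydrograph with a few statistics, then group gauges by similarity in that low-dimensional space — can be sketched as follows. The statistics below are illustrative stand-ins, not the paper's seven FDSS, and the synthetic "gauges" are invented for the example.

```python
import math
import statistics

def flow_stats(q):
    """A few daily-streamflow summary statistics for one gauge: mean,
    coefficient of variation, lag-1 autocorrelation and a flashiness index.
    Illustrative stand-ins for the paper's seven FDSS."""
    mu = statistics.fmean(q)
    var = sum((x - mu) ** 2 for x in q)
    lag1 = sum((q[i] - mu) * (q[i - 1] - mu) for i in range(1, len(q))) / var
    flashiness = sum(abs(q[i] - q[i - 1]) for i in range(1, len(q))) / sum(q)
    return [mu, statistics.pstdev(q) / mu, lag1, flashiness]

def standardize(rows):
    """Z-score each statistic across gauges so no single one dominates."""
    cols = list(zip(*rows))
    mu = [statistics.fmean(c) for c in cols]
    sd = [statistics.pstdev(c) or 1.0 for c in cols]
    return [[(v - m) / s for v, m, s in zip(r, mu, sd)] for r in rows]

# Three synthetic gauges: two spiky ("flashy") records and one damped one.
series = {
    "flashy_a": [1.0 + (9.0 if i % 10 == 0 else 0.0) for i in range(200)],
    "flashy_b": [2.0 + (8.0 if i % 12 == 0 else 0.0) for i in range(200)],
    "damped":   [5.0 + math.sin(i / 30.0) for i in range(200)],
}
names = list(series)
z = standardize([flow_stats(series[n]) for n in names])

def nearest(i):
    """Index of the gauge most similar to gauge i in statistic space."""
    return min((j for j in range(len(z)) if j != i),
               key=lambda j: math.dist(z[i], z[j]))

print(names[nearest(names.index("flashy_a"))])
```

The two flashy gauges end up mutually nearest in the standardized statistic space, which is the grouping behaviour a clustering step would then formalize.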

  19. Fixed-film processes. Part 1

    International Nuclear Information System (INIS)

    Canziani, R.

    1999-01-01

Recently, full-scale fixed-film or mixed suspended- and fixed-biomass bioreactors have been applied in many wastewater treatment plants. These processes no longer depend on biomass settleability and can be used to improve the performance of existing plants as required by more stringent discharge permit limits, especially for nutrients and suspended solids. Also, the processes may work at high rates, making it possible to build small-footprint installations. Fixed-film processes include trickling filters, moving bed reactors and fluidized bed reactors. In this first part, the theoretical bases governing fixed-film processes are briefly outlined, with some simple examples of calculations underlining the main differences from conventional activated sludge processes

  20. Fixed-film processes. Part 2

    International Nuclear Information System (INIS)

    Canziani, R.

    1999-01-01

Recently, full-scale fixed-film or mixed suspended- and fixed-biomass bioreactors have been applied in many wastewater treatment plants. These processes no longer depend on biomass settleability and can be used to improve the performance of existing plants as required by more stringent discharge permit limits, especially for nutrients and suspended solids. Also, the processes may work at high rates, making it possible to build small-footprint installations. Fixed-film processes include trickling filters (and combined suspended- and fixed-film processes), rotating biological contactors, biological aerated submerged filters, moving bed reactors and fluidized bed reactors. In the first part, the theoretical bases governing fixed-film processes were briefly outlined, with some simple examples of calculations, underlining the main differences from conventional activated sludge processes. In this second part, the most common types of reactors are reviewed

  1. Estimate the time varying brain receptor occupancy in PET imaging experiments using non-linear fixed and mixed effect modeling approach

    International Nuclear Information System (INIS)

    Zamuner, Stefano; Gomeni, Roberto; Bye, Alan

    2002-01-01

Positron-Emission Tomography (PET) is an imaging technology currently used in drug development as a non-invasive measure of drug distribution and interaction with the biochemical target system. The level of receptor occupancy achieved by a compound can be estimated by comparing time-activity measurements in an experiment using the tracer alone with the activity measured when the tracer is given following administration of unlabelled compound. The effective use of this surrogate marker as an enabling tool for drug development requires the definition of a model linking brain receptor occupancy with the fluctuation of plasma concentrations. However, the predictive performance of such a model is strongly related to the precision of the estimate of receptor occupancy evaluated in PET scans collected at different times following drug treatment. Several methods have been proposed for the analysis and quantification of the ligand-receptor interactions investigated from PET data. The aim of the present study is to evaluate alternative parameter estimation strategies based on non-linear mixed effect models that account for intra- and inter-subject variability in the time-activity data and for covariates potentially explaining this variability. A comparison of the different modeling approaches is presented using real data. The results of this comparison indicate that the mixed effect approach, with a primary model partitioning the variance in terms of Inter-Individual Variability (IIV) and Inter-Occasion Variability (IOV) and a second-stage model relating the changes in binding potential to the dose of unlabelled drug, is the preferred approach.

  2. Least fixed points revisited

    NARCIS (Netherlands)

    J.W. de Bakker (Jaco)

    1975-01-01

Parameter mechanisms for recursive procedures are investigated. Contrary to the view of Manna et al., it is argued that both call-by-value and call-by-name mechanisms yield the least fixed points of the functionals determined by the bodies of the procedures concerned. These functionals

  3. A computationally inexpensive CFD approach for small-scale biomass burners equipped with enhanced air staging

    International Nuclear Information System (INIS)

    Buchmayr, M.; Gruber, J.; Hargassner, M.; Hochenauer, C.

    2016-01-01

Highlights: • Time-efficient CFD model to predict biomass boiler performance. • Boundary conditions for numerical modeling are provided by measurements. • Tars in the product from primary combustion were considered. • Simulation results were validated by experiments on a real-scale reactor. • Very good accordance between experimental and simulation results. - Abstract: Computational Fluid Dynamics (CFD) is an upcoming technique for optimization and as a part of the design process of biomass combustion systems. So far, an accurate simulation of biomass combustion can only be provided with high computational effort. This work presents an accurate, time-efficient CFD approach for small-scale biomass combustion systems equipped with enhanced air staging. The model can handle the large amount of biomass tars in the primary combustion product at very low primary air ratios. Gas-phase combustion in the freeboard was modelled with the Steady Flamelet Model (SFM) together with a detailed heptane combustion mechanism. The advantage of the SFM is that complex combustion chemistry can be taken into account at low computational effort, because only two additional transport equations have to be solved to describe the chemistry in the reacting flow. Boundary conditions for the primary combustion product composition were obtained from the fuel bed by experiments. The fuel bed data were used as the fuel inlet boundary condition for the gas-phase combustion model. The numerical and experimental investigations were performed for different operating conditions and varying wood-chip moisture on a specially designed real-scale reactor. The numerical predictions were validated with experimental results and very good agreement was found. With the presented approach accurate results can be provided within 24 h using a standard Central Processing Unit (CPU) consisting of six cores. Case studies, e.g. for combustion geometry improvement, can be realized effectively due to the short calculation

  4. A Concurrent Mixed Methods Approach to Examining the Quantitative and Qualitative Meaningfulness of Absolute Magnitude Estimation Scales in Survey Research

    Science.gov (United States)

    Koskey, Kristin L. K.; Stewart, Victoria C.

    2014-01-01

    This small "n" observational study used a concurrent mixed methods approach to address a void in the literature with regard to the qualitative meaningfulness of the data yielded by absolute magnitude estimation scaling (MES) used to rate subjective stimuli. We investigated whether respondents' scales progressed from less to more and…

  5. A new approach to motion control of torque-constrained manipulators by using time-scaling of reference trajectories

    Energy Technology Data Exchange (ETDEWEB)

    Moreno-Valenzuela, Javier; Orozco-Manriquez, Ernesto [Digital del IPN, CITEDI-IPN, Tijuana, (Mexico)

    2009-12-15

    We introduce a control scheme based on using a trajectory tracking controller and an algorithm for on-line time scaling of the reference trajectories. The reference trajectories are time-scaled according to the measured tracking errors and the detected torque/acceleration saturation. Experiments are presented to illustrate the advantages of the proposed approach

  6. A new approach to motion control of torque-constrained manipulators by using time-scaling of reference trajectories

    International Nuclear Information System (INIS)

    Moreno-Valenzuela, Javier; Orozco-Manriquez, Ernesto

    2009-01-01

    We introduce a control scheme based on using a trajectory tracking controller and an algorithm for on-line time scaling of the reference trajectories. The reference trajectories are time-scaled according to the measured tracking errors and the detected torque/acceleration saturation. Experiments are presented to illustrate the advantages of the proposed approach
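The mechanism behind on-line time scaling is that slowing a reference trajectory by a factor s reduces its acceleration, and hence torque, demand by s². A toy sketch of that rule, with all numbers hypothetical rather than taken from the paper's experiments:

```python
import math

def peak_accel(scale, dt=1e-3, horizon=2 * math.pi):
    """Peak acceleration of the time-scaled reference q(t) = sin(scale*t),
    estimated by central finite differences over one horizon."""
    n = int(horizon / dt)
    q = [math.sin(scale * i * dt) for i in range(n)]
    return max(abs(q[i + 1] - 2 * q[i] + q[i - 1]) / dt ** 2
               for i in range(1, n - 1))

def feasible_scale(torque_limit, inertia=1.0, scale=1.0, shrink=0.9):
    """Illustrative on-line rule: shrink the time scale until the commanded
    torque (inertia * peak acceleration) fits under the actuator limit.
    Slowing by a factor s cuts the acceleration demand by s**2."""
    while inertia * peak_accel(scale) > torque_limit:
        scale *= shrink
    return scale

s = feasible_scale(torque_limit=0.5)
print(round(s, 4))  # 0.6561: four 10% slow-downs make the motion feasible
```

The paper's scheme adapts the scaling on-line from measured tracking errors and detected saturation rather than from a precomputed peak, but the feasibility trade-off is the same.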

  7. Sodium-cutting: a new top-down approach to cut open nanostructures on nonplanar surfaces on a large scale.

    Science.gov (United States)

    Chen, Wei; Deng, Da

    2014-11-11

We report a new, low-cost and simple top-down approach, "sodium-cutting", to cut and open nanostructures deposited on a nonplanar surface on a large scale. The feasibility of sodium-cutting was demonstrated by successfully cutting open ∼100% of carbon nanospheres into nanobowls on a large scale from Sn@C nanospheres for the first time.

  8. Assessing a Top-Down Modeling Approach for Seasonal Scale Snow Sensitivity

    Science.gov (United States)

    Luce, C. H.; Lute, A.

    2017-12-01

Mechanistic snow models are commonly applied to assess changes to snowpacks in a warming climate. Such assessments involve a number of assumptions about details of weather at daily to sub-seasonal time scales. Models of season-scale behavior can provide contrast for evaluating behavior at time scales more in concordance with climate warming projections. Such top-down models, however, involve a degree of empiricism, with attendant caveats about the potential of a changing climate to affect calibrated relationships. We estimated the sensitivity of snowpacks from 497 Snowpack Telemetry (SNOTEL) stations in the western U.S. based on differences in climate between stations (spatial analog). We examined the sensitivity of April 1 snow water equivalent (SWE) and mean snow residence time (SRT) to variations in Nov-Mar precipitation and average Nov-Mar temperature using multivariate local-fit regressions. We tested the modeling approach using a leave-one-out cross-validation as well as targeted two-fold non-random cross-validations contrasting, for example, warm vs. cold years, dry vs. wet years, and north vs. south stations. Nash-Sutcliffe Efficiency (NSE) values for the validations were strong for April 1 SWE, ranging from 0.71 to 0.90, and still reasonable, but weaker, for SRT, in the range of 0.64 to 0.81. From these ranges, we exclude validations where the training data do not represent the range of target data. A likely reason for differences in validation between the two metrics is that the SWE model reflects the influence of conservation of mass while using temperature as an indicator of the season-scale energy balance; in contrast, SRT depends more strongly on the energy balance aspects of the problem. Model forms with lower numbers of parameters generally validated better than more complex model forms, with the caveat that pseudoreplication could encourage selection of more complex models when validation contrasts were weak. Overall, the split sample validations
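The two evaluation ingredients named above, Nash-Sutcliffe Efficiency and leave-one-out cross-validation, can be sketched on synthetic station data. The k-nearest-neighbour regressor below is a crude stand-in for the paper's multivariate local-fit regressions, and every climate number is hypothetical.

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 - residual SS / SS about the obs mean."""
    mu = sum(obs) / len(obs)
    return 1.0 - (sum((o - s) ** 2 for o, s in zip(obs, sim))
                  / sum((o - mu) ** 2 for o in obs))

def knn_predict(x_train, y_train, x, k=3):
    """k-nearest-neighbour regression: average the k most similar stations."""
    idx = sorted(range(len(x_train)), key=lambda i: math.dist(x_train[i], x))[:k]
    return sum(y_train[i] for i in idx) / k

# Hypothetical stations: April-1 SWE rises with Nov-Mar precipitation (mm)
# and falls with Nov-Mar mean temperature (deg C), floored at zero.
stations = [(p, t) for p in range(200, 1001, 100) for t in range(-8, 3, 2)]
swe = [max(0.0, 0.6 * p - 40.0 * t - 150.0) for p, t in stations]

# Leave-one-out cross-validation: predict each station from all the others
# (the spatial-analog idea -- climate differences in space stand in for
# climate change in time).
preds = [knn_predict(stations[:i] + stations[i + 1:], swe[:i] + swe[i + 1:],
                     stations[i]) for i in range(len(stations))]
print(round(nse(swe, preds), 2))
```

NSE equals 1 for a perfect model and drops below 0 when the model is worse than predicting the observed mean, which is what makes it a convenient single score for the cross-validations above.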

  9. Preparing laboratory and real-world EEG data for large-scale analysis: A containerized approach

    Directory of Open Access Journals (Sweden)

    Nima eBigdely-Shamlo

    2016-03-01

Full Text Available Large-scale analysis of EEG and other physiological measures promises new insights into brain processes and more accurate and robust brain-computer interface (BCI) models. However, the absence of standardized vocabularies for annotating events in a machine-understandable manner, the welter of collection-specific data organizations, the difficulty in moving data across processing platforms, and the unavailability of agreed-upon standards for preprocessing have prevented large-scale analyses of EEG. Here we describe a containerized approach and freely available tools we have developed to facilitate the process of annotating, packaging, and preprocessing EEG data collections to enable data sharing, archiving, large-scale machine learning/data mining and (meta-)analysis. The EEG Study Schema (ESS) comprises three data Levels, each with its own XML-document schema and file/folder convention, plus a standardized (PREP) pipeline to move raw (Data Level 1) data to a basic preprocessed state (Data Level 2) suitable for application of a large class of EEG analysis methods. Researchers can ship a study as a single unit and operate on its data using a standardized interface. ESS does not require a central database and provides all the metadata necessary to execute a wide variety of EEG processing pipelines. The primary focus of ESS is automated in-depth analysis and meta-analysis of EEG studies. However, ESS can also encapsulate meta-information for other modalities, such as eye tracking, that are increasingly used in both laboratory and real-world neuroimaging. ESS schema and tools are freely available at eegstudy.org, and a central catalog of over 850 GB of existing data in ESS format is available at studycatalog.org. These tools and resources are part of a larger effort to enable data sharing at sufficient scale for researchers to engage in truly large-scale EEG analysis and data mining (BigEEG.org).

  10. Stage I surface crack formation in thermal fatigue: A predictive multi-scale approach

    International Nuclear Information System (INIS)

    Osterstock, S.; Robertson, C.; Sauzay, M.; Aubin, V.; Degallaix, S.

    2010-01-01

A multi-scale numerical model is developed to predict the formation of stage I cracks under thermal fatigue loading conditions. The proposed approach comprises two distinct calculation steps. Firstly, the number of cycles to micro-crack initiation is determined in individual grains. The adopted initiation model depends on local stress-strain conditions, relative to sub-grain plasticity, grain orientation and grain deformation incompatibilities. Secondly, the formation of surface cracks two to four grains long (stage I) is predicted by accounting for micro-crack coalescence in three dimensions. The method described in this paper is applied to a 500-grain aggregate loaded in representative thermal fatigue conditions. Preliminary results provide quantitative insight regarding the position, density, spacing and orientation of stage I surface cracks and the subsequent formation of crack networks. The proposed method is fully deterministic, provided all grain crystallographic orientations and micro-crack linking thresholds are specified. (authors)

  11. A Person-Centered Approach to Financial Capacity Assessment: Preliminary Development of a New Rating Scale.

    Science.gov (United States)

    Lichtenberg, Peter A; Stoltman, Jonathan; Ficker, Lisa J; Iris, Madelyn; Mast, Benjamin

    2015-01-01

Financial exploitation and financial capacity issues often overlap when a gerontologist assesses whether an older adult's financial decision is an autonomous, capable choice. Our goal is to describe a new conceptual model for assessing financial decisions using principles of person-centered approaches and to introduce a new instrument, the Lichtenberg Financial Decision Rating Scale (LFDRS). We created a conceptual model, convened meetings of experts from various disciplines to critique the model and provide input on content and structure, and selected the final items. We then videotaped administration of the LFDRS to five older adults and had 10 experts provide independent ratings. The LFDRS demonstrated good to excellent inter-rater agreement. The LFDRS is a new tool that allows gerontologists to systematically gather information about a specific financial decision and the decisional abilities in question.

  12. Modelling an industrial anaerobic granular reactor using a multi-scale approach

    DEFF Research Database (Denmark)

    Feldman, Hannah; Flores Alsina, Xavier; Ramin, Pedram

    2017-01-01

The objective of this paper is to show the results of an industrial project dealing with modelling of anaerobic digesters. A multi-scale mathematical approach is developed to describe reactor hydrodynamics, granule growth/distribution and microbial competition/inhibition for substrate/space within the biofilm. The main biochemical and physico-chemical processes in the model are based on the Anaerobic Digestion Model No 1 (ADM1), extended with the fate of phosphorus (P), sulfur (S) and ethanol (Et-OH). Wastewater dynamic conditions are reproduced and data frequency increased using the Benchmark … simulations show the effects on the overall process performance when operational (pH) and loading (S:COD) conditions are modified. Lastly, the effect of intra-granular precipitation on the overall organic/inorganic distribution is assessed at: 1) different times; and 2) reactor heights.

  13. Quantum scaling in many-body systems an approach to quantum phase transitions

    CERN Document Server

    Continentino, Mucio

    2017-01-01

    Quantum phase transitions are strongly relevant in a number of fields, ranging from condensed matter to cold atom physics and quantum field theory. This book, now in its second edition, approaches the problem of quantum phase transitions from a new and unifying perspective. Topics addressed include the concepts of scale and time invariance and their significance for quantum criticality, as well as brand new chapters on superfluid and superconductor quantum critical points, and quantum first order transitions. The renormalisation group in real and momentum space is also established as the proper language to describe the behaviour of systems close to a quantum phase transition. These phenomena introduce a number of theoretical challenges which are of major importance for driving new experiments. Being strongly motivated and oriented towards understanding experimental results, this is an excellent text for graduates, as well as theorists, experimentalists and those with an interest in quantum criticality.

  14. Burnout of pulverized biomass particles in large scale boiler - Single particle model approach

    Energy Technology Data Exchange (ETDEWEB)

    Saastamoinen, Jaakko; Aho, Martti; Moilanen, Antero [VTT Technical Research Centre of Finland, Box 1603, 40101 Jyvaeskylae (Finland); Soerensen, Lasse Holst [ReaTech/ReAddit, Frederiksborgsveij 399, Niels Bohr, DK-4000 Roskilde (Denmark); Clausen, Soennik [Risoe National Laboratory, DK-4000 Roskilde (Denmark); Berg, Mogens [ENERGI E2 A/S, A.C. Meyers Vaenge 9, DK-2450 Copenhagen SV (Denmark)

    2010-05-15

The burning of coal and biomass particles is studied and compared by measurements in an entrained flow reactor and by modelling. The results are applied to study the burning of pulverized biomass in a large-scale utility boiler originally planned for coal. A simplified single-particle approach, in which the particle combustion model is coupled with the one-dimensional equation of motion of the particle, is applied to calculate burnout in the boiler. Owing to its lower density and greater reactivity, biomass can reach complete burnout at much larger particle sizes than coal. The burner location and the trajectories of the particles might be optimised to maximise the residence time and burnout. (author)
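The claim that biomass tolerates much larger particle sizes follows from a simple single-particle balance. A shrinking-sphere sketch with hypothetical property values (not the paper's model or data):

```python
def burnout_time(d0, rho, k_s):
    """Shrinking-sphere char burnout at constant surface rate k_s (kg/m2/s):
    dm/dt = -k_s * A with m = rho*pi*d**3/6 gives dd/dt = -2*k_s/rho, so the
    burnout time is t_b = d0*rho/(2*k_s). A single-particle sketch; the paper
    couples such a model with a 1-D equation of motion for the particle."""
    return d0 * rho / (2.0 * k_s)

# Hypothetical properties: biomass char is less dense and more reactive than
# coal char, so a much larger biomass particle burns out in the same time.
t_coal = burnout_time(100e-6, rho=1300.0, k_s=0.01)   # 100 um coal char
d_bio = 2.0 * 0.02 * t_coal / 500.0                   # same t_b at rho=500, k_s=0.02
print(round(d_bio / 100e-6, 1))  # 5.2 -- biomass may be ~5x coarser
```

With these illustrative numbers the density ratio (1300/500) and reactivity ratio (0.02/0.01) multiply to a factor of 5.2 in allowable diameter, which is the qualitative point made in the abstract.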

  15. A semiparametric graphical modelling approach for large-scale equity selection.

    Science.gov (United States)

    Liu, Han; Mulvey, John; Zhao, Tianqi

    2016-01-01

    We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.
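The selection principle above — pick stocks that are as independent as possible — can be illustrated with a greedy correlation-based stand-in; the paper's elliptical-copula graphical model and rank-based estimators are not reproduced here, and the tiny return series are invented.

```python
import statistics

def corr(x, y):
    """Pearson correlation of two return series."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def pick_independent(returns, k):
    """Greedily build a set of k names that are as mutually independent as
    possible: each step adds the stock whose worst absolute correlation
    with the already-chosen set is smallest."""
    names = list(returns)
    chosen = [names[0]]
    while len(chosen) < k:
        chosen.append(min((n for n in names if n not in chosen),
                          key=lambda n: max(abs(corr(returns[n], returns[c]))
                                            for c in chosen)))
    return chosen

returns = {
    "A": [1, -1, 1, -1, 1, -1, 1, -1],
    "B": [1, -1, 1, -1, 1, -1, 1, -1],   # a clone of A (correlation 1)
    "C": [1, 1, -1, -1, 1, 1, -1, -1],   # uncorrelated with A
}
print(pick_independent(returns, 2))  # ['A', 'C'] -- the clone is skipped
```

Independent holdings are what make periodic rebalancing profitable on average, which is the "rebalancing gain" the abstract refers to.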

  16. Integrating adaptive behaviour in large-scale flood risk assessments: an Agent-Based Modelling approach

    Science.gov (United States)

    Haer, Toon; Aerts, Jeroen

    2015-04-01

Between 1998 and 2009, Europe suffered over 213 major damaging floods, causing 1126 deaths and displacing around half a million people. In this period, floods caused at least 52 billion euro in insured economic losses, making floods the most costly natural hazard faced in Europe. In many low-lying areas, the main strategy to cope with floods is to reduce the risk of the hazard through flood defence structures, like dikes and levees. However, it is suggested that part of the responsibility for flood protection needs to shift to households and businesses in areas at risk, and that governments and insurers can effectively stimulate the implementation of individual protective measures. However, adaptive behaviour towards flood risk reduction and the interaction between the government, insurers, and individuals has hardly been studied in large-scale flood risk assessments. In this study, a European Agent-Based Model is developed, including agent representatives for the administrative stakeholders of European Member States, insurer and reinsurer markets, and individuals following complex behaviour models. The Agent-Based Modelling approach allows for an in-depth analysis of the interaction between heterogeneous autonomous agents and the resulting (non-)adaptive behaviour. Existing flood damage models are part of the European Agent-Based Model to allow for a dynamic response of both the agents and the environment to changing flood risk and protective efforts. By following an Agent-Based Modelling approach, this study is a first contribution to overcoming the limitations of traditional large-scale flood risk models, in which the influence of individual adaptive behaviour towards flood risk reduction is often lacking.
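The agent-based mechanism described — households adjusting protective behaviour as their risk perception responds to flood events, with subsidies lowering the adoption barrier — can be sketched in a toy model. All parameter values and behavioural rules below are hypothetical illustrations, not the study's calibrated model.

```python
import random

def adoption_rate(n=2000, years=25, p_flood=0.02, damage=50_000.0,
                  cost=2_000.0, reduction=0.4, horizon=10, subsidy=0.0,
                  seed=7):
    """Toy household agents deciding on a flood-protection measure.

    Each household holds a perceived flood probability that doubles after a
    flood year and decays otherwise; it adopts once the perceived expected
    avoided damage over `horizon` years exceeds the subsidised cost."""
    rng = random.Random(seed)
    perceived = [p_flood * rng.uniform(0.2, 2.0) for _ in range(n)]
    adopted = [False] * n
    for _ in range(years):
        flood_year = rng.random() < p_flood
        for i in range(n):
            perceived[i] = (min(1.0, perceived[i] * 2.0) if flood_year
                            else perceived[i] * 0.9)
            if perceived[i] * horizon * damage * reduction > cost * (1 - subsidy):
                adopted[i] = True
    return sum(adopted) / n

# Same random seed, so a subsidy can only enlarge the set of adopters.
base, subsidised = adoption_rate(), adoption_rate(subsidy=0.5)
print(base <= subsidised)  # True
```

Even this toy version shows the interaction the abstract is after: government policy (the subsidy) changes individual adoption, which in turn changes the residual risk a full model would feed back into the damage calculation.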

  17. Simple Kinematic Pathway Approach (KPA) to Catchment-scale Travel Time and Water Age Distributions

    Science.gov (United States)

    Soltani, S. S.; Cvetkovic, V.; Destouni, G.

    2017-12-01

    The distribution of catchment-scale water travel times is strongly influenced by morphological dispersion and is partitioned between hillslope and larger, regional scales. We explore whether hillslope travel times are predictable using a simple semi-analytical "kinematic pathway approach" (KPA) that accounts for dispersion on the two levels of morphological and macro-dispersion. The study gives new insights into shallow (hillslope) and deep (regional) groundwater travel times by comparing numerical simulations of travel time distributions, referred to as the "dynamic model", with corresponding KPA computations for three different real catchment case studies in Sweden. KPA uses basic structural and hydrological data to compute transient water travel time (forward mode) and age (backward mode) distributions at the catchment outlet. Longitudinal and morphological dispersion components are reflected in the KPA computations by assuming an effective Peclet number and topographically driven pathway length distributions, respectively. Numerical simulations of advective travel times are obtained by means of particle tracking using the fully-integrated flow model MIKE SHE. The comparison of computed cumulative distribution functions of travel times shows a significant influence of morphological dispersion and groundwater recharge rate on the compatibility of the "kinematic pathway" and "dynamic" models. Zones of high recharge rate in the "dynamic" models are associated with topographically driven groundwater flow paths to adjacent discharge zones, e.g. rivers and lakes, through relatively shallow pathway compartments. These zones exhibit more compatible behavior between the "dynamic" and "kinematic pathway" models than zones of low recharge rate. Interestingly, the travel time distributions of hillslope compartments remain almost unchanged with increasing recharge rates in the "dynamic" models. This robust "dynamic" model behavior suggests that flow path lengths and travel times in shallow
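    The paper's exact KPA formulation is not reproduced in this abstract; one common semi-analytical building block consistent with the description (an assumption here, not the authors' equations) is an advective-dispersive travel-time density per pathway, i.e. an inverse Gaussian parameterized by a mean travel time and an effective Peclet number, mixed over a topographically driven pathway-length distribution to represent morphological dispersion. A numerical sketch with hypothetical lengths and velocity:

```python
import numpy as np

def inv_gauss_pdf(t, t_mean, peclet):
    """Advective-dispersive travel-time density along one pathway
    (inverse Gaussian; Pe controls longitudinal dispersion)."""
    t = np.asarray(t, dtype=float)
    return np.sqrt(peclet * t_mean / (4 * np.pi * t**3)) * \
        np.exp(-peclet * (t - t_mean)**2 / (4 * t * t_mean))

def catchment_tt_pdf(t, lengths, velocity, peclet):
    """Mix single-pathway densities over a pathway-length distribution
    (morphological dispersion); equal weights for simplicity."""
    return np.mean([inv_gauss_pdf(t, L / velocity, peclet) for L in lengths],
                   axis=0)

# hypothetical numbers: 1000 topography-driven pathway lengths (m), v in m/day
rng = np.random.default_rng(1)
lengths = rng.lognormal(mean=6.0, sigma=0.8, size=1000)
t = np.linspace(0.01, 5000, 4000)
pdf = catchment_tt_pdf(t, lengths, velocity=1.0, peclet=10.0)
print(pdf.sum() * (t[1] - t[0]))   # total probability mass, ~1 on this grid
```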

  18. Development and Psychometric Evaluation of the School Bullying Scales: A Rasch Measurement Approach

    Science.gov (United States)

    Cheng, Ying-Yao; Chen, Li-Ming; Liu, Kun-Shia; Chen, Yi-Ling

    2011-01-01

    The study aims to develop three school bullying scales--the Bully Scale, the Victim Scale, and the Witness Scale--to assess secondary school students' bullying behaviors, including physical bullying, verbal bullying, relational bullying, and cyber bullying. The items of the three scales were developed from viewpoints of bullies, victims, and…
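    The Rasch measurement approach named in the title models the probability of endorsing an item as a logistic function of the difference between person ability and item difficulty. A minimal sketch, with entirely hypothetical item difficulties (the study's calibrated values are not given in this record):

```python
import math

def rasch_p(theta, delta):
    """Dichotomous Rasch model: probability that a person with ability
    theta endorses an item with difficulty delta (both in logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - delta)))

# hypothetical difficulties for four bullying-behavior items (logits)
items = {"verbal": -1.0, "relational": 0.0, "physical": 1.0, "cyber": 1.5}
theta = 0.5
for name, delta in items.items():
    print(f"{name:10s} P = {rasch_p(theta, delta):.3f}")
```

Fitting the model means estimating the theta and delta parameters from response data (e.g. by conditional maximum likelihood), which is what Rasch software does for scales like these.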

  19. Fractal and multifractal approaches for the analysis of crack-size dependent scaling laws in fatigue

    Energy Technology Data Exchange (ETDEWEB)

    Paggi, Marco [Politecnico di Torino, Department of Structural Engineering and Geotechnics, Corso Duca degli Abruzzi 24, 10129 Torino (Italy)], E-mail: marco.paggi@polito.it; Carpinteri, Alberto [Politecnico di Torino, Department of Structural Engineering and Geotechnics, Corso Duca degli Abruzzi 24, 10129 Torino (Italy)

    2009-05-15

    The enhanced ability to detect and measure very short cracks, along with a great interest in applying fracture mechanics formulae to smaller and smaller crack sizes, has pointed out the so-called anomalous behavior of short cracks with respect to their longer counterparts. The crack-size dependencies of both the fatigue threshold and the Paris constant C are only two notable examples of these anomalous scaling laws. In this framework, a unified theoretical model seems to be missing and the behavior of short cracks can still be considered an open problem. In this paper, we propose a critical reexamination of the fractal models for the analysis of crack-size effects in fatigue. The limitations of each model are highlighted and removed. A new generalized theory based on fractal geometry is then proposed, which permits a consistent interpretation of the short-crack-related anomalous scaling laws within a unified theoretical formulation. Finally, this approach is used to interpret relevant experimental data on the crack-size dependence of the fatigue threshold in metals.

  20. Fractal and multifractal approaches for the analysis of crack-size dependent scaling laws in fatigue

    International Nuclear Information System (INIS)

    Paggi, Marco; Carpinteri, Alberto

    2009-01-01

    The enhanced ability to detect and measure very short cracks, along with a great interest in applying fracture mechanics formulae to smaller and smaller crack sizes, has pointed out the so-called anomalous behavior of short cracks with respect to their longer counterparts. The crack-size dependencies of both the fatigue threshold and the Paris constant C are only two notable examples of these anomalous scaling laws. In this framework, a unified theoretical model seems to be missing and the behavior of short cracks can still be considered an open problem. In this paper, we propose a critical reexamination of the fractal models for the analysis of crack-size effects in fatigue. The limitations of each model are highlighted and removed. A new generalized theory based on fractal geometry is then proposed, which permits a consistent interpretation of the short-crack-related anomalous scaling laws within a unified theoretical formulation. Finally, this approach is used to interpret relevant experimental data on the crack-size dependence of the fatigue threshold in metals.

  1. A computational approach to modeling cellular-scale blood flow in complex geometry

    Science.gov (United States)

    Balogh, Peter; Bagchi, Prosenjit

    2017-04-01

    We present a computational methodology for modeling cellular-scale blood flow in arbitrary and highly complex geometry. Our approach is based on immersed-boundary methods, which allow modeling flows in arbitrary geometry while resolving the large deformation and dynamics of every blood cell with high fidelity. The present methodology seamlessly integrates different modeling components dealing with stationary rigid boundaries of complex shape, moving rigid bodies, and highly deformable interfaces governed by nonlinear elasticity. Thus it enables us to simulate 'whole' blood suspensions flowing through physiologically realistic microvascular networks that are characterized by multiple bifurcating and merging vessels, as well as geometrically complex lab-on-chip devices. The focus of the present work is on the development of a versatile numerical technique that is able to consider deformable cells and rigid bodies flowing in three-dimensional arbitrarily complex geometries over a diverse range of scenarios. After describing the methodology, a series of validation studies are presented against analytical theory, experimental data, and previous numerical results. Then, the capability of the methodology is demonstrated by simulating flows of deformable blood cells and heterogeneous cell suspensions in both physiologically realistic microvascular networks and geometrically intricate microfluidic devices. It is shown that the methodology can predict several complex microhemodynamic phenomena observed in vascular networks and microfluidic devices. The present methodology is robust and versatile, and has the potential to scale up to very large microvascular networks at organ levels.
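    The core mechanism of an immersed-boundary method is the two-way coupling between Lagrangian markers (cell membranes, rigid bodies) and the Eulerian fluid grid through a regularized delta function: forces are spread from markers to the grid, and grid velocities are interpolated back to the markers. A minimal 1D sketch using Peskin's standard 4-point delta function (the paper's 3D method is far more elaborate; this only illustrates the spreading/interpolation kernel):

```python
import numpy as np

def peskin_delta(r):
    """Peskin's 4-point regularized delta function (r in grid units)."""
    r = np.abs(r)
    out = np.zeros_like(r)
    m1 = r < 1
    out[m1] = (3 - 2 * r[m1] + np.sqrt(1 + 4 * r[m1] - 4 * r[m1]**2)) / 8
    m2 = (r >= 1) & (r < 2)
    out[m2] = (5 - 2 * r[m2] - np.sqrt(-7 + 12 * r[m2] - 4 * r[m2]**2)) / 8
    return out

h = 0.1                                  # grid spacing
x = np.arange(0, 1, h)                   # Eulerian grid
X = 0.53                                 # one Lagrangian marker position
w = peskin_delta((x - X) / h)            # spreading/interpolation weights

# spread a unit force F = 1 from the marker to the grid ...
f_grid = 1.0 * w / h
# ... and interpolate a grid velocity field back to the marker
u_grid = np.sin(2 * np.pi * x)
U_marker = np.sum(u_grid * w)
print(round(np.sum(f_grid) * h, 6), round(U_marker, 3))
```

The weights of the 4-point delta sum exactly to one, so the spread force is conserved on the grid, and the interpolated marker velocity is a smoothed sample of the grid field.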

  2. Multi-scale approach of plasticity mechanisms in irradiated austenitic steels

    International Nuclear Information System (INIS)

    Nogaret, Th.

    2007-12-01

    Plasticity in irradiated metals is characterized by the localization of deformation in defect-free clear bands formed by dislocation passage. We investigated clear band formation using a multi-scale approach. Molecular dynamics simulations show that screw dislocations mainly unfault and absorb the defects as helical turns, are strongly pinned by these helical turns, and are re-emitted into new glide planes when they unpin, whereas edge dislocations mainly shear the defects at moderate stresses and can drag the helical turns. The interaction mechanisms were implemented in a discrete dislocation dynamics code in order to study clear band formation at the micron scale. Since dislocations are emitted from grain boundaries, we consider a dislocation source located on a box border that emits dislocations when the nucleation stress is reached. Hardening was found to be mainly due to screw dislocations, which are strongly pinned by helical turns. Edge dislocations are less pinned and glide over long distances, leaving long screw dislocation segments. As more dislocations are emitted, screw dislocation pile-ups form, which permits the unpinning of the screw dislocations. They unpin by activating dislocation segments in new glide planes, which broadens the clear band. When the segments activate, they create edge parts that sweep the screw dislocation lines by dragging the super-jogs towards the box borders, where they accumulate, which clears the band. (author)

  3. Solving Large-Scale TSP Using a Fast Wedging Insertion Partitioning Approach

    Directory of Open Access Journals (Sweden)

    Zuoyong Xiang

    2015-01-01

    Full Text Available A new partitioning method, called Wedging Insertion, is proposed for solving the large-scale symmetric Traveling Salesman Problem (TSP). The idea of the proposed algorithm is to cut a TSP tour into four segments by the nodes' coordinates (not by rectangles, as in Strip, FRP, and Karp). Each node is located in exactly one segment, apart from four particular nodes, and no segment twists with another. After the partitioning process, the algorithm applies a traditional construction method, namely the insertion method, to each segment to improve the quality of the tour, and then connects the starting node and the ending node of each segment to obtain the complete tour. In order to test the performance of the proposed algorithm, we conduct experiments on various TSPLIB instances. The experimental results show that the proposed algorithm is more efficient for solving large-scale TSPs: it markedly reduces running time while losing only about 10% in tour quality.
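    A simplified sketch of the partition-then-insert idea (not the paper's exact wedging rule): split the nodes into four angular segments around the centroid by their coordinates, order each segment by cheapest insertion, then concatenate the segments into one tour. The quadrant-by-angle partition is an assumption standing in for the paper's wedge construction.

```python
import numpy as np

def insertion_path(pts, idx):
    """Order a segment's nodes by cheapest insertion into a growing path."""
    path = idx[:2] if len(idx) >= 2 else list(idx)
    for k in idx[2:]:
        best, best_pos = None, None
        for pos in range(len(path) + 1):
            trial = path[:pos] + [k] + path[pos:]
            cost = sum(np.linalg.norm(pts[trial[i]] - pts[trial[i + 1]])
                       for i in range(len(trial) - 1))
            if best is None or cost < best:
                best, best_pos = cost, pos
        path = path[:best_pos] + [k] + path[best_pos:]
    return path

def four_segment_tour(pts):
    """Partition nodes into four angular segments around the centroid,
    solve each segment by insertion, then concatenate into one tour."""
    c = pts.mean(axis=0)
    ang = np.arctan2(pts[:, 1] - c[1], pts[:, 0] - c[0])
    quad = ((ang + np.pi) / (np.pi / 2)).astype(int) % 4   # segments 0..3
    tour = []
    for q in range(4):
        seg = [i for i in range(len(pts)) if quad[i] == q]
        tour += insertion_path(pts, seg)
    return tour

rng = np.random.default_rng(2)
pts = rng.random((40, 2))
tour = four_segment_tour(pts)
print(sorted(tour) == list(range(40)))   # every node visited exactly once
```

Because each segment is solved independently, the insertion cost is paid on four small subproblems instead of one large one, which is the source of the speedup the abstract reports.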

  4. Using scale and feather traits for module construction provides a functional approach to chicken epidermal development.

    Science.gov (United States)

    Bao, Weier; Greenwold, Matthew J; Sawyer, Roger H

    2017-11-01

    Gene co-expression network analysis is a widely used method for systematically exploring gene function and interaction. Using the Weighted Gene Co-expression Network Analysis (WGCNA) approach to construct a gene co-expression network from a customized 44K microarray transcriptome of chicken epidermal embryogenesis, we have identified two distinct modules that are highly correlated with scale or feather development traits. Signaling pathways related to feather development were enriched in the traditional KEGG pathway analysis, and functional terms relating specifically to embryonic epidermal development were also enriched in the Gene Ontology analysis. Significant enrichment annotations were discovered with customized enrichment tools such as the Modular Single-Set Enrichment Test (MSET) and Medical Subject Headings (MeSH). Hub genes in both trait-correlated modules showed strong specific functional enrichment toward epidermal development. Regulatory elements, such as transcription factors and miRNAs, were also targeted in the significant enrichment results. This work highlights the advantage of this methodology for functional prediction of genes not previously associated with scale- and feather-trait-related modules.
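    WGCNA itself is an R package; the core construction it performs can be sketched in Python: soft-threshold the gene-gene correlation matrix into an adjacency a_ij = |cor(i,j)|^beta, then cut the network into modules. For brevity this sketch detects modules as connected components above an adjacency threshold rather than WGCNA's topological-overlap-based hierarchical clustering; the expression data and cut-offs are synthetic.

```python
import numpy as np

def modules(expr, beta=6, cut=0.2):
    """Soft-thresholded co-expression adjacency plus crude module detection.
    expr: samples x genes. Returns a list of gene-index modules."""
    A = np.abs(np.corrcoef(expr, rowvar=False)) ** beta   # a_ij = |cor|^beta
    np.fill_diagonal(A, 0)
    strong = A > cut
    seen, mods = set(), []
    for g in range(A.shape[1]):
        if g in seen:
            continue
        stack, comp = [g], set()
        while stack:                      # connected component = one module
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack += list(np.nonzero(strong[v])[0])
        seen |= comp
        mods.append(sorted(comp))
    return mods

rng = np.random.default_rng(3)
base1, base2 = rng.standard_normal((2, 50, 1))
expr = np.hstack([base1 + 0.3 * rng.standard_normal((50, 5)),   # module A
                  base2 + 0.3 * rng.standard_normal((50, 5))])  # module B
print([[int(g) for g in m] for m in modules(expr)])
```

The soft threshold beta=6 is WGCNA's common default for signed/unsigned networks; raising it sharpens the separation between strongly and weakly co-expressed gene pairs.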

  5. An approach for classification of hydrogeological systems at the regional scale based on groundwater hydrographs

    Science.gov (United States)

    Haaf, Ezra; Barthel, Roland

    2016-04-01

    When assessing hydrogeological conditions at the regional scale, the analyst is often confronted with uncertainty of structures, inputs and processes while having to base inference on scarce and patchy data. Haaf and Barthel (2015) proposed a concept for handling this predicament by developing a groundwater systems classification framework, where information is transferred from similar, but well-explored and better-understood systems to poorly described ones. The concept is based on the central hypothesis that similar systems react similarly to the same inputs, and vice versa. It is conceptually related to PUB (Prediction in Ungauged Basins), where organization of systems and processes by quantitative methods is intended and used to improve understanding and prediction. Furthermore, using the framework it is expected that regional conceptual and numerical models can be checked or enriched by ensemble-generated data from neighborhood-based estimators. In a first step, groundwater hydrographs from a large dataset in Southern Germany are compared in an effort to identify structural similarity in groundwater dynamics. A number of approaches to grouping hydrographs, mostly based on a similarity measure and previously used only in local-scale studies, can be found in the literature. These are tested alongside different global feature extraction techniques. The resulting classifications are then compared to a visual "expert assessment"-based classification which serves as a reference. A ranking of the classification methods is carried out and differences are shown. Selected groups from the classifications are related to geological descriptors. Here we present the most promising results from a comparison of classifications based on series correlation, different series distances and series features, such as the coefficients of the discrete Fourier transform and the intrinsic mode functions of empirical mode decomposition. Additionally, we show examples of classes
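    One of the compared feature sets, low-order discrete Fourier transform coefficients as global features, can be sketched as follows. The grouping step here is a plain k-means on the feature vectors, which is an assumption for illustration; the study compares several similarity measures and clustering schemes. The two synthetic hydrograph "regimes" are hypothetical.

```python
import numpy as np

def dft_features(series, n_coef=4):
    """Global features: magnitudes of the first DFT coefficients of the
    standardized hydrograph (the coarse shape of the dynamics)."""
    z = (series - series.mean()) / series.std()
    return np.abs(np.fft.rfft(z))[1:n_coef + 1]

def kmeans(F, k=2, iters=50, seed=0):
    """Tiny k-means on feature vectors (keeps old center if a cluster empties)."""
    rng = np.random.default_rng(seed)
    C = F[rng.choice(len(F), k, replace=False)]
    for _ in range(iters):
        lab = np.argmin(((F[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([F[lab == j].mean(0) if np.any(lab == j) else C[j]
                      for j in range(k)])
    return lab

t = np.arange(365)
rng = np.random.default_rng(4)
fast = [np.sin(2 * np.pi * t / 30) + 0.3 * rng.standard_normal(365)
        for _ in range(5)]                 # flashy, short-period dynamics
slow = [np.sin(2 * np.pi * t / 365) + 0.3 * rng.standard_normal(365)
        for _ in range(5)]                 # smooth seasonal dynamics
F = np.array([dft_features(h) for h in fast + slow])
labels = kmeans(F)
print(labels.tolist())                     # the two regimes separate
```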

  6. Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.0)

    Energy Technology Data Exchange (ETDEWEB)

    Hukerikar, Saurabh [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Engelmann, Christian [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-10-01

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Practical limits on power consumption in HPC systems will require future systems to embrace innovative architectures, increasing the levels of hardware and software complexities. The resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. These techniques must seek to improve resilience at reasonable overheads to power consumption and performance. While the HPC community has developed various solutions, application-level as well as system-based solutions, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance & power efficiency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software ecosystems, which are expected to be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience based on the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. The catalog of resilience design patterns provides designers with reusable design elements. We define a design framework that enhances our understanding of the important

  7. Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.1)

    Energy Technology Data Exchange (ETDEWEB)

    Hukerikar, Saurabh [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Engelmann, Christian [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-12-01

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Therefore the resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. Also, due to practical limits on power consumption in HPC systems, future systems are likely to embrace innovative architectures, increasing the levels of hardware and software complexities. As a result the techniques that seek to improve resilience must navigate the complex trade-off space between resilience and the overheads to power consumption and performance. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance & power efficiency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience using the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems.
Each established solution is described in the form of a pattern that

  8. Fixed-Point Configurable Hardware Components

    Directory of Open Access Journals (Sweden)

    Rocher Romuald

    2006-01-01

    Full Text Available To reduce the gap between VLSI technology capability and designer productivity, design reuse based on IP (intellectual property) blocks is commonly used. In terms of arithmetic accuracy, the generated architecture can generally only be configured through the input and output word lengths. In this paper, a new method to optimize fixed-point arithmetic IP is proposed. The architecture cost is minimized under accuracy constraints defined by the user. Our approach explores the fixed-point search space and the algorithm-level search space to select an optimized structure and fixed-point specification. To significantly reduce the optimization and design times, analytical models are used for the fixed-point optimization process.
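    The trade-off such an optimization explores can be sketched numerically: each dropped fractional bit roughly halves datapath cost but adds quantization noise, so the optimizer looks for the smallest word length that still meets the user's accuracy constraint. A minimal simulation-based sketch (the paper uses analytical noise models instead of measurement; the 60 dB SNR target is a hypothetical constraint):

```python
import numpy as np

def quantize(x, frac_bits):
    """Round to a fixed-point grid with the given number of fractional bits."""
    q = 2.0 ** -frac_bits
    return np.round(x / q) * q

def min_frac_bits(signal, snr_db_target, max_bits=24):
    """Smallest fractional word length whose measured SNR meets the constraint."""
    for b in range(1, max_bits + 1):
        err = signal - quantize(signal, b)
        snr = 10 * np.log10(np.mean(signal**2) / np.mean(err**2))
        if snr >= snr_db_target:
            return b, snr
    return max_bits, snr

rng = np.random.default_rng(5)
x = rng.uniform(-1, 1, 10000)              # test stimulus in [-1, 1)
bits, snr = min_frac_bits(x, snr_db_target=60.0)
print(bits, round(snr, 1))                  # ~6 dB gained per extra bit
```

For a full-scale uniform signal the SNR grows by about 6.02 dB per fractional bit, which is why the search terminates quickly.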

  9. Characterizing Synergistic Water and Energy Efficiency at the Residential Scale Using a Cost Abatement Curve Approach

    Science.gov (United States)

    Stillwell, A. S.; Chini, C. M.; Schreiber, K. L.; Barker, Z. A.

    2015-12-01

    Energy and water are two increasingly correlated resources. Electricity generation at thermoelectric power plants requires cooling, such that large water withdrawal and consumption rates are associated with electricity consumption. Drinking water and wastewater treatment require significant electricity inputs to clean, disinfect, and pump water. Due to this energy-water nexus, energy efficiency measures might be a cost-effective approach to reducing water use, and water efficiency measures might support energy savings as well. This research characterizes the cost-effectiveness of different efficiency approaches in households by quantifying the direct and indirect water and energy savings that could be realized through efficiency measures, such as low-flow fixtures, energy- and water-efficient appliances, distributed generation, and solar water heating. Potential energy and water savings from these efficiency measures were analyzed in a product-lifetime-adjusted economic model comparing efficiency measures to conventional counterparts. Results were displayed as cost abatement curves indicating the most economical measures to implement for a target reduction in water and/or energy consumption. These cost abatement curves are useful in supporting market innovation and investment in residential-scale efficiency.
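    Mechanically, a cost abatement curve orders measures by cost per unit of resource saved and accumulates the savings, so the cheapest route to a given target can be read off directly. A toy sketch with entirely hypothetical measures and numbers:

```python
# Hypothetical residential measures: (name, annualized cost $/yr, water saved m^3/yr)
measures = [
    ("low-flow showerhead",   15.0, 20.0),
    ("efficient dishwasher",  60.0, 10.0),
    ("dual-flush toilet",     40.0, 25.0),
    ("solar water heater",   180.0, 12.0),  # indirect saving via the energy-water nexus
]

# Abatement curve: order measures by $ per m^3 saved, then accumulate savings
curve = sorted(measures, key=lambda m: m[1] / m[2])
total = 0.0
for name, cost, saved in curve:
    total += saved
    print(f"{name:22s} {cost/saved:6.2f} $/m^3   cumulative {total:5.1f} m^3/yr")
```

Reading the curve left to right gives the least-cost portfolio of measures for any target reduction; the study builds the same construction with product-lifetime-adjusted costs and both water and energy savings.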

  10. Fixed points of quantum gravity in extra dimensions

    International Nuclear Information System (INIS)

    Fischer, Peter; Litim, Daniel F.

    2006-01-01

    We study quantum gravity in more than four dimensions with renormalisation group methods. We find a non-trivial ultraviolet fixed point in the Einstein-Hilbert action. The fixed point connects with the perturbative infrared domain through finite renormalisation group trajectories. We show that our results for fixed points and related scaling exponents are stable. If this picture persists at higher order, quantum gravity in the metric field is asymptotically safe. We discuss signatures of the gravitational fixed point in models with low scale quantum gravity and compact extra dimensions
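    The two computational ingredients mentioned, locating a non-trivial fixed point of a renormalisation group flow and extracting its scaling exponent, can be illustrated on a toy beta function. The function below is NOT the Einstein-Hilbert beta function of the paper; it is a deliberately simple stand-in with the same qualitative shape (a Gaussian fixed point at g = 0 and a non-trivial zero at finite coupling):

```python
# Toy beta function: Gaussian fixed point at g = 0, non-trivial one at g* = 2/3.
beta = lambda g: 2.0 * g - 3.0 * g**2

# locate the non-trivial fixed point by bisection on [0.1, 2]
lo, hi = 0.1, 2.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if beta(mid) > 0 else (lo, mid)
g_star = 0.5 * (lo + hi)

# critical (scaling) exponent: theta = -beta'(g*), by central differences
h = 1e-6
theta = -(beta(g_star + h) - beta(g_star - h)) / (2 * h)
print(round(g_star, 4), round(theta, 4))   # prints 0.6667 2.0
```

In the actual computation the flow lives in a multi-dimensional coupling space and the exponents are eigenvalues of the stability matrix at the fixed point, but the numerical recipe (root finding plus linearisation) is the same.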

  11. Vilified and Fixed

    DEFF Research Database (Denmark)

    Jensen, Thessa; Westberg, Lysa

    …and tween website", 'Teen' managed to outrage fans. It took days and hundreds of comments, tweets, and mails to the publishers before the article was taken down. Vilification in scholarly works and the media may have significantly lessened in recent years. Still, misunderstandings, applied exoticism, and imbalances of power between scholars and journalists on one side, and fans on the other, are not rare occurrences. An analysis of a number of recent news articles, scholarly works, and websites shows how the attempt of fixing fandom still prevails. Like Said's view on how the Orient is treated, fandom is similarly exotisised, incorporated, and fixed. Scholars explain how to become better fans, attempting authority over fandom by applying rules to a culture which already has its own. This, the notion of the 'better fan', devalues the existing discourses, rules, and traditions within fandom. The expert…

  12. Image subsampling and point scoring approaches for large-scale marine benthic monitoring programs

    Science.gov (United States)

    Perkins, Nicholas R.; Foster, Scott D.; Hill, Nicole A.; Barrett, Neville S.

    2016-07-01

    Benthic imagery is an effective tool for quantitative description of ecologically and economically important benthic habitats and biota. The recent development of autonomous underwater vehicles (AUVs) allows surveying at spatial scales that were previously unfeasible. However, an AUV collects a large number of images, the scoring of which is time- and labour-intensive. There is a need to optimise the way that subsamples of imagery are chosen and scored to gain meaningful inferences for ecological monitoring studies. We examine the trade-off between the number of images selected within transects and the number of random points scored within images, in terms of the estimated percent cover of target biota, the typical output of such monitoring programs. We also investigate the effect of various image selection approaches, such as systematic or random selection, on the bias and precision of cover estimates. We use simulated biotas that vary in size, abundance and distributional pattern. We find that a relatively small sampling effort is required to minimise bias. Increased precision for groups that are likely to be the focus of monitoring programs is best gained by increasing the number of images sampled rather than the number of points scored within images. For rare species, sampling using point-count approaches is unlikely to provide sufficient precision, and alternative sampling approaches may need to be employed. The approach by which images are selected (simple random sampling, regularly spaced, etc.) had no discernible effect on mean and variance estimates, regardless of the distributional pattern of the biota. Field validation of our findings is provided through a Monte Carlo resampling analysis of a previously scored benthic survey from temperate waters. We show that point-count sampling approaches are capable of providing relatively precise cover estimates for candidate groups that are not overly rare. The amount of sampling required, in terms of both the number of images and
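    The images-versus-points trade-off can be reproduced with a small Monte Carlo sketch. Between-image variation in true cover stands in for patchy biota; all parameter values are hypothetical, not taken from the study's simulations:

```python
import numpy as np

def cover_se(n_images, n_points, p_mean=0.2, patchiness=0.05,
             reps=2000, seed=6):
    """Monte Carlo standard error of estimated percent cover when scoring
    n_points random points in each of n_images images. True cover varies
    image to image (clipped normal) to model patchy biota."""
    rng = np.random.default_rng(seed)
    p_img = np.clip(rng.normal(p_mean, patchiness, (reps, n_images)), 0, 1)
    hits = rng.binomial(n_points, p_img)          # Bernoulli point scoring
    est = hits.sum(axis=1) / (n_images * n_points)
    return est.std()

# same total effort (500 scored points), split differently:
print(round(cover_se(n_images=50, n_points=10), 4))  # more images, fewer points
print(round(cover_se(n_images=10, n_points=50), 4))  # fewer images, more points
```

Because between-image variance shrinks only with the number of images, spreading a fixed point budget over more images yields the smaller standard error, matching the abstract's conclusion.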

  13. Comparison of Remote Sensing and Fixed-Site Monitoring Approaches for Examining Air Pollution and Health in a National Study Population

    Science.gov (United States)

    Prud'homme, Genevieve; Dobbin, Nina A.; Sun, Liu; Burnet, Richard T.; Martin, Randall V.; Davidson, Andrew; Cakmak, Sabit; Villeneuve, Paul J.; Lamsal, Lok N.; vanDonkelaar, Aaron

    2013-01-01

    Satellite remote sensing (RS) has emerged as a cutting-edge approach for estimating ground-level ambient air pollution. Previous studies have reported a high correlation between ground-level PM2.5 and NO2 estimated by RS and measurements collected at regulatory monitoring sites. The current study examined associations between air pollution and adverse respiratory and allergic health outcomes using multi-year averages of NO2 and PM2.5 from RS and from regulatory monitoring. RS estimates were derived using satellite measurements from the OMI, MODIS, and MISR instruments. Regulatory monitoring data were obtained from Canada's National Air Pollution Surveillance Network. Self-reported prevalence of doctor-diagnosed asthma, current asthma, allergies, and chronic bronchitis was obtained from the Canadian Community Health Survey (a national sample of individuals 12 years of age and older). Multi-year ambient pollutant averages were assigned to each study participant based on their six-digit postal code at the time of the health survey, and were used as a marker for long-term exposure to air pollution. RS-derived estimates of NO2 and PM2.5 were associated with 6-10% increases in respiratory and allergic health outcomes per interquartile range (3.97 µg m-3 for PM2.5 and 1.03 ppb for NO2) among adults (aged 20-64) in the national study population. Risk estimates for air pollution and respiratory/allergic health outcomes based on RS were similar to risk estimates based on regulatory monitoring for areas where regulatory monitoring data were available (within 40 km of a regulatory monitoring station). RS-derived estimates of air pollution were also associated with adverse health outcomes among participants residing outside the catchment area of the regulatory monitoring network (p < 0.05).

  14. CERN: Fixed target targets

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1993-03-15

    Full text: While the immediate priority of CERN's research programme is to exploit to the full the world's largest accelerator, the LEP electron-positron collider and its concomitant LEP200 energy upgrade (January, page 1), CERN is also mindful of its long tradition of diversified research. Away from LEP and preparations for the LHC proton-proton collider to be built above LEP in the same 27-kilometre tunnel, CERN is also preparing for a new generation of heavy ion experiments using a new source, providing heavier ions (April 1992, page 8), with first physics expected next year. CERN's smallest accelerator, the LEAR Low Energy Antiproton Ring continues to cover a wide range of research topics, and saw a record number of hours of operation in 1992. The new ISOLDE on-line isotope separator was inaugurated last year (July, page 5) and physics is already underway. The remaining effort concentrates around fixed target experiments at the SPS synchrotron, which formed the main thrust of CERN's research during the late 1970s. With the SPS and LEAR now approaching middle age, their research future was extensively studied last year. Broadly, a vigorous SPS programme looks assured until at least the end of 1995. Decisions for the longer term future of the West Experimental Area of the SPS will have to take into account the heavy demand for test beams from work towards experiments at big colliders, both at CERN and elsewhere. The North Experimental Area is the scene of larger experiments with longer lead times. Several more years of LEAR exploitation are already in the pipeline, but for the longer term, the ambitious Superlear project for a superconducting ring (January 1992, page 7) did not catch on. Neutrino physics has a long tradition at CERN, and this continues with the preparations for two major projects, the Chorus and Nomad experiments (November 1991, page 7), to start next year in the West Area. 
Delicate neutrino oscillation effects could become visible for the first

  15. INTEGRATED IMAGING APPROACHES SUPPORTING THE EXCAVATION ACTIVITIES. MULTI-SCALE GEOSPATIAL DOCUMENTATION IN HIERAPOLIS (TK

    Directory of Open Access Journals (Sweden)

    A. Spanò

    2018-05-01

    Full Text Available The paper focuses on exploring the suitability and applicability of advanced integrated surveying techniques, mainly image-based approaches compared with and integrated into range-based ones, developed using cutting-edge solutions tested in the field. The investigated techniques integrate both technological devices for 3D data acquisition and the editing and management systems used to handle metric models and multi-dimensional data in a geospatial perspective, in order to innovate and speed up the extraction of information during archaeological excavation activities. These factors were tested at the outstanding site of the ancient city of Hierapolis of Phrygia (Turkey), following the 2017 surveying missions, in order to produce high-scale metric deliverables: high-detail Digital Surface Models (DSMs), 3D continuous surface models and high-resolution orthoimage products. In particular, the potential of UAV platforms for low-altitude acquisition in an aerial photogrammetric approach, together with terrestrial panoramic acquisition (Trimble V10 imaging rover), was investigated and compared with consolidated Terrestrial Laser Scanning (TLS) measurements. One of the main purposes of the paper is to evaluate the results offered by the technologies used independently and in integrated approaches. A section of the study is in fact specifically dedicated to experimenting with the union of dense clouds from different sensors: dense clouds derived from UAV imagery have been integrated with terrestrial LiDAR clouds to evaluate their fusion. Different test cases have been considered, representing typical situations that can be encountered in archaeological sites.

  16. A Self-Organizing Spatial Clustering Approach to Support Large-Scale Network RTK Systems

    Directory of Open Access Journals (Sweden)

    Lili Shen

    2018-06-01

    Full Text Available The network real-time kinematic (RTK) technique can provide centimeter-level real-time positioning solutions and plays a key role in geo-spatial infrastructure. With ever-increasing popularity, network RTK systems will face issues in supporting large numbers of concurrent users. In the past, high-precision positioning services were oriented towards professionals and only supported a few concurrent users. Currently, precise positioning provides a spatial foundation for artificial intelligence (AI), and countless smart devices (autonomous cars, unmanned aerial vehicles (UAVs), robotic equipment, etc.) require precise positioning services. Therefore, the development of approaches to support large-scale network RTK systems is urgent. In this study, we propose a self-organizing spatial clustering (SOSC) approach which automatically clusters online users to reduce the computational load on the network RTK system server side. The experimental results indicate that both the SOSC algorithm and the grid algorithm can reduce the computational load efficiently, while the SOSC algorithm gives a more elastic and adaptive clustering solution across different datasets. The SOSC algorithm determines the cluster number and the mean distance to cluster center (MDTCC) according to the dataset, while the grid approaches are all predefined. The side effects of the clustering algorithms on the user side are analyzed with real global navigation satellite system (GNSS) datasets. The experimental results indicate that 10 km can be safely used as the cluster radius threshold for the SOSC algorithm without significantly reducing the positioning precision and reliability on the user side.
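    A minimal sketch of radius-threshold spatial clustering of user positions, using the 10 km threshold reported in the abstract. This is a simplified greedy rule, not the paper's SOSC algorithm: centers here stay fixed once opened, whereas SOSC adapts the cluster number and centers to the data. The coordinates are hypothetical.

```python
import math

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2 +
         math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def radius_cluster(users, radius_km=10.0):
    """Assign each user to the first cluster whose center lies within
    radius_km; otherwise open a new cluster at the user's position."""
    centers, labels = [], []
    for u in users:
        for i, c in enumerate(centers):
            if haversine_km(u, c) <= radius_km:
                labels.append(i)
                break
        else:
            centers.append(u)
            labels.append(len(centers) - 1)
    return centers, labels

users = [(57.70, 11.97), (57.71, 11.99), (57.75, 12.00),   # Gothenburg area
         (59.33, 18.07), (59.34, 18.05)]                   # Stockholm area
centers, labels = radius_cluster(users)
print(len(centers), labels)   # prints 2 [0, 0, 0, 1, 1]
```

On the server side, each cluster would then receive one shared set of network RTK corrections instead of a per-user computation, which is the source of the load reduction.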

  17. Modelling an industrial anaerobic granular reactor using a multi-scale approach.

    Science.gov (United States)

    Feldman, H; Flores-Alsina, X; Ramin, P; Kjellberg, K; Jeppsson, U; Batstone, D J; Gernaey, K V

    2017-12-01

    The objective of this paper is to show the results of an industrial project dealing with modelling of anaerobic digesters. A multi-scale mathematical approach is developed to describe reactor hydrodynamics, granule growth/distribution and microbial competition/inhibition for substrate/space within the biofilm. The main biochemical and physico-chemical processes in the model are based on the Anaerobic Digestion Model No. 1 (ADM1) extended with the fate of phosphorus (P), sulfur (S) and ethanol (Et-OH). Wastewater dynamic conditions are reproduced and data frequency increased using the Benchmark Simulation Model No. 2 (BSM2) influent generator. All models are tested using two plant data sets corresponding to different operational periods (#D1, #D2). Simulation results reveal that the proposed approach can satisfactorily describe the transformation of organics, nutrients and minerals, the production of methane, carbon dioxide and sulfide and the potential formation of precipitates within the bulk (average deviation between computer simulations and measurements for both #D1 and #D2 is around 10%). Model predictions suggest a stratified structure within the granule which is the result of: 1) applied loading rates, 2) mass transfer limitations and 3) specific (bacterial) affinity for substrate. Hence, inerts (X_I) and methanogens (X_ac) are situated in the inner zone, and this fraction lowers as the radius increases, favouring the presence of acidogens (X_su, X_aa, X_fa) and acetogens (X_c4, X_pro). Additional simulations show the effects on the overall process performance when operational (pH) and loading (S:COD) conditions are modified. Lastly, the effect of intra-granular precipitation on the overall organic/inorganic distribution is assessed at: 1) different times; and, 2) reactor heights. Finally, the possibilities and opportunities offered by the proposed approach for conducting engineering optimization projects are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Fixed-Target Electron Accelerators

    International Nuclear Information System (INIS)

    Brooks, William K.

    2001-01-01

    A tremendous amount of scientific insight has been garnered over the past half-century by using particle accelerators to study physical systems of sub-atomic dimensions. These giant instruments begin with particles at rest, then greatly increase their energy of motion, forming a narrow trajectory or beam of particles. In fixed-target accelerators, the particle beam impacts upon a stationary sample or target which contains or produces the sub-atomic system being studied. This is in distinction to colliders, where two beams are produced and are steered into each other so that their constituent particles can collide. The acceleration process always relies on the particle being accelerated having an electric charge; however, both the details of producing the beam and the classes of scientific investigations possible vary widely with the specific type of particle being accelerated. This article discusses fixed-target accelerators which produce beams of electrons, the lightest charged particle. As detailed in the report, the beam energy has a close connection with the size of the physical system studied. Here a useful unit of energy is a GeV, i.e., a giga electron-volt. (One GeV, the energy an electron would have if accelerated through a billion volts, is equal to 1.6 × 10⁻¹⁰ joules.) To study systems on a distance scale much smaller than an atomic nucleus requires beam energies ranging from a few GeV up to hundreds of GeV and more
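The parenthetical unit conversion can be checked directly from E = qV using the elementary charge:

```python
ELEMENTARY_CHARGE = 1.602176634e-19  # coulombs (exact by the 2019 SI definition)

def electron_energy_joules(volts):
    """Kinetic energy gained by a single electron accelerated through
    a potential difference of `volts` volts: E = q * V."""
    return ELEMENTARY_CHARGE * volts

# one GeV: an electron accelerated through a billion volts -> about 1.6e-10 J
one_gev = electron_energy_joules(1.0e9)
```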

  19. Comparison of remote sensing and fixed-site monitoring approaches for examining air pollution and health in a national study population

    Science.gov (United States)

    Prud'homme, Genevieve; Dobbin, Nina A.; Sun, Liu; Burnett, Richard T.; Martin, Randall V.; Davidson, Andrew; Cakmak, Sabit; Villeneuve, Paul J.; Lamsal, Lok N.; van Donkelaar, Aaron; Peters, Paul A.; Johnson, Markey

    2013-12-01

    Satellite remote sensing (RS) has emerged as a cutting-edge approach for estimating ground-level ambient air pollution. Previous studies have reported a high correlation between ground-level PM2.5 and NO2 estimated by RS and measurements collected at regulatory monitoring sites. The current study examined associations between air pollution and adverse respiratory and allergic health outcomes using multi-year averages of NO2 and PM2.5 from RS and from regulatory monitoring. RS estimates were derived using satellite measurements from the OMI, MODIS, and MISR instruments. Regulatory monitoring data were obtained from Canada's National Air Pollution Surveillance Network. Self-reported prevalence of doctor-diagnosed asthma, current asthma, allergies, and chronic bronchitis were obtained from the Canadian Community Health Survey (a national sample of individuals 12 years of age and older). Multi-year ambient pollutant averages were assigned to each study participant based on their six-digit postal code at the time of the health survey, and were used as a marker for long-term exposure to air pollution. RS-derived estimates of NO2 and PM2.5 were associated with 6-10% increases in respiratory and allergic health outcomes per interquartile range (3.97 μg m⁻³ for PM2.5 and 1.03 ppb for NO2) among adults (aged 20-64) in the national study population. Risk estimates for air pollution and respiratory/allergic health outcomes based on RS were similar to risk estimates based on regulatory monitoring for areas where regulatory monitoring data were available (within 40 km of a regulatory monitoring station). RS-derived estimates of air pollution were also associated with adverse health outcomes among participants residing outside the catchment area of the regulatory monitoring network. These associations among participants living outside the catchment area for regulatory monitoring suggest that RS can provide useful estimates of long-term ambient air pollution in epidemiologic studies. This is

  20. Prediction and verification of centrifugal dewatering of P. pastoris fermentation cultures using an ultra scale-down approach.

    Science.gov (United States)

    Lopes, A G; Keshavarz-Moore, E

    2012-08-01

    Recent years have seen a dramatic rise in fermentation broth cell densities and a shift to extracellular product expression in microbial cells. As a result, the dewatering characteristics during cell separation are of importance, as any liquor trapped in the sediment results in loss of product, and thus a decrease in product recovery. In this study, an ultra scale-down (USD) approach was developed to enable the rapid assessment of the dewatering performance of pilot-scale centrifuges with intermittent solids discharge. The results were then verified at scale for two types of pilot-scale centrifuge: a tubular bowl machine and a disk-stack centrifuge. Initial experiments showed that employing a laboratory-scale centrifugal mimic based on a feed concentration comparable to that of the pilot-scale centrifuge does not successfully predict the dewatering performance at scale. Initial experiments used Baker's yeast feed suspensions followed by fresh Pichia pastoris fermentation cultures. This work presents a simple and novel USD approach to predict dewatering levels in two types of pilot-scale centrifuge using small quantities of feedstock, before the pilot-scale centrifuge needs to be operated, reducing the need for repeated pilot-scale runs during early stages of process development. Copyright © 2012 Wiley Periodicals, Inc.
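A common basis for such ultra scale-down comparisons (a standard scaling rule, not necessarily the exact protocol of this paper) is Sigma theory: lab and pilot centrifuges are compared at equal flow rate per equivalent settling area, Q/Σ. A minimal sketch with hypothetical numbers:

```python
def matched_flow(q_known_l_h, sigma_known_m2, sigma_target_m2):
    """Sigma-equivalence scaling: keep Q/Sigma (flow rate per equivalent
    settling area) constant between two centrifuges, so
    Q_target = Q_known * Sigma_target / Sigma_known."""
    return q_known_l_h * sigma_target_m2 / sigma_known_m2

# hypothetical: a pilot disk-stack centrifuge (Sigma = 1000 m^2) run at
# 300 L/h is mimicked in a lab unit with Sigma = 0.5 m^2
q_lab = matched_flow(300.0, 1000.0, 0.5)  # flow rate [L/h] for the lab unit
```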

  1. Canopy structure and topography effects on snow distribution at a catchment scale: Application of multivariate approaches

    Directory of Open Access Journals (Sweden)

    Jenicek Michal

    2018-03-01

    Full Text Available The knowledge of snowpack distribution at a catchment scale is important to predict the snowmelt runoff. The objective of this study is to select and quantify the most important factors governing the snowpack distribution, with special interest in the role of different canopy structures. We applied a simple distributed sampling design with measurement of snow depth and snow water equivalent (SWE) at a catchment scale. We selected eleven predictors related to the character of specific localities (such as elevation, slope orientation and leaf area index) and to winter meteorological conditions (such as irradiance, sum of positive air temperatures and sum of new snow depth). The forest canopy structure was described using parameters calculated from hemispherical photographs. A degree-day approach was used to calculate melt factors. Principal component analysis, cluster analysis and Spearman rank correlation were applied to reduce the number of predictors and to analyze the measured data. The SWE at forest sites was 40% lower than in open areas, but this value depended on the canopy structure. Snow ablation in large openings was on average almost twice as fast as at forest sites. Snow ablation in the forest was 18% faster after forest defoliation (due to the bark beetle). The results from the multivariate analyses showed that the leaf area index was a better predictor of the SWE distribution during the accumulation period, while irradiance was a better predictor during the snowmelt period. Despite some uncertainty, parameters derived from hemispherical photographs may replace measured incoming solar radiation if this meteorological variable is not available.
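The degree-day approach mentioned for calculating melt factors reduces to a one-line model; the temperatures and melt factors below are invented for illustration only:

```python
def degree_day_melt(daily_mean_temps_c, melt_factor):
    """Degree-day snowmelt model: melt [mm w.e.] equals the melt factor
    [mm per degree C per day] times the sum of positive daily mean
    air temperatures over the period."""
    positive_degree_days = sum(t for t in daily_mean_temps_c if t > 0)
    return melt_factor * positive_degree_days

# hypothetical melt factors: canopy shading halves the open-area value,
# consistent with the roughly two-times-faster ablation in large openings
temps = [2.0, 5.0, -1.0, 3.0]                          # daily means [deg C]
open_melt = degree_day_melt(temps, melt_factor=4.0)    # 40.0 mm
forest_melt = degree_day_melt(temps, melt_factor=2.0)  # 20.0 mm
```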

  2. Large scale debris-flow hazard assessment: a geotechnical approach and GIS modelling

    Directory of Open Access Journals (Sweden)

    G. Delmonaco

    2003-01-01

    Full Text Available A deterministic distributed model has been developed for large-scale debris-flow hazard analysis in the basin of the River Vezza (Tuscany Region, Italy). This area (51.6 km²) was affected by over 250 landslides. These were classified as debris/earth flows, mainly involving the metamorphic geological formations outcropping in the area, triggered by the pluviometric event of 19 June 1996. In recent decades landslide hazard and risk analysis has been favoured by the development of GIS techniques permitting the generalisation, synthesis and modelling of stability conditions in large-scale investigations (>1:10 000). In this work, the main results derived from the application of a geotechnical model coupled with a hydrological model for debris-flow hazard assessment are reported. The analysis was developed through the following steps: a landslide inventory map derived from aerial photo interpretation and direct field survey; generation of a database and digital maps; elaboration of a DTM and derived themes (i.e. a slope angle map); definition of a superficial soil thickness map; geotechnical soil characterisation through a back-analysis on test slopes and laboratory tests; inference of the influence of precipitation, for distinct return periods, on ponding time and pore pressure generation; implementation of a slope stability model (infinite slope model); and generalisation of the safety factor for estimated rainfall events with different return periods. This approach has allowed the identification of potential source areas of debris-flow triggering for precipitation events with estimated return periods of 10, 50, 75 and 100 years. The model shows a dramatic decrease of safety conditions in the simulation related to a 75-year return period rainfall event. This corresponds to an estimated cumulated daily intensity of 280–330 mm. This value can be considered the hydrological triggering
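The infinite slope model used here has a standard closed-form factor of safety; the sketch below implements that textbook formula with hypothetical soil parameters (not values from the study), showing how rising pore pressure drives the safety factor below unity:

```python
import math

def infinite_slope_fs(c_kpa, gamma_knm3, z_m, beta_deg, phi_deg, u_kpa):
    """Factor of safety for the infinite slope model:
    FS = [c' + (gamma*z*cos^2(beta) - u) * tan(phi')] /
         [gamma*z*sin(beta)*cos(beta)]
    c' cohesion [kPa], gamma unit weight [kN/m^3], z soil depth [m],
    beta slope angle [deg], phi' friction angle [deg], u pore pressure [kPa]."""
    b, phi = math.radians(beta_deg), math.radians(phi_deg)
    resisting = c_kpa + (gamma_knm3 * z_m * math.cos(b) ** 2 - u_kpa) * math.tan(phi)
    driving = gamma_knm3 * z_m * math.sin(b) * math.cos(b)
    return resisting / driving

# hypothetical parameters: a rainfall event raises pore pressure from 0 to 10 kPa
fs_dry = infinite_slope_fs(5.0, 19.0, 1.5, 30.0, 32.0, 0.0)   # stable (> 1)
fs_wet = infinite_slope_fs(5.0, 19.0, 1.5, 30.0, 32.0, 10.0)  # unstable (< 1)
```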

  3. A New Approach to Adaptive Control of Multiple Scales in Plasma Simulations

    Science.gov (United States)

    Omelchenko, Yuri

    2007-04-01

    A new approach to temporal refinement of kinetic (Particle-in-Cell, Vlasov) and fluid (MHD, two-fluid) simulations of plasmas is presented: Discrete-Event Simulation (DES). DES adaptively distributes CPU resources in accordance with local time scales and enables asynchronous integration of inhomogeneous nonlinear systems with multiple time scales on meshes of arbitrary topologies. This removes computational penalties usually incurred in explicit codes due to the global Courant–Friedrichs–Lewy (CFL) restriction on a time-step size. DES stands apart from multiple time-stepping algorithms in that it requires neither selecting a global synchronization time step nor pre-determining a sequence of time-integration operations for individual parts of the system (local time increments need not bear any integer multiple relations). Instead, elements of a mesh-distributed solution self-adaptively predict and synchronize their temporal trajectories by directly enforcing local causality (accuracy) constraints, which are formulated in terms of incremental changes to the evolving solution. Together with flux-conservative propagation of information, this new paradigm ensures stable and fast asynchronous runs, where idle computation is automatically eliminated. DES is parallelized via a novel Preemptive Event Processing (PEP) technique, which automatically synchronizes elements with similar update rates. In this mode, events with close execution times are projected onto time levels, which are adaptively determined by the program. PEP allows reuse of standard message-passing algorithms on distributed architectures. For optimum accuracy, DES can be combined with adaptive mesh refinement (AMR) techniques for structured and unstructured meshes. Current examples of event-driven models range from electrostatic, hybrid particle-in-cell plasma systems to reactive fluid dynamics simulations. They demonstrate the superior performance of DES in terms of accuracy, speed and robustness.
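The contrast with a global CFL-limited step can be illustrated with a toy event queue in which each cell advances at its own local time step; this is a schematic of the idea only, not the DES implementation described in the abstract:

```python
import heapq

def des_run(cells, rates, t_end):
    """Asynchronous update loop sketch: each cell advances with its own
    local time step (1/rate), and the priority queue always executes the
    earliest pending event, so a fast cell never forces the slow cells
    onto its small, globally synchronized time step."""
    queue = [(1.0 / rates[c], c) for c in cells]
    heapq.heapify(queue)
    updates = {c: 0 for c in cells}
    while queue:
        t, c = heapq.heappop(queue)
        if t > t_end:
            break  # all remaining events lie beyond the simulated interval
        updates[c] += 1
        heapq.heappush(queue, (t + 1.0 / rates[c], c))
    return updates

# a fast cell (rate 8/s) is updated 8x more often than a slow one (rate 1/s)
counts = des_run(["fast", "slow"], {"fast": 8.0, "slow": 1.0}, t_end=4.0)
```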

  4. A new approach to inventorying bodies of water, from local to global scale

    Directory of Open Access Journals (Sweden)

    Bartout, Pascal

    2015-12-01

    Full Text Available Having reliable estimates of the number of water bodies on different geographical scales is of great importance to better understand biogeochemical cycles and to tackle the social issues related to the economic and cultural use of water bodies. However, limnological research suffers from a lack of reliable inventories; the available scientific references are predominantly based on water bodies of natural origin, large in size and preferentially located in previously glaciated areas. Artificial, small and randomly distributed water bodies, especially ponds, are usually not inventoried. Following Wetzel’s theory (1990), some authors included them in global inventories by using remote sensing or mathematical extrapolation, but fieldwork on the ground has been done on a very limited amount of territory. These studies have resulted in an explosive increase in the estimated number of water bodies, going from 8.44 million lakes (Meybeck 1995) to 3.5 billion water bodies (Downing 2010). These numbers raise several questions, especially about the methodology used for counting small-sized water bodies and the methodological treatment of spatial variables. In this study, we use inventories of water bodies for Sweden, Finland, Estonia and France to show incoherencies generated by the “global to local” approach. We demonstrate that one universal relationship does not suffice for generating regional or global inventories of water bodies because local conditions vary greatly from one region to another and cannot be offset adequately by each other. The current paradigm for global estimates of water bodies in limnology, which is based on one representative model applied to different territories, does not produce sufficiently exact global inventories. The step-wise progression from the local to the global scale requires the development of many regional equations based on fieldwork; a specific equation that adequately reflects the actual relationship

  5. Resilience Design Patterns: A Structured Approach to Resilience at Extreme Scale

    International Nuclear Information System (INIS)

    Engelmann, Christian; Hukerikar, Saurabh

    2017-01-01

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space remains fragmented. There are no formal methods and metrics to integrate the various HPC resilience techniques into composite solutions, nor are there methods to holistically evaluate the adequacy and efficacy of such solutions in terms of their protection coverage and their performance and power efficiency characteristics. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this paper, we develop a structured approach to the design, evaluation and optimization of HPC resilience using the concept of design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the problems caused by various types of faults, errors and failures in HPC systems and the techniques used to deal with these events. Each well-known solution that addresses a specific HPC resilience challenge is described in the form of a pattern. We develop a complete catalog of such resilience design patterns, which may be used by system architects, system software and tools developers, application programmers, as well as users and operators as essential building blocks when designing and deploying resilience solutions. We also develop a design framework that enhances a designer's understanding of the opportunities for integrating multiple patterns across layers of the system stack and the important constraints during implementation of the individual patterns. It is also useful for defining mechanisms and interfaces to coordinate flexible fault management across

  6. Effective modelling of percolation at the landscape scale using data-based approaches

    Science.gov (United States)

    Selle, Benny; Lischeid, Gunnar; Huwe, Bernd

    2008-06-01

    Process-based models have been extensively applied to assess the impact of landuse change on water quantity and quality at landscape scales. However, the routine application of those models suffers from large computational efforts, lack of transparency and the requirement of many input parameters. Data-based models such as Feed-Forward Multilayer Perceptrons (MLP) and Classification and Regression Trees (CART) may be used as effective models, i.e. simple approximations of complex process-based models. These data-based approaches can subsequently be applied for scenario analysis and as a transparent management tool, provided that climatic boundary conditions and the basic model assumptions of the process-based models do not change dramatically. In this study, we apply MLP, CART and Multiple Linear Regression (LR) to model the spatially distributed and spatially aggregated percolation in soils using weather, groundwater and soil data. The percolation data is obtained via numerical experiments with Hydrus1D. Thus, the complex process-based model is approximated using simpler data-based approaches. The MLP model explains most of the percolation variance in time and space without using any soil information. This reflects the effective dimensionality of the process-based model and suggests that percolation in the study area may be modelled much more simply than with Hydrus1D. The CART model shows that soil properties play a negligible role for percolation under wet climatic conditions. However, they become more important if the conditions turn drier. The LR method does not yield satisfactory predictions for the spatially distributed percolation; however, the spatially aggregated percolation is well approximated. This may indicate that the soils behave more simply (i.e. more linearly) when percolation dynamics are upscaled.
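As a toy illustration of the simplest of the three data-based approaches (the linear regression baseline), an ordinary least-squares fit for a single predictor can be written in a few lines; the data values are invented, not from the study:

```python
def fit_simple_lr(x, y):
    """Ordinary least squares for y ~ a + b*x: the slope is the ratio of
    the covariance of x and y to the variance of x, and the intercept
    follows from the means."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# hypothetical: spatially aggregated percolation [mm] vs. rainfall [mm]
# behaving near-linearly, as the abstract suggests for upscaled dynamics
a, b = fit_simple_lr([10, 20, 30, 40], [3, 7, 11, 15])
```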

  7. A novel design approach for small scale low enthalpy binary geothermal power plants

    International Nuclear Information System (INIS)

    Gabbrielli, Roberto

    2012-01-01

    Highlights: ► Off-design analysis of ORC geothermal power plants through the years and the days. ► Thermal degradation of the geothermal source largely reduces the plant performance. ► The plant capacity factor is low if the brine temperature is far from the design value. ► The performance through the plant life is more important than that at the design point. ► ORC geothermal power plants should be designed with the end-of-life brine temperature. - Abstract: In this paper a novel design approach for small-scale low-enthalpy binary geothermal power plants is proposed. After extraction, the hot water (brine) superheats an organic fluid (R134a) in a Rankine cycle and is then injected back underground. This causes the well-known thermal degradation of the geothermal resource over the years. Hence, binary geothermal power plants have to operate under conditions that vary largely during their life, and consequently most of their operation takes place in off-design conditions. In the novel approach proposed here, the design temperature of the geothermal resource is selected between its highest and lowest values, which correspond to the beginning and the end of the operative life of the geothermal power plant, respectively. Then, using a detailed off-design performance model, the optimal design point of the geothermal power plant is evaluated by maximizing the total actualized cash flow from the incentives for renewable power generation. Under different renewable energy incentive scenarios, the power plant designed using the lowest temperature of the geothermal resource always turns out to be the best option.

  8. A long-term, continuous simulation approach for large-scale flood risk assessments

    Science.gov (United States)

    Falter, Daniela; Schröter, Kai; Viet Dung, Nguyen; Vorogushyn, Sergiy; Hundecha, Yeshewatesfa; Kreibich, Heidi; Apel, Heiko; Merz, Bruno

    2014-05-01

    The Regional Flood Model (RFM) is a process-based model cascade developed for flood risk assessments of large-scale basins. RFM consists of four model parts: the rainfall-runoff model SWIM, a 1D channel routing model, a 2D hinterland inundation model and the flood loss estimation model for residential buildings FLEMOps+r. The model cascade recently underwent a proof-of-concept study at the Elbe catchment (Germany) to demonstrate that flood risk assessments, based on a continuous simulation approach including rainfall-runoff, hydrodynamic and damage estimation models, are feasible for large catchments. The results of this study indicated that uncertainties are significant, especially for hydrodynamic simulations. This was basically a consequence of low data quality and disregarding dike breaches. Therefore, RFM was applied with a refined hydraulic model setup for the Elbe tributary Mulde. The study area, the Mulde catchment, comprises about 6,000 km² and 380 river-km. The inclusion of more reliable information on overbank cross-sections and dikes considerably improved the results. For the application of RFM for flood risk assessments, long-term climate input data is needed to drive the model chain. This model input was provided by a multi-site, multi-variate weather generator that produces sets of synthetic meteorological data reproducing the current climate statistics. The data set comprises 100 realizations of 100 years of meteorological data. With the proposed continuous simulation approach of RFM, we simulated a virtual period of 10,000 years covering the entire flood risk chain including hydrological, 1D/2D hydrodynamic and flood damage estimation models. This provided a record of around 2,000 inundation events affecting the study area, with spatially detailed information on inundation depths and damage to residential buildings at a resolution of 100 m. This serves as basis for a spatially consistent flood risk assessment for the Mulde catchment presented in
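One benefit of such long synthetic series is that flood frequencies can be read off empirically; for example, return periods can be assigned to simulated annual maxima with Weibull plotting positions (a standard frequency-analysis technique, not something specific to RFM; the peak values below are invented):

```python
def empirical_return_periods(annual_maxima):
    """Weibull plotting positions: for N simulated years, the flood ranked
    m-th largest has an empirical return period of (N + 1) / m years.
    Returns (peak, return_period) pairs sorted from largest peak down."""
    n = len(annual_maxima)
    ranked = sorted(annual_maxima, reverse=True)
    return [(x, (n + 1) / m) for m, x in enumerate(ranked, start=1)]

peaks = [310, 450, 290, 520, 380]       # hypothetical annual maxima [m^3/s]
rp = empirical_return_periods(peaks)    # largest peak -> (5 + 1) / 1 = 6 years
```

With 10,000 simulated years, the same calculation yields stable estimates for far rarer events than a short observed record could support.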

  9. A new generic approach for estimating the concentrations of down-the-drain chemicals at catchment and national scale

    Energy Technology Data Exchange (ETDEWEB)

    Keller, V.D.J. [Centre for Ecology and Hydrology, Hydrological Risks and Resources, Maclean Building, Crowmarsh Gifford, Wallingford OX10 8BB (United Kingdom)]. E-mail: vke@ceh.ac.uk; Rees, H.G. [Centre for Ecology and Hydrology, Hydrological Risks and Resources, Maclean Building, Crowmarsh Gifford, Wallingford OX10 8BB (United Kingdom); Fox, K.K. [University of Lancaster (United Kingdom); Whelan, M.J. [Unilever Safety and Environmental Assurance Centre, Colworth (United Kingdom)

    2007-07-15

    A new generic approach for estimating chemical concentrations in rivers at catchment and national scales is presented. Domestic chemical loads in waste water are estimated using gridded population data. River flows are estimated by combining predicted runoff with topographically derived flow direction. Regional scale exposure is characterised by two summary statistics: PEC_works, the average concentration immediately downstream of emission points, and, PEC_area, the catchment-average chemical concentration. The method was applied to boron at national (England and Wales) and catchment (Aire-Calder) scales. Predicted concentrations were within 50% of measured mean values in the Aire-Calder catchment and in agreement with results from the GREAT-ER model. The concentration grids generated provide a picture of the spatial distribution of expected chemical concentrations at various scales, and can be used to identify areas of potentially high risk. - A new grid-based approach to predict spatially-referenced freshwater concentrations of domestic chemicals.
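The two summary statistics can be illustrated with a minimal dilution calculation; the per-capita load, population and flow figures below are hypothetical, not values from the paper:

```python
def pec_works(chemical_load_mg_day, river_flow_l_day):
    """Average concentration immediately downstream of an emission point
    [mg/L]: the chemical load discharged there divided by river flow."""
    return chemical_load_mg_day / river_flow_l_day

def pec_area(pec_values, reach_lengths_km):
    """Catchment-average concentration: a reach-length-weighted mean of
    the per-reach concentrations."""
    total = sum(reach_lengths_km)
    return sum(p * l for p, l in zip(pec_values, reach_lengths_km)) / total

population = 50_000      # served by the discharge point (hypothetical)
boron_g_cap_day = 0.6    # hypothetical per-capita down-the-drain load
flow_l_day = 1.0e9       # river flow at the discharge point

pec = pec_works(population * boron_g_cap_day * 1000, flow_l_day)  # mg/L
mean_pec = pec_area([pec, 0.01], [10.0, 30.0])
```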

  10. A review of analogue modelling of geodynamic processes: Approaches, scaling, materials and quantification, with an application to subduction experiments

    Science.gov (United States)

    Schellart, Wouter P.; Strak, Vincent

    2016-10-01

    We present a review of the analogue modelling method, which has been used for 200 years, and continues to be used, to investigate geological phenomena and geodynamic processes. We particularly focus on the following four components: (1) the different fundamental modelling approaches that exist in analogue modelling; (2) the scaling theory and scaling of topography; (3) the different materials and rheologies that are used to simulate the complex behaviour of rocks; and (4) a range of recording techniques that are used for qualitative and quantitative analyses and interpretations of analogue models. Furthermore, we apply these four components to laboratory-based subduction models and describe some of the issues at hand with modelling such systems. Over the last 200 years, a wide variety of analogue materials have been used with different rheologies, including viscous materials (e.g. syrups, silicones, water), brittle materials (e.g. granular materials such as sand, microspheres and sugar), plastic materials (e.g. plasticine), visco-plastic materials (e.g. paraffin, waxes, petrolatum) and visco-elasto-plastic materials (e.g. hydrocarbon compounds and gelatins). These materials have been used in many different set-ups to study processes from the microscale, such as porphyroclast rotation, to the mantle scale, such as subduction and mantle convection. Despite the wide variety of modelling materials and great diversity in model set-ups and processes investigated, all laboratory experiments can be classified into one of three different categories based on three fundamental modelling approaches that have been used in analogue modelling: (1) The external approach, (2) the combined (external + internal) approach, and (3) the internal approach. 
In the external approach and combined approach, energy is added to the experimental system through the external application of a velocity, temperature gradient or a material influx (or a combination thereof), and so the system is open

  11. A multi-scale experimental and simulation approach for fractured subsurface systems

    Science.gov (United States)

    Viswanathan, H. S.; Carey, J. W.; Frash, L.; Karra, S.; Hyman, J.; Kang, Q.; Rougier, E.; Srinivasan, G.

    2017-12-01

    Fractured systems play an important role in numerous subsurface applications including hydraulic fracturing, carbon sequestration, geothermal energy and underground nuclear test detection. Fractures ranging in scale from microns to meters, and their structure, control the behavior of these systems, which provide over 85% of our energy and 50% of US drinking water. Determining the key mechanisms in subsurface fractured systems has been impeded by the lack of sophisticated experimental methods to measure fracture aperture and connectivity, multiphase permeability, and chemical exchange capacities at the high temperatures, pressures, and stresses present in the subsurface. In this study, we developed and used the microfluidic and triaxial core flood experiments required to reveal the fundamental dynamics of fracture-fluid interactions. In addition, we have developed high-fidelity fracture propagation and discrete fracture network flow models to simulate these fractured systems. We have also developed reduced-order models of these fracture simulators in order to conduct uncertainty quantification for these systems. We demonstrate an integrated experimental/modeling approach that allows for a comprehensive characterization of fractured systems, and develop models that can be used to optimize the reservoir operating conditions over a range of subsurface conditions.

  12. Computational approach on PEB process in EUV resist: multi-scale simulation

    Science.gov (United States)

    Kim, Muyoung; Moon, Junghwan; Choi, Joonmyung; Lee, Byunghoon; Jeong, Changyoung; Kim, Heebom; Cho, Maenghyo

    2017-03-01

    For decades, downsizing has been a key issue for high performance and low cost of semiconductors, and extreme ultraviolet lithography is one of the promising candidates to achieve that goal. As a predominant process in extreme ultraviolet lithography for determining resolution and sensitivity, post-exposure bake has mainly been studied by experimental groups, but development of its photoresist is at a breaking point because the underlying mechanism of the process remains unveiled. Herein, we provide a theoretical approach to investigate the underlying mechanism of the post-exposure bake process in chemically amplified resist, covering three important reactions during the process: acid generation by photo-acid generator dissociation, acid diffusion, and deprotection. Density functional theory calculations (quantum mechanical simulation) were conducted to quantitatively predict the activation energies and probabilities of the chemical reactions, and these were applied to molecular dynamics simulation to construct a reliable computational model. Then, the overall chemical reactions were simulated in the molecular dynamics unit cell, and the final configuration of the photoresist was used to predict the line edge roughness. The presented multi-scale model unifies phenomena at both the quantum and atomic scales during the post-exposure bake process, and it will be helpful for understanding critical factors affecting the performance of the resulting photoresist and for designing next-generation materials.
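The link between the DFT step (activation energies) and the reaction kinetics used at the larger scale is typically an Arrhenius expression; the prefactor and barrier below are placeholders for illustration, not values from the study:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_rate(prefactor, ea_j_mol, temp_k):
    """Rate constant k = A * exp(-Ea / (R*T)); in a multi-scale workflow
    like the one described, Ea would come from the DFT calculations and
    k would parameterize the molecular dynamics step."""
    return prefactor * math.exp(-ea_j_mol / (R * temp_k))

# hypothetical deprotection barrier: a hotter bake accelerates the reaction
k_hot = arrhenius_rate(1e13, 9.0e4, 383.15)   # bake at 110 deg C
k_cold = arrhenius_rate(1e13, 9.0e4, 363.15)  # bake at 90 deg C
```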

  13. Neural ensemble communities: Open-source approaches to hardware for large-scale electrophysiology

    Science.gov (United States)

    Siegle, Joshua H.; Hale, Gregory J.; Newman, Jonathan P.; Voigts, Jakob

    2014-01-01

    One often-overlooked factor when selecting a platform for large-scale electrophysiology is whether or not a particular data acquisition system is “open” or “closed”: that is, whether or not the system’s schematics and source code are available to end users. Open systems have a reputation for being difficult to acquire, poorly documented, and hard to maintain. With the arrival of more powerful and compact integrated circuits, rapid prototyping services, and web-based tools for collaborative development, these stereotypes must be reconsidered. We discuss some of the reasons why multichannel extracellular electrophysiology could benefit from open-source approaches and describe examples of successful community-driven tool development within this field. In order to promote the adoption of open-source hardware and to reduce the need for redundant development efforts, we advocate a move toward standardized interfaces that connect each element of the data processing pipeline. This will give researchers the flexibility to modify their tools when necessary, while allowing them to continue to benefit from the high-quality products and expertise provided by commercial vendors. PMID:25528614

  14. Quantifying in-stream retention of nitrate at catchment scales using a practical mass balance approach.

    Science.gov (United States)

    Schwientek, Marc; Selle, Benny

    2016-02-01

    As field data on in-stream nitrate retention is scarce at catchment scales, this study aimed at quantifying net retention of nitrate within the entire river network of a fourth-order stream. For this purpose, a practical mass balance approach combined with a Lagrangian sampling scheme was applied and seasonally repeated to estimate daily in-stream net retention of nitrate for a 17.4 km long, agriculturally influenced, segment of the Steinlach River in southwestern Germany. This river segment represents approximately 70% of the length of the main stem and about 32% of the streambed area of the entire river network. Sampling days in spring and summer were biogeochemically more active than in autumn and winter. Results obtained for the main stem of Steinlach River were subsequently extrapolated to the stream network in the catchment. It was demonstrated that, for baseflow conditions in spring and summer, in-stream nitrate retention could sum up to a relevant term of the catchment's nitrogen balance if the entire stream network was considered.
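The practical mass balance underlying such daily net-retention estimates can be sketched as follows; the function names and example units are our own, not from the study:

```python
def load_kg_per_day(conc_mg_l, discharge_m3_s):
    """Nitrate load from concentration (mg/L) and discharge (m3/s).
    mg/L equals g/m3, so conc * discharge is g/s; convert to kg/day."""
    return conc_mg_l * discharge_m3_s * 86400.0 / 1000.0

def net_nitrate_retention(load_in_kg_d, lateral_in_kg_d, load_out_kg_d):
    """Daily in-stream net retention (kg/d) for a river segment:
    everything entering (upstream plus tributary/lateral inflows)
    minus what leaves downstream. Positive values = net retention."""
    return load_in_kg_d + lateral_in_kg_d - load_out_kg_d
```

With Lagrangian sampling, the upstream and downstream loads are measured on the same parcel of water, so the difference can be attributed to in-stream turnover rather than to hydrograph timing.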

  15. A space and time scale-dependent nonlinear geostatistical approach for downscaling daily precipitation and temperature

    KAUST Repository

    Jha, Sanjeev Kumar

    2015-07-21

    A geostatistical framework is proposed to downscale daily precipitation and temperature. The methodology is based on multiple-point geostatistics (MPS), where a multivariate training image is used to represent the spatial relationship between daily precipitation and daily temperature over several years. Here, the training image consists of daily rainfall and temperature outputs from the Weather Research and Forecasting (WRF) model at 50 km and 10 km resolution for a twenty-year period ranging from 1985 to 2004. The data are used to predict downscaled climate variables for the year 2005. The result, for each downscaled pixel, is a daily time series of precipitation and temperature that are spatially dependent. Comparison of predicted precipitation and temperature against a reference dataset indicates that both the seasonal average climate response and the temporal variability are well reproduced. The explicit inclusion of time dependence is explored by considering the climate properties of the previous day as an additional variable. Comparison of simulations with and without inclusion of time dependence shows that the temporal dependence only slightly improves the daily prediction because the temporal variability is already well represented in the conditioning data. Overall, the study shows that the multiple-point geostatistics approach is an efficient tool for statistical downscaling to obtain local scale estimates of precipitation and temperature from General Circulation Models.

  16. Effective use of integrated hydrological models in basin-scale water resources management: surrogate modeling approaches

    Science.gov (United States)

    Zheng, Y.; Wu, B.; Wu, X.

    2015-12-01

    Integrated hydrological models (IHMs) consider surface water and subsurface water as a unified system, and have been widely adopted in basin-scale water resources studies. However, due to IHMs' mathematical complexity and high computational cost, it is difficult to implement them in an iterative model evaluation process (e.g., Monte Carlo simulation, simulation-optimization analysis, etc.), which diminishes their applicability for supporting decision-making in real-world situations. Our studies investigated how to effectively use complex IHMs to address real-world water issues via surrogate modeling. Three surrogate modeling approaches were considered: 1) DYCORS (DYnamic COordinate search using Response Surface models), a well-established response surface-based optimization algorithm; 2) SOIM (Surrogate-based Optimization for Integrated surface water-groundwater Modeling), a response surface-based optimization algorithm that we developed specifically for IHMs; and 3) the Probabilistic Collocation Method (PCM), a stochastic response surface approach. Our investigation was based on a modeling case study in the Heihe River Basin (HRB), China's second largest endorheic river basin, where the GSFLOW (Coupled Ground-Water and Surface-Water Flow Model) model was employed. Two decision problems were discussed: one is to optimize, both in time and in space, the conjunctive use of surface water and groundwater for agricultural irrigation in the middle HRB region; the other is to cost-effectively collect hydrological data based on a data-worth evaluation. Overall, our results highlight the value of incorporating an IHM in making decisions on water resources management and hydrological data collection. An IHM like GSFLOW provides great flexibility in formulating proper objective functions and constraints for various optimization problems. On the other hand, it has been demonstrated that surrogate modeling approaches can pave the path for such incorporation in real-world situations.
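The response-surface idea behind surrogate-based optimizers such as DYCORS can be illustrated with a minimal sketch, assuming a toy one-dimensional objective standing in for an expensive IHM run (all names and settings hypothetical; real implementations use radial basis function surrogates and dynamic coordinate search):

```python
import random

def expensive_model(x):
    # Stand-in for an expensive IHM run (hypothetical objective).
    return (x - 0.3) ** 2

def surrogate(x, samples):
    # Inverse-distance-weighted interpolation over evaluated points:
    # a cheap stand-in for a fitted response surface.
    num = den = 0.0
    for xi, fi in samples:
        d = abs(x - xi)
        if d < 1e-12:
            return fi
        w = 1.0 / d ** 2
        num += w * fi
        den += w
    return num / den

def surrogate_optimize(n_init=5, n_iter=20, seed=0):
    rng = random.Random(seed)
    samples = [(x, expensive_model(x))
               for x in (rng.uniform(0.0, 1.0) for _ in range(n_init))]
    for _ in range(n_iter):
        best_x = min(samples, key=lambda s: s[1])[0]
        # Perturb the incumbent, score candidates on the cheap surrogate,
        # then spend one true (expensive) evaluation on the best candidate.
        cands = [min(max(best_x + rng.gauss(0.0, 0.1), 0.0), 1.0)
                 for _ in range(50)]
        x_new = min(cands, key=lambda c: surrogate(c, samples))
        samples.append((x_new, expensive_model(x_new)))
    return min(samples, key=lambda s: s[1])
```

The key economy is that each iteration costs one true model run, while the many candidate evaluations hit only the surrogate.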

  17. Large scale atomistic approaches to thermal transport and phonon scattering in nanostructured materials

    Science.gov (United States)

    Savic, Ivana

    2012-02-01

    Decreasing the thermal conductivity of bulk materials by nanostructuring and dimensionality reduction, or by introducing some amount of disorder, represents a promising strategy in the search for efficient thermoelectric materials [1]. For example, considerable improvements of the thermoelectric efficiency in nanowires with surface roughness [2], superlattices [3] and nanocomposites [4] have been attributed to a significantly reduced thermal conductivity. In order to accurately describe thermal transport processes in complex nanostructured materials and directly compare with experiments, the development of theoretical and computational approaches that can account for both anharmonic and disorder effects in large samples is highly desirable. We will first summarize the strengths and weaknesses of the standard atomistic approaches to thermal transport (molecular dynamics [5], Boltzmann transport equation [6] and Green's function approach [7]). We will then focus on the methods based on the solution of the Boltzmann transport equation, which at present are computationally too demanding to treat large scale systems and thus to investigate realistic materials. We will present a Monte Carlo method [8] to solve the Boltzmann transport equation in the relaxation time approximation [9], which enables computation of the thermal conductivity of ordered and disordered systems with a number of atoms up to an order of magnitude larger than feasible with straightforward integration. We will present a comparison between exact and Monte Carlo Boltzmann transport results for small SiGe nanostructures and then use the Monte Carlo method to analyze the thermal properties of realistic SiGe nanostructured materials. This work is done in collaboration with Davide Donadio, Francois Gygi, and Giulia Galli from UC Davis. [1] See e.g. A. J. Minnich, M. S. Dresselhaus, Z. F. Ren, and G. Chen, Energy Environ. Sci. 2, 466 (2009). [2] A. I. Hochbaum et al, Nature 451, 163 (2008).
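The Monte Carlo estimation of the relaxation-time-approximation conductivity sum can be sketched as follows, with synthetic mode data standing in for a real phonon calculation (a schematic of the sampling idea, not the authors' code):

```python
import random

def kappa_full(modes):
    """RTA thermal conductivity: kappa = (1/3) * sum over phonon modes
    of heat capacity C * group velocity v^2 * relaxation time tau."""
    return sum(c * v * v * tau for c, v, tau in modes) / 3.0

def kappa_monte_carlo(modes, n_samples, seed=0):
    """Estimate the same sum by uniform random sampling of modes,
    the trick that lets the BTE-RTA sum reach much larger systems
    than exhaustive integration over all modes."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        c, v, tau = rng.choice(modes)
        total += c * v * v * tau
    # rescale the sample mean back to the full sum
    return total * len(modes) / (3.0 * n_samples)
```

The estimator converges as 1/sqrt(n_samples), so the sample count, not the mode count, sets the cost.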

  18. Fixed target beams

    CERN Document Server

    Kain, V; Cettour-Cave, S; Cornelis, K; Fraser, M A; Gatignon, L; Goddard, B; Velotti, F

    2017-01-01

    The CERN SPS (Super Proton Synchrotron) serves as LHC injector and provides beam for the North Area fixed target experiments. At low energy, the vertical acceptance becomes critical with high intensity, large emittance fixed target beams. Optimizing the available vertical aperture is a key ingredient to optimize transmission and reduce activation around the ring. During the 2016 run a tool was developed to provide an automated local aperture scan around the entire ring. The flux of particles slow extracted with the 1/3 integer resonance from the Super Proton Synchrotron at CERN should ideally be constant over the length of the extraction plateau, for optimum use of the beam by the fixed target experiments in the North Area. The extracted intensity is controlled in feed-forward correction of the horizontal tune via the main SPS quadrupoles. The mains power supply noise at 50 Hz and harmonics is also corrected in feed-forward by small amplitude tune modulation at the respective frequencies with a dedicated additional quad...

  19. Social pedagogy: an approach without fixed recipes

    DEFF Research Database (Denmark)

    Rothuizen, Jan Jakob Egbert; Harbo, Lotte Junker

    2017-01-01

    A historical and theoretical reconstruction of the specificity and peculiarity of the discipline of social pedagogy, as it has developed in Denmark. Social pedagogy takes its departure from the idea that the individual person and the community are complementary but at the same time opposed to each...

  20. CERN: Fixed target targets

    International Nuclear Information System (INIS)

    Anon.

    1993-01-01

    Full text: While the immediate priority of CERN's research programme is to exploit to the full the world's largest accelerator, the LEP electron-positron collider and its concomitant LEP200 energy upgrade (January, page 1), CERN is also mindful of its long tradition of diversified research. Away from LEP and preparations for the LHC proton-proton collider to be built above LEP in the same 27-kilometre tunnel, CERN is also preparing for a new generation of heavy ion experiments using a new source, providing heavier ions (April 1992, page 8), with first physics expected next year. CERN's smallest accelerator, the LEAR Low Energy Antiproton Ring continues to cover a wide range of research topics, and saw a record number of hours of operation in 1992. The new ISOLDE on-line isotope separator was inaugurated last year (July, page 5) and physics is already underway. The remaining effort concentrates around fixed target experiments at the SPS synchrotron, which formed the main thrust of CERN's research during the late 1970s. With the SPS and LEAR now approaching middle age, their research future was extensively studied last year. Broadly, a vigorous SPS programme looks assured until at least the end of 1995. Decisions for the longer term future of the West Experimental Area of the SPS will have to take into account the heavy demand for test beams from work towards experiments at big colliders, both at CERN and elsewhere. The North Experimental Area is the scene of larger experiments with longer lead times. Several more years of LEAR exploitation are already in the pipeline, but for the longer term, the ambitious Superlear project for a superconducting ring (January 1992, page 7) did not catch on. Neutrino physics has a long tradition at CERN, and this continues with the preparations for two major projects, the Chorus and Nomad experiments (November 1991, page 7), to start next year in the West Area. Delicate neutrino oscillation effects could become

  1. A multi-scale approach to monitor urban carbon-dioxide emissions in the atmosphere over Vancouver, Canada

    Science.gov (United States)

    Christen, A.; Crawford, B.; Ketler, R.; Lee, J. K.; McKendry, I. G.; Nesic, Z.; Caitlin, S.

    2015-12-01

    Measurements of long-lived greenhouse gases in the urban atmosphere are potentially useful to constrain and validate urban emission inventories, or space-borne remote-sensing products. We summarize and compare three different approaches, operating at different scales, that directly or indirectly identify, attribute and quantify emissions (and uptake) of carbon dioxide (CO2) in urban environments. All three approaches are illustrated using in-situ measurements in the atmosphere in and over Vancouver, Canada. Mobile sensing may be a promising way to quantify and map CO2 mixing ratios at fine scales across heterogeneous and complex urban environments. We developed a system for monitoring CO2 mixing ratios at street level using a network of mobile CO2 sensors deployable on vehicles and bikes. A total of 5 prototype sensors were built and simultaneously used in a measurement campaign across a range of urban land use types and densities within a short time frame (3 hours). The dataset is used to aid in fine scale emission mapping in combination with simultaneous tower-based flux measurements. Overall, calculated CO2 emissions are realistic when compared against a spatially disaggregated scale emission inventory. The second approach is based on mass flux measurements of CO2 using a tower-based eddy covariance (EC) system. We present a continuous 7-year long dataset of CO2 fluxes measured by EC at the 28m tall flux tower 'Vancouver-Sunset'. We show how this dataset can be combined with turbulent source area models to quantify and partition different emission processes at the neighborhood-scale. The long-term EC measurements are within 10% of a spatially disaggregated scale emission inventory. Thirdly, at the urban scale, we present a dataset of CO2 mixing ratios measured using a tethered balloon system in the urban boundary layer above Vancouver. Using a simple box model, net city-scale CO2 emissions can be determined using measured rate of change of CO2 mixing ratios
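The simple box model mentioned for the tethered-balloon data can be sketched as follows; the well-mixed, no-advection assumptions and the example numbers are illustrative, not values from the Vancouver campaign:

```python
def box_model_emission(dc_dt_ppm_h, mixing_height_m, air_density_mol_m3=41.6):
    """Net city-scale CO2 surface flux inferred from the rate of change
    of the boundary-layer mean mixing ratio, assuming a well-mixed box
    with no horizontal advection (both strong simplifications).
    Returns flux in micromol CO2 per m2 per second."""
    # ppm per hour -> mol CO2 per mol air per second
    dc_dt = dc_dt_ppm_h * 1e-6 / 3600.0
    # column-integrated rate of change per unit surface area
    return dc_dt * air_density_mol_m3 * mixing_height_m * 1e6

# Example (hypothetical): CO2 rising 2 ppm/h under a 500 m mixed layer
flux = box_model_emission(2.0, 500.0)  # ~ 11.6 umol m-2 s-1
```

The default molar air density corresponds roughly to surface conditions; in practice both the mixed-layer depth and entrainment at its top dominate the uncertainty.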

  2. What scaling means in wind engineering: Complementary role of the reduced scale approach in a BLWT and the full scale testing in a large climatic wind tunnel

    Science.gov (United States)

    Flamand, Olivier

    2017-12-01

    Wind engineering problems are commonly studied by wind tunnel experiments at a reduced scale. This introduces several limitations and calls for a careful planning of the tests and the interpretation of the experimental results. The talk first revisits the similitude laws and discusses how they are actually applied in wind engineering. It will also remind readers why different scaling laws govern in different wind engineering problems. Secondly, the paper focuses on the ways to simplify a detailed structure (bridge, building, platform) when fabricating the downscaled models for the tests. This will be illustrated by several examples from recent engineering projects. Finally, under the most severe weather conditions, manmade structures and equipment should remain operational. What “recreating the climate” means and aims to achieve will be illustrated through common practice in climatic wind tunnel modelling.
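As one example of why different scaling laws govern different problems, Froude similitude (relevant when gravity dominates) fixes how model velocities and times shrink with the geometric scale; the helper below is a hypothetical illustration, not from the talk:

```python
import math

def froude_scaled(length_ratio, v_full, t_full):
    """Froude similitude (gravity-dominated problems): with model/full
    length ratio lam, model velocities and times both scale by
    sqrt(lam), while the Reynolds number is inevitably mismatched --
    one reason full scale climatic wind tunnel tests remain necessary."""
    v_model = v_full * math.sqrt(length_ratio)
    t_model = t_full * math.sqrt(length_ratio)
    return v_model, t_model

# e.g. a 1:100 model of a structure seeing 10 m/s gusts over 60 s
v_m, t_m = froude_scaled(0.01, 10.0, 60.0)  # ~ (1.0, 6.0)
```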

  3. Instantaneous variance scaling of AIRS thermodynamic profiles using a circular area Monte Carlo approach

    Science.gov (United States)

    Dorrestijn, Jesse; Kahn, Brian H.; Teixeira, João; Irion, Fredrick W.

    2018-05-01

    Satellite observations are used to obtain vertical profiles of variance scaling of temperature (T) and specific humidity (q) in the atmosphere. A higher spatial resolution nadir retrieval at 13.5 km complements previous Atmospheric Infrared Sounder (AIRS) investigations with 45 km resolution retrievals and enables the derivation of power law scaling exponents to length scales as small as 55 km. We introduce a variable-sized circular-area Monte Carlo methodology to compute exponents instantaneously within the swath of AIRS that yields additional insight into scaling behavior. While this method is approximate and some biases are likely to exist within non-Gaussian portions of the satellite observational swaths of T and q, this method enables the estimation of scale-dependent behavior within instantaneous swaths for individual tropical and extratropical systems of interest. Scaling exponents are shown to fluctuate between β = -1 and -3 at scales ≥ 500 km, while at scales ≤ 500 km they are typically near β ≈ -2, with q slightly lower than T at the smallest scales observed. In the extratropics, the large-scale β is near -3. Within the tropics, however, the large-scale β for T is closer to -1 as small-scale moist convective processes dominate. In the tropics, q exhibits large-scale β between -2 and -3. The values of β are generally consistent with previous works of either time-averaged spatial variance estimates, or aircraft observations that require averaging over numerous flight observational segments. The instantaneous variance scaling methodology is relevant for cloud parameterization development and the assessment of time variability of scaling exponents.
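The core of any such scaling-exponent estimate is a least-squares fit in log-log space; the circular-area Monte Carlo sampling selects the (scale, variance) pairs that then feed a fit like this hypothetical one:

```python
import math

def loglog_slope(x, y):
    """Ordinary least-squares slope in log-log space, used to estimate
    power-law scaling exponents from variance-vs-scale pairs."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx = sum(lx) / n
    my = sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# Synthetic check: variance ~ scale**1.5 should recover a slope of 1.5
scales = [55.0, 110.0, 220.0, 440.0, 880.0]  # km
variances = [s ** 1.5 for s in scales]
```

Converting a variance-scaling slope to a spectral exponent beta depends on the convention in use, so that step is left out here.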

  4. A modular approach to large-scale design optimization of aerospace systems

    Science.gov (United States)

    Hwang, John T.

    Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. This is addressed by a novel parametrization that smoothly interpolates aircraft
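The adjoint idea that makes tens of thousands of design variables tractable can be shown on a scalar model problem (residual R(u, p) = p*u - b = 0, objective J = u**2); this is an illustration of the method, not code from the thesis:

```python
def adjoint_gradient(p, b):
    """Adjoint-method gradient for the scalar model problem
    R(u, p) = p*u - b = 0 with objective J(u) = u**2.
    One state solve plus one adjoint solve yields dJ/dp; with many
    parameters the same two solves still suffice, which is what makes
    tens of thousands of design variables affordable."""
    u = b / p                # state solve: R(u, p) = 0
    dRdu, dRdp = p, u        # partial derivatives of the residual
    dJdu = 2.0 * u
    lam = dJdu / dRdu        # adjoint solve: dRdu^T * lam = dJ/du
    return -lam * dRdp       # total derivative: dJ/dp = -lam^T * dR/dp
```

Analytically J(p) = (b/p)**2, so dJ/dp = -2*b**2/p**3, which the adjoint expression reproduces without ever forming dJ/dp directly.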

  5. a Stochastic Approach to Multiobjective Optimization of Large-Scale Water Reservoir Networks

    Science.gov (United States)

    Bottacin-Busolin, A.; Worman, A. L.

    2013-12-01

    A main challenge for the planning and management of water resources is the development of multiobjective strategies for operation of large-scale water reservoir networks. The optimal sequence of water releases from multiple reservoirs depends on the stochastic variability of correlated hydrologic inflows and on various processes that affect water demand and energy prices. Although several methods have been suggested, large-scale optimization problems arising in water resources management are still plagued by the high dimensional state space and by the stochastic nature of the hydrologic inflows. In this work, the optimization of reservoir operation is approached using approximate dynamic programming (ADP) with policy iteration and function approximators. The method is based on an off-line learning process in which operating policies are evaluated for a number of stochastic inflow scenarios, and the resulting value functions are used to design new, improved policies until convergence is attained. A case study is presented of a multi-reservoir system in the Dalälven River, Sweden, which includes 13 interconnected reservoirs and 36 power stations. Depending on the late spring and summer peak discharges, the lowlands adjacent to Dalälven can often be flooded during the summer period, and the presence of stagnating floodwater during the hottest months of the year is the cause of a large proliferation of mosquitos, which is a major problem for the people living in the surroundings. Chemical pesticides are currently being used as a preventive countermeasure, which do not provide an effective solution to the problem and have adverse environmental impacts. In this study, ADP was used to analyze the feasibility of alternative operating policies for reducing the flood risk at a reasonable economic cost for the hydropower companies. To this end, mid-term operating policies were derived by combining flood risk reduction with hydropower production objectives. The performance
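A tabular toy version of the underlying dynamic programming recursion is sketched below; the ADP scheme in the study replaces the table with fitted value functions and off-line learning over inflow scenarios. All quantities here (grid size, candidate releases, reward) are hypothetical:

```python
def reservoir_value_iteration(s_max, inflow_samples, reward,
                              n_levels=21, gamma=0.95, n_sweeps=100):
    """Tabular stochastic value iteration for a single reservoir:
    a toy stand-in for ADP with policy iteration, which swaps this
    table for function approximators when the state space is large."""
    grid = [s_max * i / (n_levels - 1) for i in range(n_levels)]
    v = [0.0] * n_levels

    def v_interp(s):
        # linear interpolation of the value table between grid points
        s = min(max(s, 0.0), s_max)
        pos = s / s_max * (n_levels - 1)
        i = min(int(pos), n_levels - 2)
        frac = pos - i
        return v[i] * (1.0 - frac) + v[i + 1] * frac

    for _ in range(n_sweeps):
        new_v = []
        for s in grid:
            best = float("-inf")
            for r in (0.0, 0.25 * s, 0.5 * s, s):   # candidate releases
                exp_val = 0.0
                for q in inflow_samples:            # expectation over inflows
                    s_next = min(max(s - r + q, 0.0), s_max)
                    exp_val += reward(s, r) + gamma * v_interp(s_next)
                best = max(best, exp_val / len(inflow_samples))
            new_v.append(best)
        v = new_v
    return grid, v
```

A flood-risk objective would simply add a penalty to `reward` when storage approaches the spill level, which is how competing hydropower and flood objectives enter the same recursion.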

  6. Topographic mapping on large-scale tidal flats with an iterative approach on the waterline method

    Science.gov (United States)

    Kang, Yanyan; Ding, Xianrong; Xu, Fan; Zhang, Changkuan; Ge, Xiaoping

    2017-05-01

    Tidal flats, which are both a natural ecosystem and a type of landscape, are of significant importance to ecosystem function and land resource potential. Morphologic monitoring of tidal flats has become increasingly important with respect to achieving sustainable development targets. Remote sensing is an established technique for the measurement of topography over tidal flats; of the available methods, the waterline method is particularly effective for constructing a digital elevation model (DEM) of intertidal areas. However, application of the waterline method is more limited in large-scale, shifting tidal flat areas, where the tides are not synchronized and the waterline is not a quasi-contour line. For this study, a topographic map of the intertidal regions within the Radial Sand Ridges (RSR) along the Jiangsu Coast, China, was generated using an iterative approach on the waterline method. A series of 21 multi-temporal satellite images (18 HJ-1A/B CCD and three Landsat TM/OLI) of the RSR area, collected at different water levels within a five-month period (31 December 2013-28 May 2014), was used to extract waterlines based on feature extraction techniques and further manual refinement. These 'remotely-sensed waterlines' were combined with the corresponding water levels from the 'model waterlines' simulated by a hydrodynamic model with an initial generalized DEM of exposed tidal flats. Based on the 21 heighted 'remotely-sensed waterlines', a DEM was constructed using the ANUDEM interpolation method. Using this new DEM as the input data, it was re-entered into the hydrodynamic model, and a new round of water level assignment of waterlines was performed. A third and final output DEM was generated covering an area of approximately 1900 km2 of tidal flats in the RSR. The water level simulation accuracy of the hydrodynamic model was within 0.15 m based on five real-time tide stations, and the height accuracy (root mean square error) of the final DEM was 0.182 m
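The iteration loop of the waterline method can be sketched as a skeleton in which the hydrodynamic model and the ANUDEM interpolation are placeholder callables (everything here is schematic, not the authors' implementation):

```python
def waterline_dem(waterlines, assign_levels, interpolate,
                  n_iter=3, dem0=None):
    """Skeleton of the iterative waterline method: height each
    remotely sensed waterline with a water level simulated by a
    hydrodynamic model run on the current DEM, interpolate a new DEM
    from the heighted waterlines, and repeat.

    assign_levels(dem, waterlines) -> list of (waterline, level) pairs
        stands in for the hydrodynamic model's water-level assignment;
    interpolate(heighted) -> dem
        stands in for the ANUDEM interpolation step."""
    dem = dem0  # initial generalized DEM (may be None / very coarse)
    for _ in range(n_iter):
        heighted = assign_levels(dem, waterlines)
        dem = interpolate(heighted)
    return dem
```

Each pass feeds a better DEM back into the water-level simulation, which is what breaks the circular dependence between topography and assigned waterline heights.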

  7. Approaches to Debugging at Scale on the Peregrine System

    Science.gov (United States)

    Several approaches are possible for debugging at scale on the Peregrine system. Approach 1: Run an Interactive Job. Submit an interactive job asking for the number of nodes required; type exit to end the interactive job, and then type exit again to end the screen session. Approach 2: Request a

  8. A rank-based approach for correcting systematic biases in spatial disaggregation of coarse-scale climate simulations

    Science.gov (United States)

    Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish

    2017-07-01

    Use of General Circulation Model (GCM) precipitation and evapotranspiration sequences for hydrologic modelling can result in unrealistic simulations due to the coarse scales at which GCMs operate and the systematic biases they contain. The Bias Correction Spatial Disaggregation (BCSD) method is a popular statistical downscaling and bias correction method developed to address this issue. The advantage of BCSD is its ability to reduce biases in the distribution of precipitation totals at the GCM scale and then introduce more realistic variability at finer scales than simpler spatial interpolation schemes. Although BCSD corrects biases at the GCM scale before disaggregation; at finer spatial scales biases are re-introduced by the assumptions made in the spatial disaggregation process. Our study focuses on this limitation of BCSD and proposes a rank-based approach that aims to reduce the spatial disaggregation bias especially for both low and high precipitation extremes. BCSD requires the specification of a multiplicative bias correction anomaly field that represents the ratio of the fine scale precipitation to the disaggregated precipitation. It is shown that there is significant temporal variation in the anomalies, which is masked when a mean anomaly field is used. This can be improved by modelling the anomalies in rank-space. Results from the application of the rank-BCSD procedure improve the match between the distributions of observed and downscaled precipitation at the fine scale compared to the original BCSD approach. Further improvements in the distribution are identified when a scaling correction to preserve mass in the disaggregation process is implemented. An assessment of the approach using a single GCM over Australia shows clear advantages especially in the simulation of particularly low and high downscaled precipitation amounts.
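The rank-space idea can be sketched as a quantile-matching correction; this is a schematic reading of the approach, not the authors' implementation, and assumes equal-length series for simplicity:

```python
def rank_based_correction(coarse_series, fine_reference):
    """Rank-space (quantile-matching) correction: each disaggregated
    value is replaced by the fine-scale reference value holding the
    same rank, so both low and high extremes inherit the fine-scale
    distribution instead of a mean multiplicative anomaly."""
    ranked_ref = sorted(fine_reference)
    # indices of the coarse series ordered by value (i.e., by rank)
    order = sorted(range(len(coarse_series)),
                   key=lambda i: coarse_series[i])
    out = [0.0] * len(coarse_series)
    for rank, idx in enumerate(order):
        out[idx] = ranked_ref[rank]
    return out
```

The corrected series keeps the temporal ordering of the disaggregated input while exactly matching the fine-scale distribution, which is why extremes improve relative to a mean anomaly field.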

  9. Fixed term employment

    International Nuclear Information System (INIS)

    Durant, B.W.; Schonberner, M.J.

    1999-01-01

    A series of brief notes was included with this presentation which highlighted certain aspects of contract management. Several petroleum companies have realized the benefits of using contract personnel to control fixed G&A, manage the impacts on their organization, contain costs, manage termination costs, and fill gaps in lean personnel rosters. An independent contractor was described as someone who is self-employed, often with a variety of work experiences. The tax benefits and flexibility of contractor personnel were also described, as were some liability aspects of hiring an independent contractor. The courts have developed the following four tests to help determine whether an individual is an employee or an independent contractor: (1) the control test, (2) the business integration test, (3) the specific result test, and (4) the economic reality test

  10. Fixed Access Network Sharing

    Science.gov (United States)

    Cornaglia, Bruno; Young, Gavin; Marchetta, Antonio

    2015-12-01

    Fixed broadband network deployments are moving inexorably to the use of Next Generation Access (NGA) technologies and architectures. These NGA deployments involve building fiber infrastructure increasingly closer to the customer in order to increase the proportion of fiber on the customer's access connection (Fibre-To-The-Home/Building/Door/Cabinet… i.e. FTTx). This increases the speed of services that can be sold and will be increasingly required to meet the demands of new generations of video services as we evolve from HDTV to "Ultra-HD TV" with 4k and 8k lines of video resolution. However, building fiber access networks is a costly endeavor. It requires significant capital to achieve any substantial geographic coverage. Hence many companies are forming partnerships and joint ventures in order to share the NGA network construction costs. One form of such a partnership involves two companies agreeing to each build out coverage of a certain geographic area and then "cross-selling" NGA products to each other in order to access customers within their partner's footprint (NGA coverage area). This is tantamount to a bilateral wholesale partnership. The concept of Fixed Access Network Sharing (FANS) is to address the possibility of sharing infrastructure with a high degree of flexibility for all network operators involved. By providing greater configuration control over the NGA network infrastructure, a service provider has a greater ability to define the network and hence to define its product capabilities at the active layer. This gives the service provider partners greater product development autonomy plus the ability to differentiate from each other at the active network layer.

  11. Thermal fatigue of austenitic stainless steel: influence of surface conditions through a multi-scale approach

    International Nuclear Information System (INIS)

    Le-Pecheur, Anne

    2008-01-01

    Some cases of cracking of 304L austenitic stainless steel components due to thermal fatigue have been encountered, in particular on the Residual Heat Removal (RHR) circuits of Pressurized Water Reactors (PWR). EDF has initiated an R&D program to understand and assess the risks of damage in nuclear plant mixing zones. The INTHERPOL test developed at EDF is designed to perform pure thermal fatigue tests on tubular specimens under mono-frequency thermal load. These tests are carried out under various loadings, surface finish qualities and welding conditions in order to assess the influence of these parameters on crack initiation. The main topic of this study is the search for a fatigue criterion using a micro-macro modelling approach. The first part of the work deals with material characterization (stainless steel 304L), emphasising the specificities of the surface roughness and its link with a strong hardening gradient. The first characterization results at the surface show a strong work-hardening gradient over a 250 micron layer. This gradient does not evolve after thermal cycling. Micro hardness measurements and TEM observations were used intensively to characterize this gradient. The second part is the macroscopic modelling of the INTHERPOL tests in order to determine the components of the stress and strain tensors due to thermal cycling. The third part of the work evaluates the effect of surface roughness and the hardening gradient using a calculation on a finer scale, based on the variation of dislocation density. A goal for the future is the determination of a fatigue criterion mainly based on polycrystalline modelling, with stored energy or critical plane approaches available to allow a sound choice of criterion. (author)

  12. Assessing Weather-Yield Relationships in Rice at Local Scale Using Data Mining Approaches.

    Directory of Open Access Journals (Sweden)

    Sylvain Delerce

    Full Text Available Seasonal and inter-annual climate variability have become important issues for farmers, and climate change has been shown to increase them. Simultaneously, farmers and agricultural organizations are increasingly collecting observational data about in situ crop performance. Agriculture thus needs new tools to cope with changing environmental conditions and to take advantage of these data. Data mining techniques make it possible to extract embedded knowledge associated with farmer experiences from these large observational datasets in order to identify best practices for adapting to climate variability. We introduce new approaches through a case study on irrigated and rainfed rice in Colombia. Preexisting observational datasets of commercial harvest records were combined with in situ daily weather series. Using Conditional Inference Forest and clustering techniques, we assessed the relationships between climatic factors and crop yield variability at the local scale for specific cultivars and growth stages. The analysis showed clear relationships in the various location-cultivar combinations, with climatic factors explaining 6 to 46% of spatiotemporal variability in yield, and with crop responses to weather being non-linear and cultivar-specific. Climatic factors affected cultivars differently during each stage of development. For instance, one cultivar was affected by high nighttime temperatures in the reproductive stage but responded positively to accumulated solar radiation during the ripening stage. Another was affected by high nighttime temperatures during both the vegetative and reproductive stages. Clustering of the weather patterns corresponding to individual cropping events revealed different groups of weather patterns for irrigated and rainfed systems with contrasting yield levels. Best-suited cultivars were identified for some weather patterns, making weather-site-specific recommendations possible. This study illustrates the potential of

  13. Integrated Approach for Improving Small Scale Market Oriented Dairy Systems in Pakistan: Economic Impact of Interventions

    Directory of Open Access Journals (Sweden)

    A. Ghaffar

    2010-02-01

    Full Text Available The International Atomic Energy Agency (IAEA) launched a Coordinated Research Program in 10 developing countries, including Pakistan, involving small scale market oriented dairy farmers to identify and prioritize the constraints and opportunities in the selected dairy farms, develop intervention strategies, and assess the economic impact of the interventions. Interventions in animal health (control of mastitis at the sub-clinical stage and reduction in calf mortality), nutrition (balanced feed), reproduction (mineral supplementation), and general management (training of farmers) were identified and implemented in a participatory approach at the selected dairy farms. Calf mortality up to the age of 3 months was reduced from 35 to 13 percent. Use of Alfa Deval post-milking teat dips reduced the incidence of sub-clinical mastitis from 34 to 5%, showing the economic benefits of the interventions. The partial budget technique was used to analyze their impact in the registered herds. The farmers recorded monthly quantities of different feed ingredients and seasonal green fodder offered to the animals. From this data set, total metabolizable energy requirements and availability from feed were computed, which revealed that animals were deficient in metabolizable energy at all locations. This was also confirmed by seasonal variation in body condition scoring. At some selected farms, a mineral mixture supplement was introduced, which increased milk yield by 5% and shortened the service period by 30 days. Three training sessions were arranged to teach the farmers newborn calf care, daily farm management, and efficient heat detection, in order to enhance the overall income of the farmers. The overall income of the farm increased by 40%.
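The partial budget technique mentioned above reduces to simple arithmetic: the gains from an intervention (added returns plus reduced costs) are weighed against its losses (added costs plus foregone returns). A minimal sketch, with all monetary figures hypothetical rather than taken from the study:

```python
def partial_budget(added_returns, reduced_costs, added_costs, foregone_returns):
    """Net benefit of an intervention: gains minus losses."""
    gains = added_returns + reduced_costs
    losses = added_costs + foregone_returns
    return gains - losses

# Hypothetical figures for a mineral-supplementation intervention (per herd, per year)
net = partial_budget(
    added_returns=500.0,    # extra milk revenue from a +5% yield
    reduced_costs=120.0,    # fewer inseminations due to a shorter service period
    added_costs=200.0,      # cost of the mineral mixture
    foregone_returns=0.0,
)
print(net)  # 420.0
```

A positive net benefit indicates the intervention pays for itself under the assumed figures.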

  14. Assessing Weather-Yield Relationships in Rice at Local Scale Using Data Mining Approaches.

    Science.gov (United States)

    Delerce, Sylvain; Dorado, Hugo; Grillon, Alexandre; Rebolledo, Maria Camila; Prager, Steven D; Patiño, Victor Hugo; Garcés Varón, Gabriel; Jiménez, Daniel

    2016-01-01

    Seasonal and inter-annual climate variability have become important issues for farmers, and climate change has been shown to increase them. Simultaneously farmers and agricultural organizations are increasingly collecting observational data about in situ crop performance. Agriculture thus needs new tools to cope with changing environmental conditions and to take advantage of these data. Data mining techniques make it possible to extract embedded knowledge associated with farmer experiences from these large observational datasets in order to identify best practices for adapting to climate variability. We introduce new approaches through a case study on irrigated and rainfed rice in Colombia. Preexisting observational datasets of commercial harvest records were combined with in situ daily weather series. Using Conditional Inference Forest and clustering techniques, we assessed the relationships between climatic factors and crop yield variability at the local scale for specific cultivars and growth stages. The analysis showed clear relationships in the various location-cultivar combinations, with climatic factors explaining 6 to 46% of spatiotemporal variability in yield, and with crop responses to weather being non-linear and cultivar-specific. Climatic factors affected cultivars differently during each stage of development. For instance, one cultivar was affected by high nighttime temperatures in the reproductive stage but responded positively to accumulated solar radiation during the ripening stage. Another was affected by high nighttime temperatures during both the vegetative and reproductive stages. Clustering of the weather patterns corresponding to individual cropping events revealed different groups of weather patterns for irrigated and rainfed systems with contrasting yield levels. Best-suited cultivars were identified for some weather patterns, making weather-site-specific recommendations possible. This study illustrates the potential of data mining for
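The workflow described pairs a tree-ensemble regression (to rank climatic factors) with clustering of per-event weather patterns. A minimal sketch of that workflow on synthetic data, using scikit-learn's RandomForestRegressor as a stand-in for the Conditional Inference Forest used in the study (an R implementation) and KMeans for the weather-pattern clustering; all variables and values are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical per-event climatic factors for one cultivar at one site:
# [nighttime temperature (reproductive stage), accumulated radiation (ripening)]
X = rng.normal(size=(200, 2))
# Toy yield response: hurt by warm nights, helped by radiation, plus noise
y = -0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=200)

# Random forest as a stand-in for the paper's Conditional Inference Forest
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(dict(zip(["night_temp", "radiation"], forest.feature_importances_)))

# Cluster cropping events by their weather pattern, as in the study
patterns = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for k in range(3):
    print(k, round(float(y[patterns == k].mean()), 2))  # mean yield per cluster
```

Comparing mean yields across weather clusters is the step that makes weather-site-specific cultivar recommendations possible.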

  15. Thermal conductivity of granular porous media: A pore scale modeling approach

    Directory of Open Access Journals (Sweden)

    R. Askari

    2015-09-01

    Full Text Available Pore scale modeling has been widely used in petrophysical studies to estimate macroscopic properties of porous media (e.g. porosity, permeability, and electrical resistivity) with respect to their microstructures. Although there is an extensive literature on applying the method to flow in porous media, there are fewer studies of its application to thermal conduction and the estimation of effective thermal conductivity, a salient parameter in many engineering surveys (e.g. geothermal resources and heavy oil recovery). By considering thermal contact resistance, we demonstrate the robustness of the method for predicting effective thermal conductivity. According to our results from simulations of Utah oil sand samples, modeling thermal contact resistance is pivotal for obtaining reliable estimates of effective thermal conductivity. Our estimated effective thermal conductivities agree better with the experimental data than several well-known experimental and analytical equations for calculating effective thermal conductivity. In addition, we reconstruct a porous medium for an Alberta oil sand sample. With increasing roughness, we observe the effect of thermal contact resistance in the decrease of effective thermal conductivity; the roughness effect becomes more noticeable at higher solid-to-fluid thermal conductivity ratios. Moreover, by considering thermal resistance in porous media with different grain sizes, we find that effective thermal conductivity increases with grain size, in reasonable accordance with experimental results. This demonstrates the usefulness of our modeling approach for further computational studies of heat transfer in porous media.
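Full pore-scale simulation is beyond a short example, but the role of thermal contact resistance can be illustrated with a one-dimensional series model: solid grain layers and fluid gaps in series, plus a contact resistance at each grain-grain contact. A sketch under those simplifying assumptions (all material values hypothetical, loosely inspired by quartz sand and water):

```python
def effective_conductivity_series(k_solid, k_fluid, phi, d_grain, r_contact, n_grains):
    """
    Effective thermal conductivity of a 1-D chain of grains (a deliberately
    simple stand-in for pore-scale simulation): solid layers and fluid gaps
    in series, plus a contact resistance r_contact [m^2*K/W] at each of the
    n_grains - 1 grain-grain contacts. phi is porosity, d_grain the grain
    diameter [m], conductivities in W/(m*K).
    """
    length = n_grains * d_grain / (1.0 - phi)   # total sample length
    solid_path = n_grains * d_grain             # total solid thickness
    fluid_path = length - solid_path            # total fluid thickness
    # Area-normalized thermal resistances add in series
    resistance = (solid_path / k_solid
                  + fluid_path / k_fluid
                  + (n_grains - 1) * r_contact)
    return length / resistance

# Hypothetical values: quartz-like solid, water-like fluid
k_no_contact = effective_conductivity_series(7.7, 0.6, 0.35, 2e-4, 0.0, 100)
k_with_contact = effective_conductivity_series(7.7, 0.6, 0.35, 2e-4, 5e-5, 100)
print(round(k_no_contact, 3), round(k_with_contact, 3))
# Adding contact resistance lowers k_eff, consistent with the trend reported above
```

Even this crude model reproduces the qualitative finding that neglecting contact resistance overestimates effective conductivity.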

  16. A national scale flood hazard mapping methodology: The case of Greece - Protection and adaptation policy approaches.

    Science.gov (United States)

    Kourgialas, Nektarios N; Karatzas, George P

    2017-12-01

    The present work introduces a national scale flood hazard assessment methodology using multi-criteria analysis and artificial neural network (ANN) techniques in a GIS environment. The proposed methodology was applied in Greece, where flash floods are a relatively frequent phenomenon that has become more intense over the last decades, causing significant damage in rural and urban sectors. In order to identify the areas most prone to flooding, seven factor maps (directly related to flood generation) were combined in a GIS environment. These factor maps are: a) Flow accumulation (F), b) Land use (L), c) Altitude (A), d) Slope (S), e) soil Erodibility (E), f) Rainfall intensity (R), and g) available water Capacity (C), giving the proposed method its name, "FLASERC". The flood hazard for each of these factors is classified into five categories: very low, low, moderate, high, and very high. The factors are combined and processed using the appropriate ANN algorithm tool. For the ANN training process, the spatial distribution of historical flooded points in Greece was combined with the five flood hazard categories of the aforementioned seven factor maps. In this way, the overall flood hazard map for Greece was determined. The final results are verified using additional historical flood events that have occurred in Greece over the last 100 years. In addition, an overview of flood protection measures and adaptation policy approaches is proposed for agricultural and urban areas located in very high flood hazard zones. Copyright © 2017 Elsevier B.V. All rights reserved.
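The factor-map combination step can be sketched with a small neural network: seven factor scores in, one of five hazard classes out. The data below are synthetic and the network is a generic scikit-learn MLP, not the authors' tool; the rule generating the toy labels is invented purely so the network has something to learn:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)

# Synthetic stand-in for the seven FLASERC factor maps (F, L, A, S, E, R, C):
# each cell carries a per-factor hazard score from 1 (very low) to 5 (very high)
X = rng.integers(1, 6, size=(500, 7))
# Toy target: overall hazard class driven by the mean factor score
y = np.clip(np.rint(X.mean(axis=1)), 1, 5).astype(int)

# Train an ANN to combine the factor maps into an overall hazard class
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)
print(ann.predict([[5, 5, 5, 5, 5, 5, 5]]))  # a uniformly high-scoring cell
```

In the real methodology the training labels come from historical flooded points rather than a synthetic rule, and the trained network is applied cell by cell to produce the national hazard map.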

  17. Psychometric properties of the Epworth Sleepiness Scale: A factor analysis and item-response theory approach.

    Science.gov (United States)

    Pilcher, June J; Switzer, Fred S; Munc, Alec; Donnelly, Janet; Jellen, Julia C; Lamm, Claus

    2018-04-01

    The purpose of this study is to examine the psychometric properties of the Epworth Sleepiness Scale (ESS) in two languages, German and English. Students from a university in Austria (N = 292; 55 males; mean age = 18.71 ± 1.71 years; 237 females; mean age = 18.24 ± 0.88 years) and a university in the US (N = 329; 128 males; mean age = 18.71 ± 0.88 years; 201 females; mean age = 21.59 ± 2.27 years) completed the ESS. An exploratory factor analysis was conducted to examine the dimensionality of the ESS. Item response theory (IRT) analyses were used to characterize response rates on the ESS items, and differential item functioning (DIF) analyses examined whether the items were interpreted differently between the two languages. The factor analyses suggest that the ESS measures two distinct sleepiness constructs, indicating that the ESS probes sleepiness in settings requiring active versus passive responding. The IRT analyses found that, overall, the ESS items perform well as a measure of sleepiness; however, Item 8, and to a lesser extent Item 6, was interpreted differently by respondents in comparison to the other items. In addition, the DIF analyses showed that responses in German and English were very similar, indicating only minor measurement differences between the two language versions of the ESS. These findings suggest that the ESS provides a reliable measure of propensity to sleepiness, albeit with a two-factor structure. Researchers and clinicians can use the German and English versions of the ESS but may wish to exclude Item 8 when calculating a total sleepiness score.
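The two-factor structure described can be illustrated on simulated data with an exploratory factor analysis; scikit-learn's FactorAnalysis with varimax rotation is used here as a generic stand-in for the authors' procedure, and the eight-item, two-construct structure is invented for the sketch:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 600

# Simulated responses to 8 ESS-style items (continuous proxies for 0-3 ratings):
# items 1-4 load on an "active situations" factor, items 5-8 on a "passive
# situations" factor -- a toy version of the two-construct result above
active = rng.normal(size=(n, 1))
passive = rng.normal(size=(n, 1))
signal = np.hstack([np.repeat(active, 4, axis=1) * 0.8,
                    np.repeat(passive, 4, axis=1) * 0.8])
items = signal + rng.normal(scale=0.5, size=(n, 8))

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
# Each row of the printed matrix: one item's loadings on the two factors
print(np.round(fa.components_.T, 2))
```

With this clean simulated structure, the first four items load dominantly on one recovered factor and the last four on the other, mirroring the active/passive split reported for the ESS.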

  18. Infrared fixed points and fixed lines in the top-bottom-tau sector in supersymmetric grand unification

    International Nuclear Information System (INIS)

    Schrempp, B.

    1994-10-01

    The two-loop 'top-down' renormalization group flow for the top, bottom and tau Yukawa couplings, from μ = M_GUT ≅ O(10^16 GeV) to μ ≅ m_t, is explored in the framework of supersymmetric grand unification; reproduction of the physical bottom and tau masses is required. Instead of following the recent trend of implementing exact Yukawa coupling unification, i) a search for infrared (IR) fixed lines and fixed points in the m_t^pole-tan β plane is performed, and ii) the extent to which these imply approximate Yukawa unification is determined. In the m_t^pole-tan β plane two IR fixed lines, intersecting in an IR fixed point, are located. The more attractive fixed line has a branch of almost constant top mass, m_t^pole ≅ 168-180 GeV (close to the experimental value), for the large interval 2.5 ≲ tan β ≲ 55, implementing approximate bottom-tau Yukawa unification at the scale M_GUT approximately. The less attractive fixed line as well as the fixed point at m_t^pole ≅ 170 GeV, tan β ≅ 55 implement approximate top-bottom Yukawa unification at all scales μ. The renormalization group flow is attracted towards the IR fixed point by way of the more attractive IR fixed line. The fixed point and lines are distinct from the much-quoted effective IR fixed point m_t^pole ≅ O(200 GeV) sin β. (orig.)
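The fixed-point structure discussed above is driven by the renormalization group equation for the top Yukawa coupling; at one loop in the MSSM it reads (a standard textbook result with GUT-normalized g_1, quoted here for context rather than taken from the record):

```latex
16\pi^2 \,\frac{\mathrm{d}y_t}{\mathrm{d}\ln\mu}
  \;=\; y_t\!\left(6\,y_t^2 + y_b^2
  - \tfrac{16}{3}\,g_3^2 - 3\,g_2^2 - \tfrac{13}{15}\,g_1^2\right)
```

The IR (quasi-)fixed point arises where the positive Yukawa terms balance the negative gauge contributions, so that a wide range of GUT-scale initial values is focused onto a narrow band of low-energy top masses.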

  19. Airframe Noise Prediction of a Full Aircraft in Model and Full Scale Using a Lattice Boltzmann Approach

    Science.gov (United States)

    Fares, Ehab; Duda, Benjamin; Khorrami, Mehdi R.

    2016-01-01

    Unsteady flow computations are presented for a Gulfstream aircraft model in landing configuration, i.e., flaps deflected 39° and main landing gear deployed. The simulations employ the lattice Boltzmann solver PowerFLOW™ to simultaneously capture the flow physics and acoustics in the near field. Sound propagation to the far field is obtained using a Ffowcs Williams and Hawkings acoustic analogy approach. Two geometry representations of the same aircraft are analyzed: an 18% scale, high-fidelity, semi-span model at wind tunnel Reynolds number and a full-scale, full-span model at half-flight Reynolds number. Previously published and newly generated model-scale results are presented; all full-scale data are disclosed here for the first time. Reynolds number and geometrical fidelity effects are carefully examined to discern aerodynamic and aeroacoustic trends, with a special focus on the scaling of surface pressure fluctuations and far-field noise. An additional study of the effects of geometrical detail on far-field noise is also documented. The present investigation reveals that, overall, the model-scale and full-scale aeroacoustic results compare rather well. Nevertheless, the study also highlights that finer geometrical details that are typically not captured at model scale can make a non-negligible contribution to the far-field noise signature.

  20. Route Optimization for Offloading Congested Meter Fixes

    Science.gov (United States)

    Xue, Min; Zelinski, Shannon

    2016-01-01

    The Optimized Route Capability (ORC) concept proposed by the FAA helps traffic managers identify and resolve arrival flight delays caused by bottlenecks that form at arrival meter fixes when an imbalance exists between arrival fixes and runways. ORC makes use of the prediction capability of existing automation tools, monitors traffic delays based on these predictions, and searches for the best reroutes upstream of the meter fixes, based on the predictions and estimated arrival schedules, when delays exceed a predefined threshold. Initial implementation and evaluation of the ORC concept considered only reroutes available at the time arrival congestion was first predicted. This work extends previous work by introducing an additional dimension in reroute options such that ORC can find the best time to reroute and overcome the 'first-come-first-reroute' phenomenon. To deal with the enlarged reroute solution space, a genetic algorithm was developed to solve this problem. Experiments were conducted using the same traffic scenario as in previous work, in which an arrival rush was created for one of the four arrival meter fixes at George Bush Intercontinental Houston Airport. Results showed the new approach further improved delay savings. The route changes suggested by the new approach were on average 30 minutes later than those from other approaches, and fewer reroutes were required. Fewer reroutes reduce operational complexity, and later reroutes help decision makers deal with uncertain situations.
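The enlarged search space (which reroute, and when) can be illustrated with a toy genetic algorithm over reroute times. Everything below (the cost function, horizon, and population sizes) is hypothetical and merely sketches the "when to reroute" dimension this work adds; it is not the authors' algorithm:

```python
import random

random.seed(0)

# Toy stand-in for the ORC reroute search: choose a reroute time (minutes
# after congestion is first predicted) for each of 5 flights.
N_FLIGHTS, HORIZON = 5, 60

def cost(times):
    # Hypothetical cost: congestion penalty if a reroute comes too late,
    # uncertainty/complexity penalty if it comes too early.
    # Any schedule with all times in [20, 40] has zero cost.
    return sum(max(0, t - 40) ** 2 + max(0, 20 - t) for t in times)

def mutate(ind):
    i = random.randrange(N_FLIGHTS)
    child = list(ind)
    child[i] = min(HORIZON, max(0, child[i] + random.randint(-10, 10)))
    return child

def crossover(a, b):
    cut = random.randrange(1, N_FLIGHTS)
    return a[:cut] + b[cut:]

pop = [[random.randrange(HORIZON + 1) for _ in range(N_FLIGHTS)]
       for _ in range(30)]
for _ in range(100):
    pop.sort(key=cost)
    elite = pop[:10]                      # keep the best schedules
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(20)]    # breed the rest from the elite

best = min(pop, key=cost)
print(best, cost(best))
```

The GA converges toward schedules that delay each reroute as long as the congestion penalty allows, echoing the finding that the best reroutes came later than first-come-first-reroute solutions.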

  1. Scaling solutions for dilaton quantum gravity

    Energy Technology Data Exchange (ETDEWEB)

    Henz, T.; Pawlowski, J.M., E-mail: j.pawlowski@thphys.uni-heidelberg.de; Wetterich, C.

    2017-06-10

    Scaling solutions for the effective action in dilaton quantum gravity are investigated within the functional renormalization group approach. We find numerical solutions that connect ultraviolet and infrared fixed points as the ratio between scalar field and renormalization scale k is varied. In the Einstein frame the quantum effective action corresponding to the scaling solutions becomes independent of k. The field equations derived from this effective action can be used directly for cosmology. Scale symmetry is spontaneously broken by a non-vanishing cosmological value of the scalar field. For the cosmology corresponding to our scaling solutions, inflation arises naturally. The effective cosmological constant becomes dynamical and vanishes asymptotically as time goes to infinity.
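The functional renormalization group approach referenced above is built on the exact flow equation for the effective average action Γ_k (the Wetterich equation; standard background, not specific to this paper):

```latex
\partial_t \Gamma_k
  \;=\; \tfrac{1}{2}\,\mathrm{Tr}\!\left[\left(\Gamma_k^{(2)} + R_k\right)^{-1}\partial_t R_k\right],
\qquad t \equiv \ln k
```

Here Γ_k^{(2)} is the second functional derivative of the effective action and R_k the infrared regulator. Scaling solutions are solutions that become k-independent when expressed in suitable dimensionless variables, such as the ratio of scalar field to renormalization scale used in this work.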

  2. A new compact fixed-point blackbody furnace

    International Nuclear Information System (INIS)

    Hiraka, K.; Oikawa, H.; Shimizu, T.; Kadoya, S.; Kobayashi, T.; Yamada, Y.; Ishii, J.

    2013-01-01

    More and more NMIs are realizing their primary scales themselves with fixed-point blackbodies as their reference standard. However, commercially available fixed-point blackbody furnaces of sufficient quality are not always easy to obtain. CHINO Corp. and NMIJ, AIST jointly developed a new compact fixed-point blackbody furnace. The new furnace has the following features: 1) improved temperature uniformity compared to previous products, enabling better plateau quality; 2) a hybrid fixed-point cell structure with internal insulation that improves robustness and thereby extends lifetime; 3) an easily ejectable and replaceable heater unit and fixed-point cell design, leading to reduced maintenance cost; and 4) interchangeability among multiple fixed points from the In point to the Cu point. The replaceable cell feature facilitates long-term maintenance of the scale through management of a group of fixed-point cells of the same type. The compact furnace is easily transportable and can therefore also serve as a traveling standard for disseminating the radiation temperature scale, and for maintaining the scale at secondary-level and industrial calibration laboratories. The furnace is expected to play a key role as the traveling standard in the anticipated APMP supplementary comparison of the radiation thermometry scale.

  3. Approaching Repetitive Short Circuit Tests on MW-Scale Power Modules by means of an Automatic Testing Setup

    DEFF Research Database (Denmark)

    Reigosa, Paula Diaz; Wang, Huai; Iannuzzo, Francesco

    2016-01-01

    An automatic testing system to perform repetitive short-circuit tests on megawatt-scale IGBT power modules is presented and described in this paper, pointing out the advantages and features of such a testing approach. The developed system is based on a non-destructive short-circuit tester, which has...

  4. Stability and Control of Large-Scale Dynamical Systems A Vector Dissipative Systems Approach

    CERN Document Server

    Haddad, Wassim M

    2011-01-01

    Modern complex large-scale dynamical systems exist in virtually every aspect of science and engineering, and are associated with a wide variety of physical, technological, environmental, and social phenomena, including aerospace, power, communications, and network systems, to name just a few. This book develops a general stability analysis and control design framework for nonlinear large-scale interconnected dynamical systems, and presents the most complete treatment on vector Lyapunov function methods, vector dissipativity theory, and decentralized control architectures. Large-scale dynami

  5. Scale Economies and Industry Agglomeration Externalities: A Dynamic Cost Function Approach

    OpenAIRE

    Donald S. Siegel; Catherine J. Morrison Paul

    1999-01-01

    Scale economies and agglomeration externalities are alleged to be important determinants of economic growth. To assess these effects, the authors outline and estimate a microfoundations model based on a dynamic cost function specification. This model provides for the separate identification of the impacts of externalities and cyclical utilization on short- and long-run scale economies and input substitution patterns. The authors find that scale economies are prevalent in U.S. manufacturing; co...

  6. An analytical and experimental approach for pressure distribution analysis of a particular lobe and plain bearing performance keeping in view of all impeding varying parameters associating with fixed lubrication SAE20W40

    International Nuclear Information System (INIS)

    Biswas, Nabarun; Chakraborti, Prasun; Belkar, Sanjay

    2016-01-01

    This paper presents both an analytical and an experimental approach to pressure analysis of a typical lobe and plain bearing for determining the bearing's effective performance, which is found to depend on several variables, viz. angular velocity (1200-1900 rpm), load (300-750 N) and pressure angle (0°-180°). This study has been carried out for better rectification and comparative prediction of lobe and plain bearing behavior in terms of pressure distribution under lubrication oil grade SAE20W40. Influencing parameters were varied in this setup to obtain the optimum parametric combination, considering all relevant practical issues. The experimentation was conducted following significant directives of the relevant literature in this sector. The analytical findings were compared with experimental results and found to match appreciably. Attention was then turned to the nature of pressure and load-carrying capacity at various fluctuating speeds and loads with a fixed lubricant, SAE20W40, for appropriate decision-making about its characteristic performance. The analytical data generated by MATLAB are compared with experimental data generated by JBTR

  7. An analytical and experimental approach for pressure distribution analysis of a particular lobe and plain bearing performance keeping in view of all impeding varying parameters associating with fixed lubrication SAE20W40

    Energy Technology Data Exchange (ETDEWEB)

    Biswas, Nabarun; Chakraborti, Prasun [National Institute of Technology Agartala, Jirania (India); Belkar, Sanjay [Pravara Rural Engineering CollegeLoni, Rahata taluka (India)

    2016-05-15

    This paper presents both an analytical and an experimental approach to pressure analysis of a typical lobe and plain bearing for determining the bearing's effective performance, which is found to depend on several variables, viz. angular velocity (1200-1900 rpm), load (300-750 N) and pressure angle (0°-180°). This study has been carried out for better rectification and comparative prediction of lobe and plain bearing behavior in terms of pressure distribution under lubrication oil grade SAE20W40. Influencing parameters were varied in this setup to obtain the optimum parametric combination, considering all relevant practical issues. The experimentation was conducted following significant directives of the relevant literature in this sector. The analytical findings were compared with experimental results and found to match appreciably. Attention was then turned to the nature of pressure and load-carrying capacity at various fluctuating speeds and loads with a fixed lubricant, SAE20W40, for appropriate decision-making about its characteristic performance. The analytical data generated by MATLAB are compared with experimental data generated by JBTR.

  8. A watershed-scale approach to tracing metal contamination in the environment

    Science.gov (United States)

    Church, Stanley E

    1996-01-01

    Introduction: Public policy during the 1800s encouraged mining in the western United States. Mining on Federal lands played an important role in the growing economy, creating national wealth from our abundant and diverse mineral resource base. The common industrial practice from the early days of mining through about 1970 in the U.S. was for mine operators to dispose of mine wastes and mill tailings in the nearest stream reach or lake. As a result of this contamination, many stream reaches below old mines, mills, and mining districts, and some major rivers and lakes, no longer support aquatic life. Riparian habitats within these affected watersheds have also been impacted. Often, the water from these affected stream reaches is not suitable for drinking, creating a public health hazard. The recent Department of Interior Abandoned Mine Lands (AML) Initiative is an effort on the part of the Federal Government to address the adverse environmental impact of these past mining practices on Federal lands. The AML Initiative has adopted a watershed approach to determine those sites that contribute the majority of the contaminants in the watershed. By remediating the largest sources of contamination within the watershed, the impact of metal contamination in the environment within the watershed as a whole is reduced, rather than focusing largely on those sites for which principal responsible parties can be found. The scope of the problem of metal contamination in the environment from past mining practices in the coterminous U.S. is addressed in a recent report by Ferderer (1996). Using the USGS 1:2,000,000-scale hydrologic drainage basin boundaries and the USGS Minerals Availability System (MAS) data base, he plotted the distribution of 48,000 past-producing metal mines on maps showing the boundaries of lands administered by the various Federal Land Management Agencies (FLMA). Census analysis of these data provided an initial screening tool for prioritization of

  9. Biodiversity conservation in Swedish forests: ways forward for a 30-year-old multi-scaled approach.

    Science.gov (United States)

    Gustafsson, Lena; Perhans, Karin

    2010-12-01

    A multi-scaled model for biodiversity conservation in forests was introduced in Sweden 30 years ago, which makes it a pioneer example of an integrated ecosystem approach. Trees are set aside for biodiversity purposes at multiple scale levels varying from individual trees to areas of thousands of hectares, with landowner responsibility at the lowest level and with increasing state involvement at higher levels. Ecological theory supports the multi-scaled approach, and retention efforts at every harvest occasion stimulate landowners' interest in conservation. We argue that the model has large advantages but that in a future with intensified forestry and global warming, development based on more progressive thinking is necessary to maintain and increase biodiversity. Suggestions for the future include joint planning for several forest owners, consideration of cost-effectiveness, accepting opportunistic work models, adjusting retention levels to stand and landscape composition, introduction of temporary reserves, creation of "receiver habitats" for species escaping climate change, and protection of young forests.

  10. Climatic and physiographic controls on catchment-scale nitrate loss at different spatial scales: insights from a top-down model development approach

    Science.gov (United States)

    Shafii, Mahyar; Basu, Nandita; Schiff, Sherry; Van Cappellen, Philippe

    2017-04-01

    The dramatic increase in nitrogen circulating in the biosphere due to anthropogenic activities has resulted in impairment of water quality in groundwater and surface water, causing eutrophication in coastal regions. Understanding the fate and transport of nitrogen from landscape to coastal areas requires exploring the drivers of nitrogen processes in both time and space, as well as the identification of appropriate flow pathways. Conceptual models can be used as diagnostic tools to provide insights into such controls. However, diagnostic evaluation of coupled hydrological-biogeochemical models is challenging. This research proposes a top-down methodology utilizing hydrochemical signatures to develop conceptual models for simulating the integrated streamflow and nitrate responses while taking into account dominant controls on nitrate variability (e.g., climate, soil water content, etc.). Our main objective is to seek the appropriate model complexity that sufficiently reproduces multiple hydrological and nitrate signatures. Having developed a suitable conceptual model for a given watershed, we employ it in sensitivity studies to demonstrate the dominant process controls that contribute to the nitrate response at the scales of interest. We apply the proposed approach to nitrate simulation in a range of small to large sub-watersheds in the Grand River Watershed (GRW) located in Ontario. Such a multi-basin modeling experiment will enable us to address process scaling and investigate the consequences of lumping processes in terms of models' predictive capability. The proposed methodology can be applied to the development of large-scale models that can support decision-making on nutrient management at the regional scale.
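A top-down conceptual model of the kind described can start as simply as a linear reservoir for streamflow coupled to a first-order nitrate export store, with complexity added only when signatures demand it. A minimal sketch; the structure, parameters, and forcing series below are hypothetical, not the authors' model:

```python
# Minimal lumped conceptual streamflow-nitrate model of the kind used as a
# starting point in top-down model development (all values hypothetical)
def simulate(precip, k_q=0.2, k_n=0.01, n_store0=100.0, s0=10.0):
    """Linear-reservoir runoff coupled to a first-order nitrate export store."""
    storage, n_store = s0, n_store0
    flows, loads = [], []
    for p in precip:
        storage += p
        q = k_q * storage          # streamflow: linear outflow from storage
        storage -= q
        load = k_n * n_store * q   # nitrate load scales with flow and store
        n_store -= load
        flows.append(q)
        loads.append(load)
    return flows, loads

flows, loads = simulate([5, 0, 10, 2, 0, 0, 8])
print([round(q, 2) for q in flows])
print([round(l, 2) for l in loads])
```

Signatures computed from the simulated flow and load series (e.g., flow-duration or concentration-discharge behavior) would then be compared against observations to decide whether extra stores or pathways are warranted.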

  11. Improved regional-scale Brazilian cropping systems' mapping based on a semi-automatic object-based clustering approach

    Science.gov (United States)

    Bellón, Beatriz; Bégué, Agnès; Lo Seen, Danny; Lebourgeois, Valentine; Evangelista, Balbino Antônio; Simões, Margareth; Demonte Ferraz, Rodrigo Peçanha

    2018-06-01

    Cropping systems' maps at fine scale over large areas provide key information for further agricultural production and environmental impact assessments, and thus represent a valuable tool for effective land-use planning. There is, therefore, a growing interest in mapping cropping systems in an operational manner over large areas, and remote sensing approaches based on vegetation index time series analysis have proven to be an efficient tool. However, supervised pixel-based approaches are commonly adopted, requiring resource-consuming field campaigns to gather training data. In this paper, we present a new object-based unsupervised classification approach tested on an annual MODIS 16-day composite Normalized Difference Vegetation Index time series and a Landsat 8 mosaic of the State of Tocantins, Brazil, for the 2014-2015 growing season. Two variants of the approach are compared: a hyperclustering approach, and a landscape-clustering approach involving a prior stratification of the study area into landscape units on which the clustering is then performed. The main cropping systems of Tocantins, characterized by crop types and cropping patterns, were efficiently mapped with the landscape-clustering approach. Results show that stratification prior to clustering significantly improves the classification accuracies for underrepresented and sparsely distributed cropping systems. This study illustrates the potential of unsupervised classification for large area cropping systems' mapping and contributes to the development of generic tools for supporting large-scale agricultural monitoring across regions.
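The two clustering variants can be sketched with synthetic NDVI time series: cluster objects within each landscape unit separately (landscape-clustering) or over the whole area at once (hyperclustering). The profiles, units, and cluster counts below are invented for illustration, with scikit-learn's KMeans standing in for whatever clustering algorithm the authors used:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)

# Synthetic object-level NDVI time series: 23 MODIS-like 16-day composites
# per growing season, for two hypothetical landscape units
def ndvi_profiles(n_objects, peak_time):
    t = np.arange(23)
    base = 0.2 + 0.5 * np.exp(-0.5 * ((t - peak_time) / 3.0) ** 2)
    return base + rng.normal(scale=0.03, size=(n_objects, 23))

unit_a = ndvi_profiles(100, peak_time=8)    # e.g., earlier-season cropping pattern
unit_b = ndvi_profiles(100, peak_time=15)   # e.g., later-season cropping pattern

# Landscape-clustering variant: stratify first, then cluster within each unit
labels_a = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(unit_a)
labels_b = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(unit_b)

# Hyperclustering variant: one clustering over all objects in the study area
all_objects = np.vstack([unit_a, unit_b])
labels_all = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(all_objects)
print(len(set(labels_all)))
```

Stratifying first means rare cropping patterns only compete for clusters against objects from the same landscape unit, which is the mechanism behind the accuracy gain reported for underrepresented systems.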

  12. Prospective and participatory integrated assessment of agricultural systems from farm to regional scales: Comparison of three modeling approaches.

    Science.gov (United States)

    Delmotte, Sylvestre; Lopez-Ridaura, Santiago; Barbier, Jean-Marc; Wery, Jacques

    2013-11-15

    Evaluating the impacts of the development of alternative agricultural systems, such as organic or low-input cropping systems, in the context of an agricultural region requires the use of specific tools and methodologies. They should allow a prospective (using scenarios), multi-scale (taking into account the field, farm and regional level), integrated (notably multicriteria) and participatory assessment, abbreviated PIAAS (for Participatory Integrated Assessment of Agricultural System). In this paper, we compare the possible contribution to PIAAS of three modeling approaches i.e. Bio-Economic Modeling (BEM), Agent-Based Modeling (ABM) and statistical Land-Use/Land Cover Change (LUCC) models. After a presentation of each approach, we analyze their advantages and drawbacks, and identify their possible complementarities for PIAAS. Statistical LUCC modeling is a suitable approach for multi-scale analysis of past changes and can be used to start discussion about the futures with stakeholders. BEM and ABM approaches have complementary features for scenarios assessment at different scales. While ABM has been widely used for participatory assessment, BEM has been rarely used satisfactorily in a participatory manner. On the basis of these results, we propose to combine these three approaches in a framework targeted to PIAAS. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Measurement Invariance of the Passion Scale across Three Samples: An ESEM Approach

    Science.gov (United States)

    Schellenberg, Benjamin J. I.; Gunnell, Katie E.; Mosewich, Amber D.; Bailis, Daniel S.

    2014-01-01

    Sport and exercise psychology researchers rely on the Passion Scale to assess levels of harmonious and obsessive passion for many different types of activities (Vallerand, 2010). However, this practice assumes that items from the Passion Scale are interpreted with the same meaning across all activity types. Using exploratory structural equation…

  14. A pragmatic approach to modelling soil and water conservation measures with a catchment scale erosion model.

    NARCIS (Netherlands)

    Hessel, R.; Tenge, A.J.M.

    2008-01-01

    To reduce soil erosion, soil and water conservation (SWC) methods are often used. However, no method exists to model beforehand how implementing such measures will affect erosion at catchment scale. A method was developed to simulate the effects of SWC measures with catchment scale erosion models.

  15. Multi-scale modeling with cellular automata: The complex automata approach

    NARCIS (Netherlands)

    Hoekstra, A.G.; Falcone, J.-L.; Caiazzo, A.; Chopard, B.

    2008-01-01

    Cellular Automata are commonly used to describe complex natural phenomena. In many cases it is required to capture the multi-scale nature of these phenomena. A single Cellular Automata model may not be able to efficiently simulate a wide range of spatial and temporal scales. It is our goal to

  16. The stokes number approach to support scale-up and technology transfer of a mixing process

    NARCIS (Netherlands)

    Willemsz, T.A.; Hooijmaijers, R.; Rubingh, C.M.; Frijlink, H.W.; Vromans, H.; Voort Maarschalk, K. van der

    2012-01-01

    Transferring processes between different scales and types of mixers is a common operation in industry. Challenges within this operation include the existence of considerable differences in blending conditions between mixer scales and types. Obtaining the correct blending conditions is crucial for

  17. The Stokes number approach to support scale-up and technology transfer of a mixing process

    NARCIS (Netherlands)

    Willemsz, Tofan A; Hooijmaijers, Ricardo; Rubingh, Carina M; Frijlink, Henderik W; Vromans, Herman; van der Voort Maarschalk, Kees

    Transferring processes between different scales and types of mixers is a common operation in industry. Challenges within this operation include the existence of considerable differences in blending conditions between mixer scales and types. Obtaining the correct blending conditions is crucial for

  18. A study of flame spread in engineered cardboard fuelbeds: Part II: Scaling law approach

    Science.gov (United States)

    Brittany A. Adam; Nelson K. Akafuah; Mark Finney; Jason Forthofer; Kozo Saito

    2013-01-01

    In this second part of a two-part exploration of dynamic behavior observed in wildland fires, the time scales differentiating convective and radiative heat transfer are further explored. Scaling laws are considered for the two types of heat transfer: radiation-driven fire spread and convection-driven fire spread, both of which can occur during wildland fires. A new...

  19. Optimization Approach for Multi-scale Segmentation of Remotely Sensed Imagery under k-means Clustering Guidance

    Directory of Open Access Journals (Sweden)

    WANG Huixian

    2015-05-01

    Full Text Available In order to adapt segmentation to land-cover objects at different scales, an optimized multi-scale segmentation approach guided by k-means clustering is proposed. First, small-scale segmentation and k-means clustering are used to process the original images; then the k-means clustering result guides the object-merging procedure, in which the Otsu threshold method automatically selects the impact factor of the k-means clustering; finally, segmentation results applicable to objects of different scales are obtained. The FNEA method is taken as an example and segmentation experiments are performed on a simulated image and a real remote sensing image from the GeoEye-1 satellite; qualitative and quantitative evaluation demonstrates that the proposed method can obtain high-quality segmentation results.
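    The Otsu method named above picks a threshold automatically by maximising between-class variance over a histogram. A minimal stand-alone sketch (illustrative only; the paper applies it to the merge impact factor, and the bin count here is an assumption):

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Otsu's method: return the histogram-bin midpoint that maximises
    the between-class variance of a two-class split."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist.astype(float) / hist.sum()
    mids = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)              # class-0 probability up to each bin
    mu = np.cumsum(p * mids)       # cumulative mean
    mu_t = mu[-1]
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros_like(w0)    # between-class variance per candidate
    sigma_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return mids[sigma_b.argmax()]

# Bimodal sample: the threshold should land between the two modes.
rng = np.random.default_rng(0)
vals = np.concatenate([rng.normal(0.2, 0.05, 500), rng.normal(0.8, 0.05, 500)])
thr = otsu_threshold(vals)
```

The same routine works on any 1-D criterion, which is what makes it convenient for selecting a merge factor without operator input.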

  20. The computation of fixed points and applications

    CERN Document Server

    Todd, Michael J

    1976-01-01

    Fixed-point algorithms have diverse applications in economics, optimization, game theory and the numerical solution of boundary-value problems. Since Scarf's pioneering work [56,57] on obtaining approximate fixed points of continuous mappings, a great deal of research has been done in extending the applicability and improving the efficiency of fixed-point methods. Much of this work is available only in research papers, although Scarf's book [58] gives a remarkably clear exposition of the power of fixed-point methods. However, the algorithms described by Scarf have been superseded by the more sophisticated restart and homotopy techniques of Merrill [~8,~9] and Eaves and Saigal [1~,16]. To understand the more efficient algorithms one must become familiar with the notions of triangulation and simplicial approximation, whereas Scarf stresses the concept of primitive set. These notes are intended to introduce to a wider audience the most recent fixed-point methods and their applications. Our approach is therefore ...

  1. Fixed point theory in metric type spaces

    CERN Document Server

    Agarwal, Ravi P; O’Regan, Donal; Roldán-López-de-Hierro, Antonio Francisco

    2015-01-01

    Written by a team of leading experts in the field, this volume presents a self-contained account of the theory, techniques and results in metric type spaces (in particular in G-metric spaces); that is, the text approaches this important area of fixed point analysis beginning from the basic ideas of metric space topology. The text is structured so that it leads the reader from preliminaries and historical notes on metric spaces (in particular G-metric spaces) and on mappings, to Banach type contraction theorems in metric type spaces, fixed point theory in partially ordered G-metric spaces, fixed point theory for expansive mappings in metric type spaces, generalizations, present results and techniques in a very general abstract setting and framework. Fixed point theory is one of the major research areas in nonlinear analysis. This is partly due to the fact that in many real world problems fixed point theory is the basic mathematical tool used to establish the existence of solutions to problems which arise natur...

  2. Experience of Integrated Safeguards Approach for Large-scale Hot Cell Laboratory

    International Nuclear Information System (INIS)

    Miyaji, N.; Kawakami, Y.; Koizumi, A.; Otsuji, A.; Sasaki, K.

    2010-01-01

    The Japan Atomic Energy Agency (JAEA) has been operating a large-scale hot cell laboratory, the Fuels Monitoring Facility (FMF), located near the experimental fast reactor Joyo at the Oarai Research and Development Center (JNC-2 site). The FMF conducts post irradiation examinations (PIE) of fuel assemblies irradiated in Joyo. The assemblies are disassembled and non-destructive examinations, such as X-ray computed tomography tests, are carried out. Some of the fuel pins are cut into specimens and destructive examinations, such as ceramography and X-ray micro analyses, are performed. Following PIE, the tested material, in the form of a pin or segments, is shipped back to a Joyo spent fuel pond. In some cases, after reassembly of the examined irradiated fuel pins is completed, the fuel assemblies are shipped back to Joyo for further irradiation. For the IAEA to apply the integrated safeguards approach (ISA) to the FMF, a new verification system for the material shipping and receiving process between Joyo and the FMF has been established by the IAEA under technical collaboration among the Japan Safeguard Office (JSGO) of MEXT, the Nuclear Material Control Center (NMCC) and the JAEA. The main concept of receipt/shipment verification under the ISA for the JNC-2 site is as follows: under the ISA, the FMF is treated as a Joyo-associated facility in terms of its safeguards system because it deals with the same spent fuels. Verification of the material shipping and receiving process between Joyo and the FMF can only be applied to the declared transport routes and transport casks. The verification of the nuclear material contained in the cask is performed with the method of gross defect at the time of short-notice random interim inspections (RIIs) by measuring the surface neutron dose rate of the cask, filled with water to reduce radiation. The JAEA performed a series of preliminary tests with the IAEA, the JSGO and the NMCC, and confirmed from the standpoint of the operator that this

  3. Multidisciplinary approach and multi-scale elemental analysis and separation chemistry

    International Nuclear Information System (INIS)

    Mariet, Clarisse

    2014-01-01

    The development of methods for the analysis of trace elements is an important component of my research activities, whether based on radiometric measurement or mass spectrometric detection. Many studies raise the question of the chemical signature of a sample or a process: eruptive behavior of a volcano, pollution indicators, ion exchange in vesicle vectors of active principles,... Each time, highly sensitive, accurate and multi-element analytical procedures, as well as the development of specific protocols, were needed. Neutron activation analysis has often been used as a reference procedure and made it possible to validate the chemical leaching and the measurement by ICP-MS. Analysis of radioactive samples requires skills in trace analysis but also in separation chemistry. Two separation methods occupy an important place in the separation chemistry of radionuclides: chromatography and liquid-liquid extraction. The study of lanthanide(III) extraction by octyl(phenyl)-N,N-diisobutylcarbamoylmethylphosphine oxide (CMPO) and a calixarene-CMPO made it possible to better understand and quantify the influence of operating conditions on their extraction performance and selectivity. The high concentration of salts in aqueous solutions required reasoning in terms of thermodynamic activities, relying on a comprehensive approach to quantifying deviations from ideality. In order to reduce the amount of waste generated and costs, alternatives to the hydrometallurgical extraction processes were considered using ionic liquids at low temperatures as alternative solvents in biphasic processes. In the same logic of effluent reduction, miniaturization of liquid-liquid extraction is also studied, so as to exploit the characteristics of the microscopic scale (very high specific surface area, short diffusion distances). The miniaturization of chromatographic separations pursues the same gains in waste and reagent volumes. The miniaturization of the separation Uranium

  4. Fixing the Leak

    DEFF Research Database (Denmark)

    Dlugosz, Stephan; Stephan, Gesine; Wilke, Ralf A.

    2014-01-01

    We present first empirical results on the effects of a large scale reduction in unemployment benefit entitlement lengths on unemployment inflows in Germany. We show that this highly disputed element of the Hartz-Reforms in 2006 induced a rush of workers and firms to take advantage of the previous...

  5. General Biology and Current Management Approaches of Soft Scale Pests (Hemiptera: Coccidae).

    Science.gov (United States)

    Camacho, Ernesto Robayo; Chong, Juang-Horng

    We summarize the economic importance, biology, and management of soft scales, focusing on pests of agricultural, horticultural, and silvicultural crops in outdoor production systems and urban landscapes. We also provide summaries on voltinism, crawler emergence timing, and predictive models for crawler emergence to assist in developing soft scale management programs. Phloem-feeding soft scale pests cause direct (e.g., injuries to plant tissues and removal of nutrients) and indirect damage (e.g., reduction in photosynthesis and aesthetic value by honeydew and sooty mold). Variations in life cycle, reproduction, fecundity, and behavior exist among congenerics due to host, environmental, climatic, and geographical variations. Sampling of soft scale pests involves sighting the insects or their damage, and assessing their abundance. Crawlers of most univoltine species emerge in the spring and the summer. Degree-day models and plant phenological indicators help determine the initiation of sampling and treatment against crawlers (the life stage most vulnerable to contact insecticides). The efficacy of cultural management tactics, such as fertilization, pruning, and irrigation, in reducing soft scale abundance is poorly documented. A large number of parasitoids and predators attack soft scale populations in the field; therefore, natural enemy conservation by using selective insecticides is important. Systemic insecticides provide greater flexibility in application method and timing, and have longer residual longevity than contact insecticides. Application timing of contact insecticides that coincides with crawler emergence is most effective in reducing soft scale abundance.
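    The degree-day models mentioned in the review accumulate heat units above a developmental threshold until a species-specific target is reached, which times treatment to crawler emergence. A minimal sketch using the simple averaging method; the base temperature and degree-day target below are hypothetical placeholders, not values from this review:

```python
def crawler_emergence_day(daily_min, daily_max, base_temp, dd_target):
    """Return the first day index (0-based) on which accumulated growing
    degree-days (simple averaging method) reach the target, or None."""
    total = 0.0
    for day, (tmin, tmax) in enumerate(zip(daily_min, daily_max)):
        # daily degree-days: mean temperature above the base, floored at 0
        total += max(0.0, (tmin + tmax) / 2.0 - base_temp)
        if total >= dd_target:
            return day
    return None

# Hypothetical example: constant 10-20 degC days, base 10 degC, 50 DD target.
day = crawler_emergence_day([10.0] * 30, [20.0] * 30,
                            base_temp=10.0, dd_target=50.0)
```

In practice the daily series would come from a local weather station, and the base temperature and target would be taken from published models for the scale species in question.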

  6. Evolution of feeding specialization in Tanganyikan scale-eating cichlids: a molecular phylogenetic approach

    Directory of Open Access Journals (Sweden)

    Nishida Mutsumi

    2007-10-01

    Full Text Available Abstract Background Cichlid fishes in Lake Tanganyika exhibit remarkable diversity in their feeding habits. Among them, seven species in the genus Perissodus are known for their unique feeding habit of scale eating with specialized feeding morphology and behaviour. Although the origin of the scale-eating habit has long been questioned, its evolutionary process is still unknown. In the present study, we conducted interspecific phylogenetic analyses for all nine known species in the tribe Perissodini (seven Perissodus and two Haplotaxodon species) using amplified fragment length polymorphism (AFLP) analyses of the nuclear DNA. On the basis of the resultant phylogenetic frameworks, the evolution of their feeding habits was traced using data from analyses of stomach contents, habitat depths, and observations of oral jaw tooth morphology. Results AFLP analyses resolved the phylogenetic relationships of the Perissodini, strongly supporting monophyly for each species. The character reconstruction of feeding ecology based on the AFLP tree suggested that scale eating evolved from general carnivorous feeding to highly specialized scale eating. Furthermore, scale eating is suggested to have evolved in deepwater habitats in the lake. Oral jaw tooth shape was also estimated to have diverged in step with specialization for scale eating. Conclusion The present evolutionary analyses of feeding ecology and morphology based on the obtained phylogenetic tree demonstrate for the first time the evolutionary process leading from generalised to highly specialized scale eating, with diversification in feeding morphology and behaviour among species.

  7. A GIS-based approach to prevent contamination of groundwater at regional scale

    Science.gov (United States)

    Balderacchi, M.; Vischetti, C.; di Guardo, A.; Trevisan, M.

    2009-04-01

    Sustainable development is a fundamental objective of the European Union. Since 1991, numerical models have been used to assess the environmental fate of pesticides (Directive 91/414/EC). Since then, new approaches to assess pesticide contamination have been developed. This is an ongoing process, with approaches getting increasingly close to reality. Currently, there is a new challenge to integrate the most advanced and cost-effective monitoring strategies with simulation models so that reliable indicators of unsaturated flow and transport can be suitably mapped and coupled with other indicators related to productivity and sustainability. The most relevant role of GIS in the analysis of pesticide fate in soil is its ability to process input data together with the results of distributed, model-based simulations of pesticide transport. FitoMarche is a GIS-based software tool that estimates pesticide movement in the unsaturated zone using MACRO 5, and it is able to simulate complex and real crop rotations at the regional scale. Crop rotation involves the sequential production of different plant species on the same land; every crop is characterized by different agricultural practices that involve the use of different pesticides at different doses. FitoMarche extracts MACRO input data from a series of geographic data sets (shapefiles) and an internal database, writes input files for MACRO, executes the simulation and extracts solute and water fluxes from MACRO output files. The study has been performed in the Marche region, located in central Italy along the Adriatic coast. Soil, climate and land-use shapefiles were provided by public authorities; crop rotation schemes were estimated at municipality level from the 5th agricultural census database of ISTAT (the national statistics institute), with agricultural practices following local customs. Two herbicides have been tested: "A" is employed on maize crop, and "B" on maize, sunflower and sugarbeet. In the

  8. A REGION-BASED MULTI-SCALE APPROACH FOR OBJECT-BASED IMAGE ANALYSIS

    Directory of Open Access Journals (Sweden)

    T. Kavzoglu

    2016-06-01

    Full Text Available Within the last two decades, object-based image analysis (OBIA), considering objects (i.e. groups of pixels) instead of pixels, has gained popularity and attracted increasing interest. The most important stage of OBIA is image segmentation, which groups spectrally similar adjacent pixels considering not only spectral features but also spatial and textural features. Although there are several parameters (scale, shape, compactness and band weights) to be set by the analyst, the scale parameter stands out as the most important parameter in the segmentation process. Estimating the optimal scale parameter is crucially important to increase the classification accuracy, which depends on image resolution, image object size and the characteristics of the study area. In this study, two scale-selection strategies were implemented in the image segmentation process using a pan-sharpened QuickBird-2 image. The first strategy estimates optimal scale parameters for eight sub-regions. For this purpose, the local variance/rate of change (LV-RoC) graphs produced by the ESP-2 tool were analysed to determine fine, moderate and coarse scales for each region. In the second strategy, the image was segmented using the three candidate scale values (fine, moderate, coarse) determined from the LV-RoC graph calculated for the whole image. The nearest-neighbour classifier was applied in all segmentation experiments and an equal number of pixels was randomly selected to calculate accuracy metrics (overall accuracy and kappa coefficient). Comparison of region-based and image-based segmentation was carried out on the classified images, and it was found that region-based multi-scale OBIA produced significantly more accurate results than image-based single-scale OBIA. The difference in classification accuracy reached 10% in terms of overall accuracy.

  9. An advanced online monitoring approach to study the scaling behavior in direct contact membrane distillation

    KAUST Repository

    Lee, Jung Gil; Jang, Yongsun; Fortunato, Luca; Jeong, Sanghyun; Lee, Sangho; Leiknes, TorOve; Ghaffour, NorEddine

    2017-01-01

    scaling was performed by using various analytical methods, especially an in-situ monitoring technique using optical coherence tomography (OCT) to observe the cross-sectional view on the membrane surface during operation. Different concentrations of Ca

  10. Understanding protected area resilience: a multi-scale, social-ecological approach

    Science.gov (United States)

    Cumming, Graeme S.; Allen, Craig R.; Ban, Natalie C.; Biggs, Duan; Biggs, Harry C.; Cumming, David H.M; De Vos, Alta; Epstein, Graham; Etienne, Michel; Maciejewski, Kristine; Mathevet, Raphael; Moore, Christine; Nenadovic, Mateja; Schoon, Michael

    2015-01-01

    Protected areas (PAs) remain central to the conservation of biodiversity. Classical PAs were conceived as areas that would be set aside to maintain a natural state with minimal human influence. However, global environmental change and growing cross-scale anthropogenic influences mean that PAs can no longer be thought of as ecological islands that function independently of the broader social-ecological system in which they are located. For PAs to be resilient (and to contribute to broader social-ecological resilience), they must be able to adapt to changing social and ecological conditions over time in a way that supports the long-term persistence of populations, communities, and ecosystems of conservation concern. We extend Ostrom's social-ecological systems framework to consider the long-term persistence of PAs, as a form of land use embedded in social-ecological systems, with important cross-scale feedbacks. Most notably, we highlight the cross-scale influences and feedbacks on PAs that exist from the local to the global scale, contextualizing PAs within multi-scale social-ecological functional landscapes. Such functional landscapes are integral to understand and manage individual PAs for long-term sustainability. We illustrate our conceptual contribution with three case studies that highlight cross-scale feedbacks and social-ecological interactions in the functioning of PAs and in relation to regional resilience. Our analysis suggests that while ecological, economic, and social processes are often directly relevant to PAs at finer scales, at broader scales, the dominant processes that shape and alter PA resilience are primarily social and economic.

  11. Solid-state electrochemistry on the nanometer and atomic scales: the scanning probe microscopy approach

    Science.gov (United States)

    Strelcov, Evgheni; Yang, Sang Mo; Jesse, Stephen; Balke, Nina; Vasudevan, Rama K.; Kalinin, Sergei V.

    2016-01-01

    Energy technologies of the 21st century require understanding and precise control over ion transport and electrochemistry at all length scales – from single atoms to macroscopic devices. This short review provides a summary of recent works dedicated to methods of advanced scanning probe microscopy for probing electrochemical transformations in solids at the meso-, nano- and atomic scales. Discussion presents advantages and limitations of several techniques and a wealth of examples highlighting peculiarities of nanoscale electrochemistry. PMID:27146961

  12. A simplified, data-constrained approach to estimate the permafrost carbon-climate feedback: The PCN Incubation-Panarctic Thermal (PInc-PanTher) Scaling Approach

    Science.gov (United States)

    Koven, C. D.; Schuur, E.; Schaedel, C.; Bohn, T. J.; Burke, E.; Chen, G.; Chen, X.; Ciais, P.; Grosse, G.; Harden, J. W.; Hayes, D. J.; Hugelius, G.; Jafarov, E. E.; Krinner, G.; Kuhry, P.; Lawrence, D. M.; MacDougall, A.; Marchenko, S. S.; McGuire, A. D.; Natali, S.; Nicolsky, D.; Olefeldt, D.; Peng, S.; Romanovsky, V. E.; Schaefer, K. M.; Strauss, J.; Treat, C. C.; Turetsky, M. R.

    2015-12-01

    We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation-Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a 3-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100.
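    The decomposition element described above can be sketched as first-order decay of three carbon pools that is switched off while the soil is frozen. The pool sizes, base rates, Q10 value and 5 degC reference temperature below are illustrative assumptions, not the fitted PInc-PanTher parameters:

```python
import math

def thawed_carbon_loss(stocks, base_rates, q10, soil_temp_c, years):
    """Annual first-order decay of (fast, slow, passive) C pools.
    No decomposition while frozen; thawed rates scale with a Q10 factor
    referenced to 5 degC. Returns (total C lost, remaining pool sizes)."""
    remaining = list(stocks)
    lost = 0.0
    for _ in range(years):
        if soil_temp_c <= 0.0:          # frozen: decomposition switched off
            continue
        factor = q10 ** ((soil_temp_c - 5.0) / 10.0)
        for i, k in enumerate(base_rates):
            dc = remaining[i] * (1.0 - math.exp(-k * factor))
            remaining[i] -= dc
            lost += dc
    return lost, remaining

# Illustrative pools (kg C m^-2) and base decay rates (yr^-1).
lost, left = thawed_carbon_loss([2.0, 20.0, 30.0], [0.2, 0.02, 0.002],
                                q10=2.5, soil_temp_c=4.0, years=90)
```

The actual approach drives soil temperature from ecosystem-model thermal modules rather than holding it constant, and fits the pool parameters per soil-horizon type from incubation data.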

  13. An Autonomous Sensor Tasking Approach for Large Scale Space Object Cataloging

    Science.gov (United States)

    Linares, R.; Furfaro, R.

    The field of Space Situational Awareness (SSA) has progressed over the last few decades with new sensors coming online, the development of new approaches for making observations, and new algorithms for processing them. Although there has been success in the development of new approaches, a missing piece is the translation of SSA goals to sensors and resource allocation, otherwise known as the Sensor Management Problem (SMP). This work solves the SMP using an artificial intelligence approach called Deep Reinforcement Learning (DRL). Stable methods for training DRL approaches based on neural networks exist, but most of these approaches are not suitable for high dimensional systems. The Asynchronous Advantage Actor-Critic (A3C) method is a recently developed and effective approach for high dimensional systems, and this work leverages these results and applies this approach to decision making in SSA. The decision space for SSA problems can be high dimensional, even for tasking of a single telescope. Since the number of space objects (SOs) is relatively high, each sensor will have a large number of possible actions at a given time. Therefore, efficient DRL approaches are required when solving the SMP for SSA. This work develops an A3C-based method for DRL applied to SSA sensor tasking. One of the key benefits of DRL approaches is the ability to handle high dimensional data. For example, DRL methods have been applied to image processing for autonomous driving, where a 256x256 RGB image has 196,608 parameters (256*256*3 = 196,608), which is very high dimensional; deep learning approaches routinely take such images as inputs. Therefore, when applied to the whole catalog, the DRL approach offers the ability to solve this high dimensional problem. This work has the potential to, for the first time, solve the non-myopic sensor tasking problem for the whole SO catalog (over 22,000 objects), providing a truly revolutionary result.

  14. Fast and accurate approaches for large-scale, automated mapping of food diaries on food composition tables

    DEFF Research Database (Denmark)

    Lamarine, Marc; Hager, Jörg; Saris, Wim H M

    2018-01-01

    the EuroFIR resource. Two approaches were tested: the first was based solely on food name similarity (fuzzy matching). The second used a machine learning approach (C5.0 classifier) combining both fuzzy matching and food energy. We tested mapping food items using their original names and also an English...... not lead to any improvements compared to the fuzzy matching. However, it could increase substantially the recall rate for food items without any clear equivalent in the FCTs (+7 and +20% when mapping items using their original or English-translated names). Our approaches have been implemented as R packages...... and are freely available from GitHub. Conclusion: This study is the first to provide automated approaches for large-scale food item mapping onto FCTs. We demonstrate that both high precision and recall can be achieved. Our solutions can be used with any FCT and do not require any programming background...
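    The fuzzy-matching element of the first approach can be illustrated with Python's standard-library difflib. This is a stand-in sketch only; the study's actual matcher, the food composition table (FCT) names and the 0.6 similarity threshold here are assumptions:

```python
from difflib import SequenceMatcher

def best_fct_match(diary_item, fct_names, min_ratio=0.6):
    """Map a food-diary item to the most similar FCT entry by string
    similarity; return None when nothing is close enough."""
    scored = [(SequenceMatcher(None, diary_item.lower(), name.lower()).ratio(),
               name)
              for name in fct_names]
    ratio, name = max(scored)
    return name if ratio >= min_ratio else None

# Hypothetical FCT entries.
fct = ["Bread, wheat, white", "Milk, whole", "Apple, raw", "Cheese, cheddar"]
match = best_fct_match("milk whole", fct)
```

Returning None for low-similarity items mirrors the study's finding that food energy or classifier features are needed for items without any clear FCT equivalent.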

  15. Development and Initial Validation of the Need Satisfaction and Need Support at Work Scales: A Validity-Focused Approach

    Directory of Open Access Journals (Sweden)

    Susanne Tafvelin

    2018-01-01

    Full Text Available Although the relevance of employee need satisfaction and manager need support have been examined, the integration of self-determination theory (SDT) into work and organizational psychology has been hampered by the lack of validated measures. The purpose of the current study was to develop and validate measures of employees' perception of need satisfaction (NSa-WS) and need support (NSu-WS) at work that were grounded in SDT. We used three Swedish samples (total N = 1,430) to develop and validate our scales. We used a confirmatory approach including expert panels to assess item content relevance, confirmatory factor analysis for factorial validity, and associations with theoretically warranted outcomes to assess criterion-related validity. Scale reliability was also assessed. We found evidence of content, factorial, and criterion-related validity of our two scales of need satisfaction and need support at work. Further, the scales demonstrated high internal consistency. Our newly developed scales may be used in research and practice to further our understanding regarding how satisfaction and support of employee basic needs influence employee motivation, performance, and well-being. Our study makes a contribution to the current literature by providing (1) scales that are specifically designed for the work context, (2) an example of how expert panels can be used to assess content validity, and (3) testing of theoretically derived hypotheses that, although SDT is built on them, have not been examined before.

  16. Climate change, livelihoods and the multiple determinants of water adequacy: two approaches at regional to global scale

    Science.gov (United States)

    Lissner, Tabea; Reusser, Dominik

    2015-04-01

    Inadequate access to water is already a problem in many regions of the world, and processes of global change are expected to further exacerbate the situation. Many aspects determine the adequacy of water resources: besides actual physical water stress, where the resource itself is limited, economic and social water stress can be experienced if access to the resource is limited by inadequate infrastructure or by political or financial constraints. To assess the adequacy of water availability for human use, integrated approaches are needed that allow the multiple determinants to be viewed in conjunction and provide sound results as a basis for informed decisions. This contribution proposes two parts of an integrated approach to look at the multiple dimensions of water scarcity at regional to global scale. These were developed in a joint project with the German development agency GIZ. It first outlines the AHEAD approach to measure Adequate Human livelihood conditions for wEll-being And Development, implemented at global scale and at national resolution. This first approach allows viewing impacts of climate change, e.g. changes in water availability, within the wider context of AHEAD conditions. A specific focus lies on the uncertainties in projections of climate change and future water availability. As adequate water access is not determined by water availability alone, in a second step we develop an approach to assess the water requirements of different sectors in more detail, including aspects of quantity, quality and access, in an integrated way. This more detailed approach is exemplified at regional scale in Indonesia and South Africa. Our results show that water scarcity limits AHEAD conditions in many countries, regardless of differences in modelling output. The more detailed assessments highlight the relevance of additional aspects in assessing the adequacy of water for human use, showing that in many regions, quality and

  17. THE PROBLEMS OF FIXED ASSETS CLASSIFICATION FOR ACCOUNTING

    Directory of Open Access Journals (Sweden)

    Sophiia Kafka

    2016-06-01

    Full Text Available This article provides a critical analysis of research on fixed assets accounting; the basic issues of fixed assets accounting developed by Ukrainian scientists during 1999-2016 have been identified. It is established that the problems of non-current assets taxation and classification are the most noteworthy. In the dissertations, the issues of fixed assets classification are treated in an exclusively branch-specific manner, so its improvement is important. The purpose of the article is to develop a science-based classification of fixed assets for accounting purposes, since their composition is quite diverse. The classification of fixed assets for accounting purposes has been summarized and developed in Figure 1 according to the results of the research. The analysis of existing approaches to the classification of fixed assets has made it possible to specify its basic types and justify the classification criteria for the main objects of fixed assets. Key words: non-current assets, fixed assets, accounting, valuation, classification of fixed assets. JEL: G, M41

  18. Droplet digital PCR (ddPCR) vs quantitative real-time PCR (qPCR) approach for detection and quantification of Merkel cell polyomavirus (MCPyV) DNA in formalin fixed paraffin embedded (FFPE) cutaneous biopsies.

    Science.gov (United States)

    Arvia, Rosaria; Sollai, Mauro; Pierucci, Federica; Urso, Carmelo; Massi, Daniela; Zakrzewska, Krystyna

    2017-08-01

    Merkel cell polyomavirus (MCPyV) is associated with Merkel cell carcinoma, and a high viral load in the skin has been proposed as a risk factor for the occurrence of this tumour. MCPyV DNA has been detected, at lower frequency, in other skin cancers, but since the viral load was usually low, the real prevalence of viral DNA could be underestimated. The aim was to evaluate the performance of two assays (qPCR and ddPCR) for MCPyV detection and quantification in formalin-fixed paraffin-embedded (FFPE) tissue samples. Both assays were designed for simultaneous detection and quantification of MCPyV as well as house-keeping DNA in clinical samples. The performance of MCPyV quantification was investigated using serial dilutions of cloned target DNA. We also evaluated the applicability of both tests for the analysis of 76 FFPE cutaneous biopsies. The two approaches were equivalent with regard to reproducibility and repeatability, and both showed a high degree of linearity in the dynamic range tested in the present study. Moreover, qPCR was able to quantify ≥10⁵ copies per reaction, while the upper limit of ddPCR was 10⁴ copies. There was no significant difference between the viral loads measured by the two methods. The detection limit of both tests was 0.15 copies per reaction; however, the number of positive samples obtained by ddPCR was higher than that obtained by qPCR (45% and 37%, respectively). ddPCR represents a better method for the detection of MCPyV in FFPE biopsies, especially those containing low copy numbers of the viral genome. Copyright © 2017 Elsevier B.V. All rights reserved.
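Partition-based assays such as ddPCR quantify by Poisson statistics: the mean number of template copies per droplet is recovered from the fraction of positive droplets. A minimal sketch of that conversion (the droplet counts below are hypothetical, not data from this study):

```python
import math

def copies_per_partition(n_positive, n_total):
    """Poisson estimate: lambda = -ln(1 - positive fraction)."""
    return -math.log(1.0 - n_positive / n_total)

def copies_in_reaction(n_positive, n_total):
    """Total template copies loaded across all partitions."""
    return copies_per_partition(n_positive, n_total) * n_total

# About 63.2% positive droplets corresponds to ~1 copy per droplet,
# because 1 - exp(-1) ~ 0.632.
print(round(copies_per_partition(6321, 10000), 3))
```

This correction matters most near saturation; at very low positive fractions, lambda is approximately the positive fraction itself, which is the regime relevant to low-copy FFPE samples.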

  19. Design of a non-scaling FFAG accelerator for proton therapy

    International Nuclear Information System (INIS)

    Trbojevic, D.; Ruggiero, A.G.; Keil, E.; Neskovic, N.; Belgrade, Vinca; Sessler, A.

    2005-01-01

    In recent years there has been a revival of interest in Fixed Field Alternating Gradient (FFAG) accelerators. In Japan a number have been built or are under construction. A new non-scaling approach to the FFAG reduces the required orbit offsets during acceleration and the size of the required aperture, while maintaining the advantage of the low-cost magnets associated with fixed fields. An advantage of the non-scaling FFAG accelerator with respect to synchrotrons is the fixed field, and hence the possibility of high current and a high repetition rate for spot scanning. There are also possible advantages of the non-scaling design with respect to fixed-field cyclotrons. The non-scaling FFAG allows strong focusing and hence smaller aperture requirements compared to scaling designs, leading to very low losses and better control over the beam. We present here a non-scaling FFAG designed to be used for proton therapy.

  20. Economies of scale in the Korean district heating system: A variable cost function approach

    International Nuclear Information System (INIS)

    Park, Sun-Young; Lee, Kyoung-Sil; Yoo, Seung-Hoon

    2016-01-01

    This paper aims to investigate the cost efficiency of South Korea’s district heating (DH) system by using a variable cost function and cost-share equation. We employ a seemingly unrelated regression model, with quarterly time-series data from the Korea District Heating Corporation (KDHC)—a public utility that covers about 59% of the DH system market in South Korea—over the 1987–2011 period. The explanatory variables are price of labor, price of material, capital cost, and production level. The results indicate that economies of scale are present and statistically significant. Thus, expansion of its DH business would allow KDHC to obtain substantial economies of scale. According to our forecasts vis-à-vis scale economies, the KDHC will enjoy cost efficiency for some time yet. To ensure a socially efficient supply of DH, it is recommended that the KDHC expand its business proactively. With regard to informing policy or regulations, our empirical results could play a significant role in decision-making processes. - Highlights: • We examine economies of scale in the South Korean district heating sector. • We focus on Korea District Heating Corporation (KDHC), a public utility. • We estimate a translog cost function, using a variable cost function. • We found economies of scale to be present and statistically significant. • KDHC will enjoy cost efficiency and expanding its supply is socially efficient.
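The scale-economies finding can be illustrated with a toy calculation. The sketch below assumes a simplified one-output translog variable cost function with hypothetical coefficients (`bq`, `bqq` are illustrative, NOT the paper's estimates); economies of scale hold wherever the cost elasticity with respect to output is below one:

```python
import numpy as np

# Hypothetical coefficients of a simplified translog variable cost
# function (input-price terms omitted for brevity):
#   ln VC = a0 + bq*ln(Q) + 0.5*bqq*ln(Q)**2 + ...
bq, bqq = 0.62, 0.015

def cost_elasticity(q):
    """d ln(VC) / d ln(Q) for the simplified form above."""
    return bq + bqq * np.log(q)

def scale_economies(q):
    """Values above 1 indicate economies of scale
    (output elasticity of cost below 1)."""
    return 1.0 / cost_elasticity(q)

for q in (10.0, 100.0, 1000.0):
    print(q, round(scale_economies(q), 3))
```

With these made-up coefficients the index stays above one but declines as output grows, mirroring the paper's conclusion that KDHC still has scale economies to exploit for some time.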

  1. A comparative study of modern and fossil cone scales and seeds of conifers: A geochemical approach

    Science.gov (United States)

    Artur, Stankiewicz B.; Mastalerz, Maria; Kruge, M.A.; Van Bergen, P. F.; Sadowska, A.

    1997-01-01

    Modern cone scales and seeds of Pinus strobus and Sequoia sempervirens, and their fossil (Upper Miocene, c. 6 Ma) counterparts Pinus leitzii and Sequoia langsdorfi have been studied using pyrolysis-gas chromatography/mass spectrometry (Py-GC/MS), electron-microprobe and scanning electron microscopy. Microscopic observations revealed only minor microbial activity and high-quality structural preservation of the fossil material. The pyrolysates of both modern genera showed the presence of ligno-cellulose characteristic of conifers. However, the abundance of (alkylated)phenols and 1,2-benzenediols in modern S. sempervirens suggests the presence of non-hydrolysable tannins or abundant polyphenolic moieties not previously reported in modern conifers. The marked differences between the pyrolysis products of both modern genera are suggested to be of chemosystematic significance. The fossil samples also contained ligno-cellulose which exhibited only partial degradation, primarily of the carbohydrate constituents. Comparison between the fossil cone scale and seed pyrolysates indicated that the ligno-cellulose complex present in the seeds is chemically more resistant than that in the cone scales. Principal component analysis (PCA) of the pyrolysis data allowed for the determination of the discriminant functions used to assess the extent of degradation and the chemosystematic differences between both genera and between cone scales and seeds. Elemental composition (C, O, S), obtained using electron-microprobe, corroborated the pyrolysis results. Overall, the combination of chemical, microscopic and statistical methods allowed for a detailed characterization and chemosystematic interpretations of modern and fossil conifer cone scales and seeds.

  2. Scaling of cratering experiments: an analytical and heuristic approach to the phenomenology

    International Nuclear Information System (INIS)

    Killian, B.G.; Germain, L.S.

    1977-01-01

    The phenomenology of cratering can be thought of as consisting of two phases. The first phase, where the effects of gravity are negligible, consists of the energy source dynamically imparting its energy to the surroundings, rock and air. As illustrated in this paper, the first phase can be scaled if: radiation effects are negligible, experiments are conducted in the same rock material, time and distance use the same scaling factor, and distances scale as the cube root of the energy. The second phase of cratering consists of the rock, with its already developed velocity field, being thrown out. It is governed by the ballistics equation, and gravity is of primary importance. This second phase of cratering is examined heuristically by examples of the ballistics equation which illustrate the basic phenomena in crater formation. When gravity becomes significant, in addition to the conditions for scaling imposed in the first phase, distances must scale inversely as the ratio of gravities. A qualitative relationship for crater radius is derived and compared with calculations and experimental data over a wide range of energy sources and gravities
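The two scaling rules stated above can be collected into a single helper. This is a heuristic sketch (the function name and reference values are hypothetical, and real exponents depend on the cratering regime), not the authors' derived relationship:

```python
def scaled_crater_radius(r_ref, e_ref, g_ref, e, g):
    """Scale a reference crater radius r_ref (measured at energy e_ref
    and gravity g_ref) to a new energy e and gravity g, following the
    rules above: distances scale as the cube root of the energy, and
    inversely as the ratio of gravities once gravity matters."""
    return r_ref * (e / e_ref) ** (1.0 / 3.0) * (g_ref / g)

# 8x the energy at the same gravity: the radius doubles (cube root of 8).
print(scaled_crater_radius(10.0, 1.0, 9.8, 8.0, 9.8))
# Same energy at twice the gravity: the radius halves.
print(scaled_crater_radius(10.0, 1.0, 9.8, 1.0, 19.6))
```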

  3. Multi-scale approach to radiation damage induced by ion beams: complex DNA damage and effects of thermal spikes

    International Nuclear Information System (INIS)

    Surdutovich, E.; Yakubovich, A.V.; Solov'yov, A.V.; Surdutovich, E.; Yakubovich, A.V.; Solov'yov, A.V.

    2010-01-01

    We present the latest advances of the multi-scale approach to radiation damage caused by irradiation of tissue with energetic ions, and report calculations of complex DNA damage and of the effects of thermal spikes on biomolecules. The multi-scale approach aims to quantify the most important physical, chemical, and biological phenomena taking place during and following irradiation with ions, and to provide a better means for clinically necessary calculations with adequate accuracy. We suggest a way of quantifying complex clustered damage, one of the most important features of the radiation damage caused by ions. This quantification allows the study of how the clusterization of DNA lesions affects the lethality of damage. We discuss the first results of molecular dynamics simulations of ubiquitin in the environment of thermal spikes, which are predicted to occur in tissue in the vicinity of ion tracks for a short time after an ion's passage. (authors)

  4. Validation of a plant-wide phosphorus modelling approach with minerals precipitation in a full-scale WWTP

    DEFF Research Database (Denmark)

    Mbamba, Christian Kazadi; Flores Alsina, Xavier; Batstone, Damien John

    2016-01-01

    The focus of modelling in wastewater treatment is shifting from single unit to plant-wide scale. Plant-wide modelling approaches provide opportunities to study the dynamics and interactions of different transformations in water and sludge streams. Towards developing more general and robust ... approach describing ion speciation and ion pairing with kinetic multiple-minerals precipitation. Model performance is evaluated against data sets from a full-scale wastewater treatment plant, assessing capability to describe water and sludge lines across the treatment process under steady-state operation ... plant. Dynamic influent profiles were generated using a calibrated influent generator and were used to study the effect of long-term influent dynamics on plant performance. Model-based analysis shows that minerals precipitation strongly influences composition in the anaerobic digesters, but also impacts ...
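As a rough illustration of the kinetic minerals-precipitation term such plant-wide models include, the sketch below uses a common saturation-index rate law; the rate constant, order, and activities are entirely hypothetical, not the calibrated values of the study:

```python
import math

def saturation_index(activity_cation, activity_anion, ksp):
    """SI = log10(ion activity product / solubility product);
    positive SI means the solution is supersaturated."""
    iap = activity_cation * activity_anion
    return math.log10(iap / ksp)

def precipitation_rate(activity_cation, activity_anion, ksp,
                       k=1e-3, n=2.0):
    """Kinetic rate law r = k * (Omega - 1)**n, active only under
    supersaturation (Omega is the saturation ratio, 10**SI)."""
    si = saturation_index(activity_cation, activity_anion, ksp)
    if si <= 0:
        return 0.0
    return k * (10.0 ** si - 1.0) ** n

print(precipitation_rate(1e-3, 1e-3, 1e-8))  # supersaturated: rate > 0
print(precipitation_rate(1e-5, 1e-5, 1e-8))  # undersaturated: rate = 0
```

In a full plant-wide model the activities would come from an ion speciation/ion pairing sub-model rather than being supplied directly.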

  5. Applying the global RCP-SSP-SPA scenario framework at sub-national scale: A multi-scale and participatory scenario approach.

    Science.gov (United States)

    Kebede, Abiy S; Nicholls, Robert J; Allan, Andrew; Arto, Iñaki; Cazcarro, Ignacio; Fernandes, Jose A; Hill, Chris T; Hutton, Craig W; Kay, Susan; Lázár, Attila N; Macadam, Ian; Palmer, Matthew; Suckall, Natalie; Tompkins, Emma L; Vincent, Katharine; Whitehead, Paul W

    2018-09-01

    To better anticipate potential impacts of climate change, diverse information about the future is required, including climate, society and economy, and adaptation and mitigation. To address this need, a global RCP (Representative Concentration Pathways), SSP (Shared Socio-economic Pathways), and SPA (Shared climate Policy Assumptions) (RCP-SSP-SPA) scenario framework has been developed by the Intergovernmental Panel on Climate Change Fifth Assessment Report (IPCC-AR5). Applying this full global framework at sub-national scales introduces two key challenges: added complexity in capturing the multiple dimensions of change, and issues of scale. Perhaps for this reason, there are few such applications of this new framework. Here, we present an integrated multi-scale hybrid scenario approach that combines both expert-based and participatory methods. The framework has been developed and applied within the DECCMA project with the purpose of exploring migration and adaptation in three deltas across West Africa and South Asia: (i) the Volta delta (Ghana), (ii) the Mahanadi delta (India), and (iii) the Ganges-Brahmaputra-Meghna (GBM) delta (Bangladesh/India). Using a climate scenario that encompasses a wide range of impacts (RCP8.5) combined with three SSP-based socio-economic scenarios (SSP2, SSP3, SSP5), we generate highly divergent and challenging scenario contexts across multiple scales against which the robustness of the human and natural systems within the deltas is tested. In addition, we consider four distinct adaptation policy trajectories: Minimum intervention, Economic capacity expansion, System efficiency enhancement, and System restructuring, which describe alternative future bundles of adaptation actions/measures under different socio-economic trajectories. The paper highlights the importance of multi-scale (combined top-down and bottom-up) and participatory (joint expert-stakeholder) scenario methods for addressing uncertainty in adaptation decision

  6. Presenting an Approach for Conducting Knowledge Architecture within Large-Scale Organizations.

    Science.gov (United States)

    Varaee, Touraj; Habibi, Jafar; Mohaghar, Ali

    2015-01-01

    Knowledge architecture (KA) establishes the basic groundwork for the successful implementation of a short-term or long-term knowledge management (KM) program. An example of KA is the design of a prototype before a new vehicle is manufactured. Due to a transformation to large-scale organizations, the traditional architecture of organizations is undergoing fundamental changes. This paper explores the main strengths and weaknesses in the field of KA within large-scale organizations and provides a suitable methodology and supervising framework to overcome specific limitations. This objective was achieved by applying and updating the concepts from the Zachman information architectural framework and the information architectural methodology of enterprise architecture planning (EAP). The proposed solution may be beneficial for architects in knowledge-related areas to successfully accomplish KM within large-scale organizations. The research method is descriptive; its validity is confirmed by performing a case study and polling the opinions of KA experts.

  8. Evaluation model of project complexity for large-scale construction projects in Iran - A Fuzzy ANP approach

    Directory of Open Access Journals (Sweden)

    Aliyeh Kazemi

    2016-09-01

    Construction projects have always been complex, and as this complexity grows, implementing large-scale construction becomes harder. Hence, evaluating and understanding these complexities is critical: a correct evaluation of a project's complexity provides executives and managers with a sound basis for decisions. The fuzzy analytic network process (ANP) is a logical and systematic approach to defining, evaluating, and grading; it allows complex systems to be analyzed and their complexity to be determined. In this study, taking advantage of fuzzy ANP, the indexes that effectively drive complexity in large-scale construction projects in Iran have been determined and prioritized. The results show that the socio-political, project system interdependencies, and technological complexity indexes ranked as the top three. Furthermore, in a comparison of three major large-scale project types: commercial-administrative, hospital, and skyscraper, the hospital project was evaluated as the most complex. This model is beneficial for professionals managing large-scale projects.

  9. A micromechanical approach of suffusion based on a length scale analysis of the grain detachment and grain transport processes.

    Science.gov (United States)

    Wautier, Antoine; Bonelli, Stéphane; Nicot, François

    2017-06-01

    Suffusion is the selective erosion of the finest particles of a soil subjected to an internal flow. Among the four types of internal erosion and piping identified today, suffusion is the least understood; indeed, there is a lack of micromechanical approaches for identifying the critical microstructural parameters responsible for this process. Based on discrete element modeling of non-cohesive granular assemblies, specific micromechanical tools are developed in a unified framework to account for the first two steps of suffusion, namely the grain detachment and grain transport processes. Thanks to the use of an enhanced force-chain definition and autocorrelation functions, the typical length scales associated with grain detachment are characterized. From the definition of transport paths based on a graph description of the pore space, the typical length scales associated with grain transport are recovered. For a uniform grain size distribution, a separation of scales between these two processes exists for the finest particles of a soil.
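The graph description of the pore space can be sketched as follows: a fine particle is transportable if some inlet-to-outlet path exists whose constrictions are all wider than the particle. The toy network below is hypothetical; in the study such geometry would come from the DEM analysis of the granular assembly's pore space:

```python
from collections import deque

# Toy pore network: nodes are pores, edge values are constriction
# sizes (hypothetical, in arbitrary units).
edges = {
    ("inlet", "p1"): 0.30, ("p1", "p2"): 0.25,
    ("p2", "outlet"): 0.28, ("p1", "p3"): 0.10,
    ("p3", "outlet"): 0.35,
}

def can_transport(d, edges):
    """True if a particle of diameter d can pass from inlet to outlet,
    i.e. some path exists whose constrictions are all wider than d."""
    graph = {}
    for (a, b), c in edges.items():
        if c > d:  # keep only constrictions the particle fits through
            graph.setdefault(a, []).append(b)
            graph.setdefault(b, []).append(a)
    seen, queue = {"inlet"}, deque(["inlet"])
    while queue:  # breadth-first search
        node = queue.popleft()
        if node == "outlet":
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(can_transport(0.20, edges))  # True: inlet-p1-p2-outlet stays open
print(can_transport(0.27, edges))  # False: every route is blocked
```

Sweeping the diameter d over the grain size distribution yields the largest transportable fine fraction, one way to connect pore-scale geometry to the suffusion susceptibility discussed above.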

  10. An advanced online monitoring approach to study the scaling behavior in direct contact membrane distillation

    KAUST Repository

    Lee, Jung Gil

    2017-10-12

    One of the major challenges in membrane distillation (MD) desalination is scaling, mainly by CaSO4 and CaCO3. In this study, in order to achieve a better understanding and establish a strategy for controlling scaling, a detailed investigation of MD scaling was performed using various analytical methods, in particular an in-situ monitoring technique using optical coherence tomography (OCT) to observe a cross-sectional view of the membrane surface during operation. Different concentrations of CaSO4 and CaCO3, as well as NaCl, were tested separately and in different mixed feed solutions. Results showed that when CaSO4 alone was employed in the feed solution, the mean permeate flux (MPF) dropped significantly at a lower volume concentration factor (VCF) compared to other feed solutions, and this critical point was observed to be influenced by the solubility changes of CaSO4 resulting from the various inlet feed temperatures. Although the inlet feed and permeate flow rates could contribute to the initial MPF value, the VCF at which the sharp MPF decline occurred was not affected. Real-time OCT observation clearly showed that scaling on the membrane surface, due to crystal growth in the bulk and the deposition of aggregated crystals on the membrane surface, appeared abruptly close to the critical VCF. On the other hand, a NaCl + CaSO4 mixed feed solution resulted in a linear MPF decline as VCF increased and delayed the critical point to higher VCF values. In addition, CaCO3 alone in the feed solution did not cause scaling; however, when CaSO4 was added to CaCO3, the initial MPF decline appeared and the critical VCF was reached earlier. In summary, calcium scaling crystals formed under different conditions influenced the filtration dynamics and MD performance.
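The volume concentration factor tracked above is commonly defined as the initial feed volume divided by the feed volume remaining after permeate removal; a one-line helper with hypothetical numbers:

```python
def vcf(v_initial, v_permeate_collected):
    """Volume concentration factor: initial feed volume over the feed
    volume remaining after the collected permeate is removed."""
    return v_initial / (v_initial - v_permeate_collected)

print(vcf(10.0, 5.0))  # removing half the feed as permeate gives VCF = 2.0
```

As VCF rises, dissolved salts concentrate in the shrinking feed, which is why sparingly soluble CaSO4 hits its critical (supersaturation) point at a characteristic VCF.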

  11. Optimal unit sizing for small-scale integrated energy systems using multi-objective interval optimization and evidential reasoning approach

    International Nuclear Information System (INIS)

    Wei, F.; Wu, Q.H.; Jing, Z.X.; Chen, J.J.; Zhou, X.X.

    2016-01-01

    This paper proposes a comprehensive framework, comprising a multi-objective interval optimization model and an evidential reasoning (ER) approach, to solve the unit sizing problem of small-scale integrated energy systems with uncertain wind and solar energies integrated. In the multi-objective interval optimization model, interval variables are introduced to tackle the uncertainties of the optimization problem. To consider the cost and the risk of a business investment simultaneously, the average and the deviation of the life cycle cost (LCC) of the integrated energy system are formulated as objectives. To solve the problem, a novel multi-objective optimization algorithm, MGSOACC (multi-objective group search optimizer with adaptive covariance matrix and chaotic search), is developed, employing an adaptive covariance matrix to make the search strategy adaptive and applying chaotic search to maintain the diversity of the group. Furthermore, the ER approach is applied to deal with the multiple interests of an investor at the business decision-making stage and to determine the final unit sizing solution from the Pareto-optimal solutions. This paper reports simulation results obtained using a small-scale direct district heating system (DH) and a small-scale district heating and cooling system (DHC) optimized by the proposed framework. The results demonstrate the superiority of the multi-objective interval optimization model and ER approach in tackling the unit sizing problem of integrated energy systems considering the integration of uncertain wind and solar energies. - Highlights: • Cost and risk of investment in small-scale integrated energy systems are considered. • A multi-objective interval optimization model is presented. • A novel multi-objective optimization algorithm (MGSOACC) is proposed. • The evidential reasoning (ER) approach is used to obtain the final optimal solution. • The MGSOACC and ER can tackle the unit sizing problem efficiently.
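A minimal sketch of the interval treatment: an uncertain quantity (here a hypothetical annual energy yield) is carried as an interval, and the two objectives become the average and the deviation (radius) of the resulting cost interval. The cost model and all numbers are invented for illustration and are much simpler than the paper's LCC formulation:

```python
def lcc_interval(capex, opex_per_mwh, yield_mwh_interval):
    """Levelised-cost interval [lo, hi] for an interval-valued annual
    energy yield; a higher yield gives a lower unit cost, so the
    bounds swap when dividing."""
    lo_yield, hi_yield = yield_mwh_interval
    hi_cost = capex / lo_yield + opex_per_mwh
    lo_cost = capex / hi_yield + opex_per_mwh
    return lo_cost, hi_cost

lo, hi = lcc_interval(1_000_000.0, 12.0, (8_000.0, 10_000.0))
average = (lo + hi) / 2.0    # first objective: expected cost
deviation = (hi - lo) / 2.0  # second objective: proxy for investment risk
print(average, deviation)
```

Minimizing the average and the deviation pulls in different directions (cheap-but-risky vs. safe-but-costly designs), which is what generates the Pareto front the ER step then chooses from.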

  12. Creation of Nuclear Data Base up to 150 MeV and corresponding scaling approach for ADS

    International Nuclear Information System (INIS)

    Shubin, Y. N.; Gai, E. V.; Ignatyuk, A. V.; Lunev, V. P.

    1997-01-01

    The status of nuclear data in the energy region up to 150 MeV is outlined. Specific physical reasons for detailed investigations of nuclear structure effects are pointed out. The necessity of developing a Nuclear Data System for ADS is stressed. A program for the creation of a nuclear data base up to 150 MeV and a corresponding scaling approach for ADS is proposed. (Author) 14 refs

  13. Correct approach to consideration of experimental resolution in parametric analysis of scaling violation in deep inelastic lepton-nucleon interaction

    International Nuclear Information System (INIS)

    Ammosov, V.V.; Usubov, Z.U.; Zhigunov, V.P.

    1990-01-01

    The problem of parametric analysis of scaling violation in deep inelastic lepton-nucleon interactions in the framework of quantum chromodynamics (QCD) is considered. For a correct treatment of the experimental resolution we use the χ²-method, which is demonstrated by numerical experiments and by analysis of the 15-foot bubble chamber neutrino experimental data. The model parameters obtained in this approach differ noticeably from those obtained earlier. (orig.)
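The essence of a χ²-method with experimental resolution is to fold the theoretical prediction with a known smearing (resolution) matrix before comparing it with data. A toy numeric sketch with entirely synthetic numbers, a one-parameter linear model, and a coarse grid scan in place of a real minimiser:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 5)  # synthetic kinematic bins

def smearing_matrix(n, stay=0.8, leak=0.1):
    """Tridiagonal resolution matrix: each bin keeps most of its
    content and leaks a fraction into its neighbours."""
    R = np.eye(n) * stay
    for i in range(n - 1):
        R[i, i + 1] = leak
        R[i + 1, i] = leak
    return R

R = smearing_matrix(len(x))
a_true = 0.5
y = R @ (a_true * x)            # noise-free "measured" spectrum
sigma = np.full_like(y, 0.01)   # assumed measurement errors

def chi2(a):
    """Chi-square of the smeared model prediction against the data."""
    r = (y - R @ (a * x)) / sigma
    return float(r @ r)

grid = np.linspace(0.0, 1.0, 1001)
a_hat = grid[np.argmin([chi2(a) for a in grid])]
print(a_hat)  # recovers a value close to a_true
```

Comparing the *smeared* prediction to data, rather than unfolding the data, is what keeps the parameter estimates statistically well behaved; a real analysis would use a proper minimiser (e.g. `scipy.optimize.minimize`) and correlated errors.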

  14. A comparative analysis of ecosystem services valuation approaches for application at the local scale and in data scarce regions

    OpenAIRE

    Pandeya, B.; Buytaert, W.; Zulkafli, Z.; Karpouzoglou, T.; Mao, F.; Hannah, D.M.

    2016-01-01

    Despite significant advances in the development of the ecosystem services concept across the science and policy arenas, the valuation of ecosystem services to guide sustainable development remains challenging, especially at a local scale and in data scarce regions. In this paper, we review and compare major past and current valuation approaches and discuss their key strengths and weaknesses for guiding policy decisions. To deal with the complexity of methods used in different valuation approa...

  15. Fixed points of quantum gravity

    OpenAIRE

    Litim, D F

    2003-01-01

    Euclidean quantum gravity is studied with renormalisation group methods. Analytical results for a non-trivial ultraviolet fixed point are found for arbitrary dimensions and gauge fixing parameter in the Einstein-Hilbert truncation. Implications for quantum gravity in four dimensions are discussed.

  16. A Remote Sensing Approach for Regional-Scale Mapping of Agricultural Land-Use Systems Based on NDVI Time Series

    Directory of Open Access Journals (Sweden)

    Beatriz Bellón

    2017-06-01

    In response to the need for generic remote sensing tools to support large-scale agricultural monitoring, we present a new approach for regional-scale mapping of agricultural land-use systems (ALUS) based on object-based Normalized Difference Vegetation Index (NDVI) time-series analysis. The approach consists of two main steps. First, to obtain relatively homogeneous land units in terms of phenological patterns, a principal component analysis (PCA) is applied to an annual MODIS NDVI time series, and an automatic segmentation is performed on the resulting high-order principal component images. Second, the resulting land units are classified into the crop agriculture domain or the livestock domain based on their land-cover characteristics. The crop agriculture land units are further classified into different cropping systems based on the correspondence of their NDVI temporal profiles with the phenological patterns associated with the cropping systems of the study area. A map of the main ALUS of the Brazilian state of Tocantins was produced for the 2013–2014 growing season with the new approach, and significant coherence was observed between the spatial distribution of the cropping systems in the final ALUS map and in a reference map extracted from the official agricultural statistics of the Brazilian Institute of Geography and Statistics (IBGE). This study shows the potential of remote sensing techniques to provide valuable baseline spatial information for supporting agricultural monitoring and large-scale land-use systems analysis.
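The profile-matching step can be sketched as follows: assign a land unit to the cropping system whose reference NDVI phenological profile best correlates with the unit's own annual profile. All profiles below are synthetic stand-ins (not MODIS data), and the labels are hypothetical:

```python
import numpy as np

t = np.arange(12)  # months of one growing season
# Synthetic reference phenological profiles (illustrative shapes only):
references = {
    "single_crop": np.exp(-0.5 * ((t - 3) / 1.5) ** 2),
    "double_crop": (np.exp(-0.5 * ((t - 2) / 1.0) ** 2)
                    + np.exp(-0.5 * ((t - 8) / 1.0) ** 2)),
    "pasture": 0.45 + 0.05 * np.cos(2 * np.pi * t / 12),
}

def classify(profile, references):
    """Label of the reference profile with the highest Pearson
    correlation to the observed NDVI profile."""
    best_label, best_r = None, -2.0
    for label, ref in references.items():
        r = np.corrcoef(profile, ref)[0, 1]
        if r > best_r:
            best_label, best_r = label, r
    return best_label

unit = np.exp(-0.5 * ((t - 3.5) / 1.4) ** 2)  # one early-season peak
print(classify(unit, references))
```

A real implementation would operate on the segmented land units' mean NDVI profiles and might use a distance measure more robust to timing shifts (e.g. dynamic time warping) instead of plain correlation.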

  17. Ultrasonic imaging with a fixed instrument configuration

    Energy Technology Data Exchange (ETDEWEB)

    Witten, A.; Tuggle, J.; Waag, R.C.

    1988-07-04

    Diffraction tomography is a technique, based on an inversion of the wave equation, that has been proposed for high-resolution ultrasonic imaging. While this approach has been considered for diagnostic medical applications, it has until recently been held back by practical limits on the speed of data acquisition associated with instrument motions. This letter presents the results of an experimental study directed towards demonstrating tomography with a fixed instrument configuration.

  18. A new approach to ductile tearing assessment of pipelines under large-scale yielding

    Energy Technology Data Exchange (ETDEWEB)

    Ostby, Erling [SINTEF Materials and Chemistry, N-7465, Trondheim (Norway)]. E-mail: Erling.Obstby@sintef.no; Thaulow, Christian [Norwegian University of Science and Technology, N-7491, Trondheim (Norway); Nyhus, Bard [SINTEF Materials and Chemistry, N-7465, Trondheim (Norway)

    2007-06-15

    In this paper we focus on ductile tearing assessment for cases with global plasticity, relevant, for example, to strain-based design of pipelines. A proposed set of simplified strain-based driving-force equations is used as a basis for calculating ductile tearing. We compare the traditional approach, which uses the tangency criterion to predict unstable tearing, with a new alternative approach to ductile tearing calculations. A criterion to determine the CTOD at maximum load-carrying capacity in the crack ligament is proposed and used as the failure criterion in the new approach. Compared to numerical reference simulations, the tangency criterion gives conservative predictions of the strain capacity, while the new approach yields results in better agreement with the reference simulations.

  19. Scaling Watershed Models: Modern Approaches to Science Computation with MapReduce, Parallelization, and Cloud Optimization

    Science.gov (United States)

    Environmental models are products of the computer architecture and software tools available at the time of development. Scientifically sound algorithms may persist in their original state even as system architectures and software development approaches evolve and progress. Dating...

  20. Biogem: an effective tool based approach for scaling up open source software development in bioinformatics

    NARCIS (Netherlands)

    Bonnal, R.J.P.; Smant, G.; Prins, J.C.P.

    2012-01-01

    Biogem provides a software development environment for the Ruby programming language, which encourages community-based software development for bioinformatics while lowering the barrier to entry and encouraging best practices. Biogem, with its targeted modular and decentralized approach, software

  1. Large-scale identification of polymorphic microsatellites using an in silico approach

    NARCIS (Netherlands)

    Tang, J.; Baldwin, S.J.; Jacobs, J.M.E.; Linden, van der C.G.; Voorrips, R.E.; Leunissen, J.A.M.; Eck, van H.J.; Vosman, B.

    2008-01-01

    Background - Simple Sequence Repeat (SSR) or microsatellite markers are valuable for genetic research. Experimental methods to develop SSR markers are laborious, time consuming and expensive. In silico approaches have become a practicable and relatively inexpensive alternative during the last

  2. A moni-modelling approach to manage groundwater risk to pesticide leaching at regional scale.

    Science.gov (United States)

    Di Guardo, Andrea; Finizio, Antonio

    2016-03-01

    Historically, the approach used to manage the risk of chemical contamination of water bodies has been based on monitoring programmes, which provide a snapshot of the presence/absence of chemicals in water bodies. Monitoring is required by current EU regulations, such as the Water Framework Directive (WFD), as a tool to record temporal variation in the chemical status of water bodies. More recently, a number of models have been developed and used to forecast chemical contamination of water bodies. These models combine information on chemical properties, their use, and environmental scenarios. Both approaches are useful to risk assessors in decision processes; however, in our opinion, both show flaws and strengths when taken alone. This paper proposes an integrated (moni-modelling) approach in which monitoring data and modelling simulations work together to provide a common decision framework for the risk assessor. This approach would be very useful, particularly for the risk management of pesticides at a territorial level, and it fulfils the requirements of the recent Sustainable Use of Pesticides Directive. In fact, the moni-modelling approach could be used to identify sensitive areas where mitigation measures or limitations on pesticide use should be implemented, and even to effectively re-design future monitoring networks or to better calibrate the pedo-climatic input data for environmental fate models. A case study is presented in which the moni-modelling approach is applied in the Lombardy region (North of Italy) to identify groundwater areas vulnerable to pesticides. The approach has been applied to six active substances with different leaching behaviour, in order to highlight the advantages of the proposed methodology. Copyright © 2015 Elsevier B.V. All rights reserved.
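The complementarity of the two information sources can be sketched as a simple cross-tabulation of model-predicted vulnerability against monitoring detections; the decision labels below are illustrative, not the paper's exact classes:

```python
# Sketch of a moni-modelling decision rule: each combination of model
# prediction and monitoring evidence suggests a different action.

def decision(predicted_vulnerable, detected_in_monitoring):
    if predicted_vulnerable and detected_in_monitoring:
        return "confirmed: apply mitigation measures / use limitations"
    if predicted_vulnerable:
        return "model-only: densify the monitoring network here"
    if detected_in_monitoring:
        return "monitoring-only: recalibrate pedo-climatic model inputs"
    return "low concern"

for cell, pred, det in [("A", True, True), ("B", True, False),
                        ("C", False, True), ("D", False, False)]:
    print(cell, "->", decision(pred, det))
```

The two disagreement cells are where the integrated approach adds the most value: a model-only flag targets new monitoring, while a monitoring-only detection exposes miscalibrated model inputs.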

  3. Optimal control for power-off landing of a small-scale helicopter: a pseudospectral approach

    NARCIS (Netherlands)

    Taamallah, S.; Bombois, X.; Hof, Van den P.M.J.

    2012-01-01

    We derive optimal power-off landing trajectories, for the case of a small-scale helicopter UAV. These open-loop optimal trajectories represent the solution to the minimization of a cost objective, given system dynamics, controls and states equality and inequality constraints. The plant dynamics

  4. Relaxing the weak scale: A new approach to the hierarchy problem

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Recently, a new mechanism to generate a naturally small electroweak scale has been proposed. This is based on the idea that a dynamical evolution during the early universe can drive the Higgs mass to a value much smaller than the UV cutoff of the SM. In this talk I will present this idea, its explicit realizations, potential problems, and experimental consequences.

  5. Iterative approach to modeling subsurface stormflow based on nonlinear, hillslope-scale physics

    NARCIS (Netherlands)

    Spaaks, J.H.; Bouten, W.; McDonnell, J.J.

    2009-01-01

    Soil water transport in small, humid, upland catchments is often dominated by subsurface stormflow. Recent studies of this process suggest that at the plot scale, generation of transient saturation may be governed by threshold behavior, and that transient saturation is a prerequisite for lateral

  6. A multiple-time-scale approach to the control of ITBs on JET

    Energy Technology Data Exchange (ETDEWEB)

    Laborde, L.; Mazon, D.; Moreau, D. [EURATOM-CEA Association (DSM-DRFC), CEA Cadarache, 13 - Saint Paul lez Durance (France); Moreau, D. [Culham Science Centre, EFDA-JET, Abingdon, OX (United Kingdom); Ariola, M. [EURATOM/ENEA/CREATE Association, Univ. Napoli Federico II, Napoli (Italy); Cordoliani, V. [Ecole Polytechnique, 91 - Palaiseau (France); Tala, T. [EURATOM-Tekes Association, VTT Processes (Finland)

    2005-07-01

    The simultaneous real-time control of the current and temperature gradient profiles could lead to the steady-state sustainment of an internal transport barrier (ITB) and so to a stationary optimized plasma regime. Recent experiments at JET have demonstrated significant progress in achieving such control: different current and temperature gradient target profiles have been reached and sustained for several seconds using a controller based on a static linear model. It is worth noting that the inverse safety factor profile evolves on a slow time scale (the resistive time) while the normalized electron temperature gradient reacts on a faster one (the confinement time). Moreover, these experiments have shown that the controller was sensitive to rapid plasma events, such as transient ITBs during the safety factor profile evolution or MHD instabilities, which modify the pressure profiles on the confinement time scale. In order to take into account the different dynamics of the controlled profiles and to react better to rapid plasma events, the control technique is being improved by using a multiple-time-scale approximation. The paper describes the theoretical analysis and closed-loop simulations using a control algorithm based on a two-time-scale state-space model. These closed-loop simulations, in which the full dynamic (but linear) model used for the controller design simulates the plasma response, have demonstrated that the new controller allows the normalized electron temperature gradient target profile to be reached faster than with the controller used in previous experiments. (A.C.)
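
    The two-time-scale idea in this record -- a slow resistive-time state and a fast confinement-time state -- can be illustrated with a toy simulation. The model below is not the JET state-space controller; it is a minimal sketch of two decoupled first-order lags, with time constants (1 s and 0.1 s) invented for illustration, showing why the fast channel settles long before the slow one and therefore why a single-time-scale controller struggles.

```python
def simulate(tau_slow=1.0, tau_fast=0.1, target=1.0, dt=0.001, t_end=5.0):
    """Euler-integrate two decoupled first-order lags toward `target` and
    record when each state first enters a 2% band around it."""
    q = g = 0.0                       # slow (q-like) and fast (gradient-like) states
    t = 0.0
    t_slow_settled = t_fast_settled = None
    for _ in range(int(t_end / dt)):
        q += dt * (target - q) / tau_slow   # slow first-order relaxation
        g += dt * (target - g) / tau_fast   # fast first-order relaxation
        t += dt
        if t_fast_settled is None and abs(g - target) <= 0.02 * target:
            t_fast_settled = t
        if t_slow_settled is None and abs(q - target) <= 0.02 * target:
            t_slow_settled = t
    return t_fast_settled, t_slow_settled

t_fast, t_slow = simulate()
```

    For a first-order lag, 2% settling takes about four time constants, so the fast state settles roughly ten times sooner than the slow one -- the separation a two-time-scale controller exploits.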

  7. A Multidimensional Scaling Approach to Developmental Dimensions in Object Permanence and Tracking Stimuli.

    Science.gov (United States)

    Townes-Rosenwein, Linda

    This paper discusses a longitudinal, exploratory study of developmental dimensions related to object permanence theory and explains how multidimensional scaling techniques can be used to identify developmental dimensions. Eighty infants, randomly assigned to one of four experimental groups and one of four counterbalanced orders of stimuli, were…

  8. A multiple-time-scale approach to the control of ITBs on JET

    International Nuclear Information System (INIS)

    Laborde, L.; Mazon, D.; Moreau, D.; Moreau, D.; Ariola, M.; Cordoliani, V.; Tala, T.

    2005-01-01

    The simultaneous real-time control of the current and temperature gradient profiles could lead to the steady-state sustainment of an internal transport barrier (ITB) and so to a stationary optimized plasma regime. Recent experiments at JET have demonstrated significant progress in achieving such control: different current and temperature gradient target profiles have been reached and sustained for several seconds using a controller based on a static linear model. It is worth noting that the inverse safety factor profile evolves on a slow time scale (the resistive time) while the normalized electron temperature gradient reacts on a faster one (the confinement time). Moreover, these experiments have shown that the controller was sensitive to rapid plasma events, such as transient ITBs during the safety factor profile evolution or MHD instabilities, which modify the pressure profiles on the confinement time scale. In order to take into account the different dynamics of the controlled profiles and to react better to rapid plasma events, the control technique is being improved by using a multiple-time-scale approximation. The paper describes the theoretical analysis and closed-loop simulations using a control algorithm based on a two-time-scale state-space model. These closed-loop simulations, in which the full dynamic (but linear) model used for the controller design simulates the plasma response, have demonstrated that the new controller allows the normalized electron temperature gradient target profile to be reached faster than with the controller used in previous experiments. (A.C.)

  9. A Heuristic Approach to Author Name Disambiguation in Bibliometrics Databases for Large-scale Research Assessments

    NARCIS (Netherlands)

    D'Angelo, C.A.; Giuffrida, C.; Abramo, G.

    2011-01-01

    National exercises for the evaluation of research activity by universities are becoming regular practice in ever more countries. These exercises have mainly been conducted through the application of peer-review methods. Bibliometrics has not been able to offer a valid large-scale alternative because

  10. Received signal strength in large-scale wireless relay sensor network: a stochastic ray approach

    NARCIS (Netherlands)

    Hu, L.; Chen, Y.; Scanlon, W.G.

    2011-01-01

    The authors consider a point percolation lattice representation of a large-scale wireless relay sensor network (WRSN) deployed in a cluttered environment. Each relay sensor corresponds to a grid point in the random lattice and the signal sent by the source is modelled as an ensemble of photons that

  11. A high-level and scalable approach for generating scale-free graphs using active objects

    NARCIS (Netherlands)

    K. Azadbakht (Keyvan); N. Bezirgiannis (Nikolaos); F.S. de Boer (Frank); Aliakbary, S. (Sadegh)

    2016-01-01

    The Barabasi-Albert model (BA) is designed to generate scale-free networks using the preferential attachment mechanism. In the preferential attachment (PA) model, new nodes are sequentially introduced to the network and they attach preferentially to existing nodes. PA is a classical
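
    The preferential-attachment mechanism described in this record can be sketched in a few lines. This is a plain sequential implementation for illustration only -- the paper's contribution, scalable parallel generation with active objects, is not reproduced here. The repeated-endpoints list (`stubs`) makes uniform sampling exactly degree-proportional.

```python
import random

def barabasi_albert(n, m, seed=42):
    """Generate a scale-free graph: each new node attaches m edges,
    preferring high-degree targets (classical BA preferential attachment).
    Returns an edge list."""
    rng = random.Random(seed)
    # Start from a small complete core of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)]
    # 'stubs' lists every edge endpoint once per incidence, so choosing
    # uniformly from it is exactly degree-proportional sampling.
    stubs = [v for e in edges for v in e]
    for new in range(m + 1, n):
        targets = set()                 # a set avoids duplicate edges
        while len(targets) < m:
            targets.add(rng.choice(stubs))
        for t in targets:
            edges.append((new, t))
            stubs.extend((new, t))
    return edges

edges = barabasi_albert(200, 2)
```

    Each of the 197 nodes added after the 3-node core contributes exactly 2 edges, so the final graph has 3 + 197 * 2 = 397 edges.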

  12. Assessing heterogeneity in soil nitrogen cycling: a plot-scale approach

    Science.gov (United States)

    Peter Baas; Jacqueline E. Mohan; David Markewitz; Jennifer D. Knoepp

    2014-01-01

    The high level of spatial and temporal heterogeneity in soil N cycling processes hinders our ability to develop an ecosystem-wide understanding of this cycle. This study examined how incorporating an intensive assessment of spatial variability for soil moisture, C, nutrients, and soil texture can better explain ecosystem N cycling at the plot scale. Five sites...

  13. Scale-up of a mixer-settler extractor using a unit operations approach

    International Nuclear Information System (INIS)

    Lindholm, D.C.; Bautista, R.G.

    1976-01-01

    The results of scale-up studies on a continuous, multistage horizontal mixer-settler extractor are presented. The chemical and mechanical system involves the separation of lanthanum from a mixture of rare earth chlorides using di(2-ethylhexyl) phosphoric acid as the solvent and dilute HCl as a scrub solution in a bench scale extractor. Each stage has a hold-up of 2.6 l. A single stage unit is utilized for scale-up studies. Results are obtained on four sizes of geometrically similar units, the largest being six times the volume of the original bench size. A unit operations technique is chosen so that mixing and settling can be examined independently. Variables examined include type of continuous phase, flow rate of inlet streams, and power input to the mixer. Inlet flow-rate ratios are kept constant for all tests. Two potential methods of unbaffled pump-mixer scale-up are explored; the maintenance of constant impeller tip speed and constant power input. For the settler, the previously successful method of basing design on constant flow-rate per unit cross-sectional area is used

  14. Review of broad-scale drought monitoring of forests: Toward an integrated data mining approach

    Science.gov (United States)

    Steve Norman; Frank H. Koch; William W. Hargrove

    2016-01-01

    Efforts to monitor the broad-scale impacts of drought on forests often come up short. Drought is a direct stressor of forests as well as a driver of secondary disturbance agents, making a full accounting of drought impacts challenging. General impacts  can be inferred from moisture deficits quantified using precipitation and temperature measurements. However,...

  15. Predicting ecosystem functioning from plant traits: Results from a multi-scale ecophysiological modeling approach

    NARCIS (Netherlands)

    Wijk, van M.T.

    2007-01-01

    Ecosystem functioning is the result of processes working at a hierarchy of scales. The representation of these processes in a model that is mathematically tractable and ecologically meaningful is a big challenge. In this paper I describe an individual-based model (PLACO - PLAnt COmpetition) that

  16. The Classroom Process Scale (CPS): An Approach to the Measurement of Teaching Effectiveness.

    Science.gov (United States)

    Anderson, Lorin W.; Scott, Corinne C.

    The purpose of this presentation is to describe the Classroom Process Scale (CPS) and its usefulness for the assessment of teaching effectiveness. The CPS attempts to ameliorate weaknesses in existing classroom process measures by including a coding of student involvement in learning, objectives being pursued, and methods used to pursue attainment…

  17. Crowd counting via scale-adaptive convolutional neural network

    OpenAIRE

    Zhang, Lu; Shi, Miaojing; Chen, Qiaobo

    2017-01-01

    The task of crowd counting is to automatically estimate the pedestrian number in crowd images. To cope with the scale and perspective changes that commonly exist in crowd images, state-of-the-art approaches employ multi-column CNN architectures to regress density maps of crowd images. Multiple columns have different receptive fields corresponding to pedestrians (heads) of different scales. We instead propose a scale-adaptive CNN (SaCNN) architecture with a backbone of fixed small receptive fields…

  18. A comparative study of two approaches to analyse groundwater recharge, travel times and nitrate storage distribution at a regional scale

    Science.gov (United States)

    Turkeltaub, T.; Ascott, M.; Gooddy, D.; Jia, X.; Shao, M.; Binley, A. M.

    2017-12-01

    Understanding deep percolation, travel time processes and nitrate storage in the unsaturated zone at a regional scale is crucial for sustainable management of many groundwater systems. Recently, global hydrological models have been developed to quantify the water balance at such scales and beyond. However, the coarse spatial resolution of the global hydrological models can be a limiting factor when analysing regional processes. This study compares simulations of water flow and nitrate storage based on regional- and global-scale approaches. The first approach was applied over the Loess Plateau of China (LPC) to investigate the water fluxes and nitrate storage and travel time to the LPC groundwater system. Using raster maps of climate variables, land use data and soil parameters enabled us to determine fluxes by employing Richards' equation and the advection-dispersion equation. These calculations were conducted for each cell on the raster map in a multiple 1-D column approach. In the second approach, vadose zone travel times and nitrate storage were estimated by coupling groundwater recharge (PCR-GLOBWB) and nitrate leaching (IMAGE) models with estimates of water table depth and unsaturated zone porosity. The simulation results of the two methods indicate similar spatial groundwater recharge, nitrate storage and travel time distributions. Intensive recharge rates are located mainly in the south central and south west parts of the aquifer's outcrops. Particularly low recharge rates were simulated in the top central area of the outcrops. However, there are significant discrepancies between the simulated absolute recharge values, which might be related to the coarse scale used in the PCR-GLOBWB model, leading to smoothing of the recharge estimations. Both models indicated large nitrate inventories in the south central and south west parts of the aquifer's outcrops, and the shortest travel times in the vadose zone are in the south central and east parts of the

  19. Importance of ecohydrological modelling approaches in the prediction of plant behaviour and water balance at different scales

    Science.gov (United States)

    García-Arias, Alicia; Ruiz-Pérez, Guiomar; Francés, Félix

    2017-04-01

    Vegetation plays a main role in the water balance of most hydrological systems. In the past, however, the effects of interception and evapotranspiration have rarely been considered for hydrological modelling purposes. During the last years many authors have recognised and supported ecohydrological approaches instead of traditional strategies. This contribution aims to demonstrate the pivotal role of vegetation in ecohydrological models and that a better understanding of hydrological systems can be achieved by considering the appropriate processes related to plants. The study is performed at two scales: the plot scale and the reach scale. At the plot scale, only zonal vegetation was considered, while at the reach scale both zonal and riparian vegetation were taken into account. In order to ensure the dominant role of water in vegetation development, semiarid environments have been selected for the case studies. Results show an increased capability to predict plant behaviour and water balance when interception and evapotranspiration are taken into account in the soil water balance

  20. Deep learning-based subdivision approach for large scale macromolecules structure recovery from electron cryo tomograms.

    Science.gov (United States)

    Xu, Min; Chai, Xiaoqi; Muthakana, Hariank; Liang, Xiaodan; Yang, Ge; Zeev-Ben-Mordehai, Tzviya; Xing, Eric P

    2017-07-15

    Cellular Electron CryoTomography (CECT) enables 3D visualization of cellular organization at near-native state and in sub-molecular resolution, making it a powerful tool for analyzing structures of macromolecular complexes and their spatial organizations inside single cells. However, the high degree of structural complexity together with practical imaging limitations makes the systematic de novo discovery of structures within cells challenging. It would likely require averaging and classifying millions of subtomograms potentially containing hundreds of highly heterogeneous structural classes. Although it is no longer difficult to acquire CECT data containing such numbers of subtomograms, thanks to advances in data-acquisition automation, existing computational approaches have very limited scalability or discrimination ability, making them incapable of processing such amounts of data. To complement existing approaches, in this article we propose a new approach for subdividing subtomograms into smaller but relatively homogeneous subsets. The structures in these subsets can then be separately recovered using existing computation-intensive methods. Our approach is based on supervised structural feature extraction using deep learning, in combination with unsupervised clustering and reference-free classification. Our experiments show that, compared with existing unsupervised rotation-invariant feature and pose-normalization based approaches, our new approach achieves significant improvements in both discrimination ability and scalability. More importantly, our new approach is able to discover new structural classes and recover structures that do not exist in the training data. Source code is freely available at http://www.cs.cmu.edu/~mxu1/software (contact: mxu1@cs.cmu.edu). Supplementary data are available at Bioinformatics online.
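
    The subdivision idea -- partitioning subtomograms into smaller, more homogeneous subsets by clustering learned feature vectors -- can be caricatured with a tiny k-means on synthetic two-dimensional "features". This illustrates only the clustering step; the paper's supervised deep feature extraction and reference-free classification are not reproduced, and the data below are invented.

```python
import random

def kmeans(points, k=2, iters=10):
    """Minimal k-means on 2-D points; deterministic initialization spreads
    the starting centres across the input list (fine for this toy data)."""
    step = max(1, len(points) // k)
    centres = list(points[::step])[:k]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centre by squared Euclidean distance.
        for i, (px, py) in enumerate(points):
            labels[i] = min(range(k),
                            key=lambda c: (px - centres[c][0]) ** 2 +
                                          (py - centres[c][1]) ** 2)
        # Update step: move each centre to the mean of its members.
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centres[c] = (sum(x for x, _ in members) / len(members),
                              sum(y for _, y in members) / len(members))
    return labels

# Synthetic "deep features": two well-separated structural classes.
rng = random.Random(0)
features = ([(rng.gauss(0.0, 0.3), rng.gauss(0.0, 0.3)) for _ in range(40)]
            + [(rng.gauss(5.0, 0.3), rng.gauss(5.0, 0.3)) for _ in range(40)])
labels = kmeans(features, k=2)
```

    With well-separated classes, each homogeneous subset can then be handed to a computation-intensive recovery method independently.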

  1. A study of safeguards approach for the area of plutonium evaporator in a large scale reprocessing plant

    International Nuclear Information System (INIS)

    Sakai, Hirotada; Ikawa, Koji

    1994-01-01

    A preliminary study on a safeguards approach for the chemical processing area in a large scale reprocessing plant has been carried out. In this approach, plutonium inventory at the plutonium evaporator will not be taken, but containment and surveillance (C/S) measures will be applied to ensure the integrity of an area specifically defined to include the plutonium evaporator. The plutonium evaporator area consists of the evaporator itself and two accounting points, i.e., one before the plutonium evaporator and the other after the plutonium evaporator. For newly defined accounting points, two alternative measurement methods, i.e., accounting vessels with high accuracy and flow meters, were examined. Conditions to provide the integrity of the plutonium evaporator area were also examined as well as other technical aspects associated with this approach. The results showed that an appropriate combination of NRTA and C/S measures would be essential to realize a cost effective safeguards approach to be applied for a large scale reprocessing plant. (author)

  2. No quick fixes!

    DEFF Research Database (Denmark)

    Frimann, Søren; Hansen, Lone Hersted

    This article presents an approach to leadership development emerging from practice as a dialogical, collaborative, reflexive activity within the frame of action learning. In the beginning of the paper we clarify our understanding of central ideas concerning leadership, reflexivity and action learning. Furthermore, we present a design for leadership development in high schools and vocational schools, in which leaders facilitate learning, action and change based on available empirical data through reflexive dialogues in collaborative teams of teachers. The overall process and the outcomes have been…

  3. Measurement and Comparison of Variance in the Performance of Algerian Universities using models of Returns to Scale Approach

    Directory of Open Access Journals (Sweden)

    Imane Bebba

    2017-08-01

    This study aimed to measure and compare the performance of forty-seven Algerian universities using returns-to-scale models, based primarily on the Data Envelopment Analysis (DEA) method. In order to achieve the objective of the study, a set of variables was chosen to represent the dimension of teaching. There were three input variables: the total number of students at the undergraduate level, the number of students at the postgraduate level, and the number of permanent professors. The output variable was the total number of students holding degrees at either level. Four basic DEA models were applied: input-oriented and output-oriented constant returns to scale, and input-oriented and output-oriented variable returns to scale. After the analysis of the data, results revealed that eight universities achieved full efficiency under constant returns to scale in both the input and output orientations, seventeen universities achieved full efficiency under the input-oriented variable-returns-to-scale model, and sixteen under the output-oriented variable-returns-to-scale model. Therefore, during performance measurement, the size of the university, competition, financial and infrastructure constraints, and the process of resource allocation within the university should be taken into consideration. In addition, multiple input and output variables reflecting the dimensions of teaching, research, and community service should be included when measuring and assessing the performance of Algerian universities, rather than two variables that do not reflect the actual performance of these universities. Keywords: Performance of Algerian universities, Data Envelopment Analysis method, Constant returns to scale, Variable returns to scale, Input orientation, Output orientation.
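
    A feel for the constant-returns-to-scale (CCR) efficiency score: with a single input and a single output, each unit's score reduces to its output/input ratio divided by the best ratio in the sample. The study uses several inputs and outputs, which requires a linear-programming DEA solver; the figures and university names below are invented for illustration.

```python
# Toy CCR efficiency under constant returns to scale, one input / one output.
universities = {           # name: (permanent professors, graduates) -- invented
    "U1": (500, 4000),
    "U2": (300, 3000),
    "U3": (800, 4800),
}

# Output per unit of input for each decision-making unit.
ratios = {u: out / inp for u, (inp, out) in universities.items()}
best = max(ratios.values())

# CCR score: ratio relative to the best performer; 1.0 means fully efficient.
efficiency = {u: r / best for u, r in ratios.items()}
```

    Here U2 (10 graduates per professor) is the efficient frontier, so U1 scores 0.8 and U3 scores 0.6; variable-returns-to-scale models additionally allow the frontier to bend with unit size.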

  4. [Dimensional approach of emotion in psychiatry: validation of the Positive and Negative Emotionality scale (EPN-31)].

    Science.gov (United States)

    Pélissolo, A; Rolland, J-P; Perez-Diaz, F; Jouvent, R; Allilaire, J-F

    2007-01-01

    This paper reports the first validation study of the EPN-31 scale (Positive and Negative Emotionality scale, 31 items) in a French psychiatric sample. This questionnaire was adapted by Rolland from an emotion inventory developed by Diener, and is also in accordance with Watson and Clark's tripartite model of affects. Respondents were asked to rate the frequency with which they had experienced each affect (31 basic emotional states) during the last month. The answer format was a 7-point scale, ranging from 1 "Not experienced at all" to 7 "Experienced this affect several times each day". Three main scores were calculated (positive affects, negative affects, and surprise affects), as well as six sub-scores (joy, tenderness, anger, fear, sadness, shame). Four hundred psychiatric patients were included in this study and completed the EPN-31 scale and the Hospital Anxiety and Depression (HAD) scale. The Global Assessment of Functioning (GAF) scale was rated, as well as DSM-IV diagnostic criteria. We performed a principal component analysis, with Varimax orthogonal transformation, and explored the factorial structure of the questionnaire, the internal consistency of each dimension, and the correlations between EPN-31 scores and HAD scores. The factorial structure of the EPN-31 was well defined, as expected, with a three-factor (positive, negative and surprise affects) solution accounting for 58.2% of the variance of the questionnaire. No correlation was obtained between positive and negative affects EPN-31 scores (r = 0.006). All Cronbach alpha coefficients were between 0.80 and 0.95 for main scores, and between 0.72 and 0.90 for sub-scores. GAF scores were significantly correlated with EPN-31 positive affects scores (r = 0.21; p = 0.001) and with EPN-31 negative affects scores (r = -0.45; p = 0.001). We obtained significant correlations between the positive affects score and the HAD depression score (r = -0.45; p…) …emotionality. Significantly higher EPN-31 positive affect mean scores

  5. Synergistic soil moisture observation - an interdisciplinary multi-sensor approach to yield improved estimates across scales

    Science.gov (United States)

    Schrön, M.; Fersch, B.; Jagdhuber, T.

    2017-12-01

    The representative determination of soil moisture across different spatial ranges and scales is still an important challenge in hydrology. While in situ measurements are trusted methods at the profile- or point-scale, cosmic-ray neutron sensors (CRNS) are renowned for providing volume averages for several hectares and tens of decimeters depth. On the other hand, airborne remote-sensing enables the coverage of regional scales, however limited to the top few centimeters of the soil. Common to all of these methods is a challenging data processing part, often requiring calibration with independent data. We investigated the performance and potential of three complementary observational methods for the determination of soil moisture below grassland in an alpine front-range river catchment (Rott, 55 km2) of southern Germany. We employ the TERENO preAlpine soil moisture monitoring network, along with additional soil samples taken throughout the catchment. Spatial soil moisture products have been generated using surveys of a car-mounted mobile CRNS (rover), and an aerial acquisition of the polarimetric synthetic aperture radar (F-SAR) of DLR. The study assesses (1) the viability of the different methods to estimate soil moisture for their respective scales and extents, and (2) how either method could support an improvement of the others. We found that in situ data can provide valuable information to calibrate the CRNS rover and to train the vegetation removal part of the polarimetric SAR (PolSAR) retrieval algorithm. Vegetation correction is mandatory to obtain the sub-canopy soil moisture patterns. While CRNS rover surveys can be used to evaluate the F-SAR product across scales, vegetation-related PolSAR products in turn can support the spatial correction of CRNS products for biomass water. Despite the different physical principles, the synthesis of the methods can provide reasonable soil moisture information by integrating from the plot to the landscape scale.

  6. Scaling approach in predicting the seatbelt loading and kinematics of vulnerable occupants: How far can we go?

    Science.gov (United States)

    Nie, Bingbing; Forman, Jason L; Joodaki, Hamed; Wu, Taotao; Kent, Richard W

    2016-09-01

    Occupants with extreme body size and shape, such as the small female or the obese, have been reported to sustain a high risk of injury in motor vehicle crashes (MVCs). Dimensional scaling approaches are widely used in injury biomechanics research, based on the assumption of geometrical similarity. However, their range of validity has never been quantified. The objective of this study is to demonstrate the valid range of scaling approaches in predicting the impact response of occupants, with a focus on vulnerable populations. The present analysis was based on a data set consisting of 60 previously reported frontal crash tests in the same sled buck representing a typical mid-size passenger car. The tests included two categories of human surrogates: 9 postmortem human surrogates (PMHS) of different anthropometries (stature range: 147-189 cm; weight range: 27-151 kg) and 5 anthropomorphic test devices (ATDs). The impact response considered included the restraint loads and the kinematics of multiple body segments. For each category of human surrogates, a mid-size occupant was selected as a baseline and the impact response was scaled to another subject based on either body mass (body shape) or stature (overall body size). To identify the valid range of the scaling approach, the scaled response was compared to the experimental results using assessment scores for the peak value, the peak timing (the time when the peak value occurred), and the overall curve shape, ranging from 0 (extremely poor) to 1 (perfect match). Scores of 0.7 to 0.8 and 0.8 to 1.0 indicate fair and acceptable prediction, respectively. For both ATDs and PMHS, the scaling factor derived from body mass proved an overall good predictor of the peak timing for the shoulder belt (0.868, 0.829) and the lap belt (0.858, 0.774) and for the peak value of the lap belt force (0.796, 0.869). Scaled kinematics based on body stature provided fair or acceptable prediction of the overall head
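
    A common textbook form of the dimensional scaling that this study evaluates: assuming equal density and material properties, the geometric scale factor is the cube root of the mass ratio, lengths and times scale by that factor, and forces by its square. The masses and reference peak values below are invented, and the paper's exact scaling relations may differ from this simplified sketch.

```python
def scale_factors(mass_subject, mass_reference):
    """Equal-density, equal-stress dimensional scaling: geometric factor
    lambda from the cube root of the mass ratio; lengths and times scale
    by lambda, forces by lambda squared."""
    lam = (mass_subject / mass_reference) ** (1.0 / 3.0)
    return {"length": lam, "time": lam, "force": lam ** 2}

# Scale a mid-size (76 kg) occupant's belt-force peak to a smaller occupant.
f = scale_factors(mass_subject=46.0, mass_reference=76.0)
peak_force_ref = 8000.0   # N, invented mid-size reference value
peak_time_ref = 0.050     # s, invented mid-size reference value
peak_force_scaled = peak_force_ref * f["force"]
peak_time_scaled = peak_time_ref * f["time"]
```

    The study's point is that such geometric-similarity predictions hold only within a limited anthropometric range, which the assessment scores quantify.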

  7. Breaking the theoretical scaling limit for predicting quasiparticle energies: the stochastic GW approach.

    Science.gov (United States)

    Neuhauser, Daniel; Gao, Yi; Arntsen, Christopher; Karshenas, Cyrus; Rabani, Eran; Baer, Roi

    2014-08-15

    We develop a formalism to calculate the quasiparticle energy within the GW many-body perturbation correction to density functional theory. The occupied and virtual orbitals of the Kohn-Sham Hamiltonian are replaced by stochastic orbitals used to evaluate the Green function G, the polarization potential W, and, thereby, the GW self-energy. The stochastic GW (sGW) formalism relies on novel theoretical concepts such as stochastic time-dependent Hartree propagation, stochastic matrix compression, and spatial or temporal stochastic decoupling techniques. Beyond the theoretical interest, the formalism enables linear-scaling GW calculations, breaking the theoretical scaling limit for GW as well as circumventing the need for energy-cutoff approximations. We illustrate the method for silicon nanocrystals of varying sizes with N_{e} > 3000 electrons.
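
    The generic trick behind replacing explicit orbital sums with stochastic averages is the same one used in Hutchinson-style stochastic trace estimation, sketched below. This illustrates only the principle, not the sGW algorithm itself; note that for a diagonal matrix the Rademacher (+/-1) estimator happens to be exact, which makes the example deterministic.

```python
import random

def stochastic_trace(matvec, dim, n_samples, seed=0):
    """Approximate tr(A) as the average of x^T A x over random +/-1
    (Rademacher) vectors x, using only matrix-vector products."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = [rng.choice((-1.0, 1.0)) for _ in range(dim)]
        ax = matvec(x)
        total += sum(xi * axi for xi, axi in zip(x, ax))
    return total / n_samples

# Example: a diagonal matrix, so matvec costs O(dim) and tr(A) = sum(diag).
diag = [float(i + 1) for i in range(50)]        # trace = 1 + 2 + ... + 50 = 1275
estimate = stochastic_trace(lambda x: [d * xi for d, xi in zip(diag, x)],
                            dim=50, n_samples=200)
```

    The cost is n_samples matrix-vector products instead of dim of them; off-diagonal entries would contribute statistical variance but no bias.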

  8. A systems approach to predict oncometabolites via context-specific genome-scale metabolic networks.

    Directory of Open Access Journals (Sweden)

    Hojung Nam

    2014-09-01

    Altered metabolism in cancer cells has been viewed as a passive response required for a malignant transformation. However, this view has changed with the recently described metabolic oncogenic factors: mutated isocitrate dehydrogenases (IDH), succinate dehydrogenase (SDH), and fumarate hydratase (FH), which produce oncometabolites that competitively inhibit epigenetic regulation. In this study, we demonstrate in silico predictions of oncometabolites that have the potential to dysregulate epigenetic controls in nine types of cancer by incorporating massive-scale genetic mutation information (collected from more than 1,700 cancer genomes), expression profiling data, and deploying Recon 2 to reconstruct context-specific genome-scale metabolic models. Our analysis predicted 15 compounds and 24 substructures of potential oncometabolites that could result from loss-of-function and gain-of-function mutations of metabolic enzymes, respectively. These results suggest a substantial potential for discovering unidentified oncometabolites in various forms of cancer.

  9. On the Construct Validity of the Academic Motivation Scale: a CFA and Rasch Analysis approach

    DEFF Research Database (Denmark)

    Andersen, Martin Stolpe; Nielsen, Tine

    subscales measuring Extrinsic Motivation (EM) and one scale measuring Amotivation (AM), each with 4 items. The AMS was translated into Danish and data was collected from psychology students (N = 607) at two Danish universities in 6 different study terms. The construct validity of the seven scales was first investigated using confirmatory factor analysis, with mixed results of some acceptable and some non-acceptable fit indices for the model. Secondly, Rasch analyses were conducted for each of the seven subscales, using the partial credit model (PCM) and graphical loglinear Rasch models (GLLRM). This resulted in fit to the PCM in the case of IM to Accomplish (retaining three out of four items), and fit to GLLRMs in two cases: 1) IM to Know, with evidence of local dependence between all four items; 2) AM (retaining three out of four items), with evidence of gender-based differential item functioning, which

  10. The average carbon-stock approach for small-scale CDM AR projects

    Energy Technology Data Exchange (ETDEWEB)

    Garcia Quijano, J.F.; Muys, B. [Katholieke Universiteit Leuven, Laboratory for Forest, Nature and Landscape Research, Leuven (Belgium); Schlamadinger, B. [Joanneum Research Forschungsgesellschaft mbH, Institute for Energy Research, Graz (Austria); Emmer, I. [Face Foundation, Arnhem (Netherlands); Somogyi, Z. [Forest Research Institute, Budapest (Hungary); Bird, D.N. [Woodrising Consulting Inc., Belfountain, Ontario (Canada)

    2004-06-15

    In many afforestation and reforestation (AR) projects, harvesting with stand regeneration forms an integral part of the silvicultural system and satisfies local timber and/or fuelwood demand. Clear-cut harvesting in particular leads to an abrupt and significant reduction of carbon stocks. The smaller the project, the more significant the fluctuations of the carbon stocks may be. In the extreme case, a small-scale project could consist of a single forest stand, in which case all accounted carbon may be removed during a harvesting operation and the time-path of carbon stocks will typically look like the hypothetical example presented in the report. For the aggregate of many such small-scale projects, however, there will be a constant benefit to the atmosphere during the projects, due to averaging effects.
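    The averaging effect described above is easy to see in a toy calculation. The sketch below uses hypothetical numbers (not from the report): each stand's carbon stock follows a sawtooth, growing for a full rotation and dropping to zero at clear-cut, while the aggregate of many stands with evenly staggered harvests stays constant.

    ```python
    # Illustrative sawtooth model of one stand's carbon stock (tC/ha):
    # linear growth over a rotation, reset to zero by clear-cut harvesting.
    def stand_stock(t, rotation=30, growth=2.0):
        """Carbon stock (tC/ha) of a single stand at year t."""
        return growth * (t % rotation)

    def aggregate_stock(t, n_stands=30, rotation=30, growth=2.0):
        """Mean stock of n stands whose harvest years are evenly staggered."""
        return sum(stand_stock(t + offset, rotation, growth)
                   for offset in range(n_stands)) / n_stands

    # A single stand fluctuates between 0 and 58 tC/ha over the century...
    single = [stand_stock(t) for t in range(100)]
    # ...while the aggregate of 30 staggered stands is constant at 29 tC/ha:
    aggregate = [aggregate_stock(t) for t in range(100)]
    ```

    The single-stand series swings from zero to its pre-harvest maximum, whereas the aggregate never moves, which is exactly the averaging benefit to the atmosphere claimed in the abstract.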

  11. A Multi-scale, Multi-disciplinary Approach for Assessing the Technological, Economic, and Environmental Performance of Bio-based Chemicals

    DEFF Research Database (Denmark)

    Herrgard, Markus; Sukumara, Sumesh; Campodonico Alt, Miguel Angel

    2015-01-01

    , the Multi-scale framework for Sustainable Industrial Chemicals (MuSIC) was introduced to address this issue by integrating modelling approaches at different scales ranging from cellular to ecological scales. This framework can be further extended by incorporating modelling of the petrochemical value chain...... towards a sustainable chemical industry....

  12. Partitioned based approach for very large scale database in Indian nuclear power plants

    International Nuclear Information System (INIS)

    Tiwari, Sachin; Upadhyay, Pushp; Sengupta, Nabarun; Bhandarkar, S.G.; Agilandaeswari

    2012-01-01

    This paper presents a partition-based approach for handling very large tables, with sizes running from gigabytes to terabytes. The scheme was developed from our experience in handling the large signal storage required by various computer-based data acquisition and control-room operator information systems, such as the Distribution Recording System (DRS) and the Computerised Operator Information System (COIS). Whenever there is a disturbance in an operating nuclear power plant, it triggers an action in which a large volume of data from multiple sources is generated, and this data needs to be stored. Concurrency (because data arrives from multiple sources) and the sheer volume of data are the problems addressed in this paper by applying the partition-based approach. Advantages of the partition-based approach over other techniques are discussed. (author)
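    A minimal sketch of the idea, assuming a simple date-range partitioning scheme (the class, the day-based key and the column layout are illustrative, not the actual DRS/COIS design): rows are routed to range partitions so that each insert touches one manageable partition, and a range query scans only the partitions it overlaps.

    ```python
    # Hypothetical range partitioning for a huge signal-storage table.
    class PartitionedTable:
        def __init__(self, partition_days=30):
            self.partition_days = partition_days
            self.partitions = {}                 # partition key -> list of rows

        def _key(self, day):
            return day // self.partition_days    # e.g. day 95 -> partition 3

        def insert(self, day, signal_id, value):
            # Each insert touches exactly one partition.
            self.partitions.setdefault(self._key(day), []).append(
                (day, signal_id, value))

        def query(self, day_from, day_to):
            """Scan only the partitions that overlap the requested range."""
            rows = []
            for key in range(self._key(day_from), self._key(day_to) + 1):
                for row in self.partitions.get(key, []):
                    if day_from <= row[0] <= day_to:
                        rows.append(row)
            return rows

    table = PartitionedTable()
    for day in range(0, 120):                    # 120 days -> 4 partitions
        table.insert(day, "CH-001", 0.5)
    touched = table.query(35, 40)                # hits a single 30-day partition
    ```

    Old partitions can also be archived or dropped wholesale, which is much cheaper than deleting individual rows from one monolithic table.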

  13. Coastal Foredune Evolution, Part 2: Modeling Approaches for Meso-Scale Morphologic Evolution

    Science.gov (United States)

    2017-03-01

    for Meso-Scale Morphologic Evolution, by Margaret L. Palmsten, Katherine L. Brodie, and Nicholas J. Spore (U.S. Army Engineer Research and Development Center, Coastal and Hydraulics Laboratory, Duck, NC; ERDC/CHL CHETN-II-57, March 2017). PURPOSE: This Coastal and Hydraulics ... managers because foredunes provide ecosystem services and can reduce storm damages to coastal infrastructure, both of which increase the resiliency ...

  14. The Location-Scale Mixture Exponential Power Distribution: A Bayesian and Maximum Likelihood Approach

    OpenAIRE

    Rahnamaei, Z.; Nematollahi, N.; Farnoosh, R.

    2012-01-01

    We introduce an alternative skew-slash distribution by using a scale mixture of the exponential power distribution. We derive the properties of this distribution and estimate its parameters by maximum likelihood and Bayesian methods. In a simulation study we compute these estimators and their mean square errors, and we provide an example on real data to demonstrate the modeling strength of the new distribution.

  15. The Location-Scale Mixture Exponential Power Distribution: A Bayesian and Maximum Likelihood Approach

    Directory of Open Access Journals (Sweden)

    Z. Rahnamaei

    2012-01-01

    Full Text Available We introduce an alternative skew-slash distribution by using a scale mixture of the exponential power distribution. We derive the properties of this distribution and estimate its parameters by maximum likelihood and Bayesian methods. In a simulation study we compute these estimators and their mean square errors, and we provide an example on real data to demonstrate the modeling strength of the new distribution.
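    A hedged sketch of the general construction (a generic location-scale slash-type scale mixture, not necessarily the paper's exact parameterization): draw a standard exponential power variate, divide by a uniform variate raised to 1/q to fatten the tails, then shift and scale.

    ```python
    import random

    def exp_power(p, rng):
        """Standard exponential power sample, density proportional to exp(-|x|**p)."""
        # If W ~ Gamma(1/p, 1), then sign * W**(1/p) has the EP density.
        w = rng.gammavariate(1.0 / p, 1.0)
        return rng.choice([-1.0, 1.0]) * w ** (1.0 / p)

    def slash_ep(mu, sigma, p, q, rng):
        """Location-scale slash-type variate: X = mu + sigma * Z / U**(1/q)."""
        z = exp_power(p, rng)
        u = rng.random()                     # uniform scale-mixing variable
        return mu + sigma * z / u ** (1.0 / q)

    rng = random.Random(42)
    # p = 2 gives a Gaussian-like core; q controls tail heaviness.
    sample = [slash_ep(0.0, 1.0, 2.0, 5.0, rng) for _ in range(20000)]
    ```

    The construction is symmetric about mu, so the sample median sits near the location parameter even though the tails are heavy.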

  16. Perturbation approach to scaled type Markov renewal processes with infinite mean

    OpenAIRE

    Pajor-Gyulai, Zsolt; Szász, Domokos

    2010-01-01

    Scaled type Markov renewal processes generalize classical renewal processes: renewal times come from a one parameter family of probability laws and the sequence of the parameters is the trajectory of an ergodic Markov chain. Our primary interest here is the asymptotic distribution of the Markovian parameter at time t \\to \\infty. The limit, of course, depends on the stationary distribution of the Markov chain. The results, however, are essentially different depending on whether the expectation...

  17. Electrodisintegration of few body systems at SLAC and the Y scaling approach

    International Nuclear Information System (INIS)

    Meziani, Z.E.

    1986-10-01

    It is proposed that extraction of the scaling function F(y) from the transverse and longitudinal response functions in inclusive quasi-elastic electron scattering from ³He and ⁴He is a powerful method either to study the validity regime of the impulse approximation, by giving access to the high-momentum components of the nucleons in these nuclei, or to probe the electromagnetic properties of bound nucleons. 19 refs., 4 figs

  18. Development of a new body image assessment scale in urban Cameroon: an anthropological approach.

    Science.gov (United States)

    Cohen, Emmanuel; Pasquet, Patrick

    2011-01-01

    Develop and validate body image scales (BIS) presenting real human bodies adapted to the macroscopic phenotype of urban Cameroonian populations. Quantitative and qualitative analysis. Yaoundé, capital city of Cameroon. Four samples with balanced sex-ratio: the first (n=16) aged 18 to 65 years (qualitative study), the second (n=30) aged 25 to 40 years (photo database), the third (n=47) and fourth (n=181), > or =18 years (validation study). Construct validity, test-retest reliability, concurrent and convergent validity of BIS. The body image scales present six Cameroonians of each sex arranged according to the main body mass index (BMI) categories, from underweight (<18.5 kg/m2) to morbidly obese (> or =40 kg/m2). Test-retest reliability correlations for current body size (CBS), desired body size and current-desirable discrepancy (body self-satisfaction index) on BIS were never below .90. For the concurrent validity, we observed a significant correlation (r=0.67) between CBS on the BIS and measured body size, indicating that the accuracy of body size perceptions is acceptable. Body image scales are adapted to the phenotypic characteristics of urban Cameroonian populations. They are reliable and valid to assess body size perceptions and culturally adapted to the Cameroonian context.

  19. Evaluation of Scaling Approaches for the Oceanic Dissipation Rate of Turbulent Kinetic Energy in the Surface Ocean

    Science.gov (United States)

    Esters, L. T.; Ward, B.; Sutherland, G.; Ten Doeschate, A.; Landwehr, S.; Bell, T. G.; Christensen, K. H.

    2016-02-01

    The air-sea exchange of heat, gas and momentum plays an important role in the Earth's weather and global climate. The exchange processes between ocean and atmosphere are influenced by the prevailing surface ocean dynamics. This surface ocean is a highly turbulent region where there is enhanced production of turbulent kinetic energy (TKE). The dissipation rate of TKE (ɛ) in the surface ocean is an important process governing the depth of both the mixing and mixed layers, which are important length scales for many aspects of ocean research. However, there exist very limited observations of ɛ under open ocean conditions, and consequently our understanding of how to model the dissipation profile is very limited. The existing approaches to modelling profiles of ɛ differ by orders of magnitude depending on their underlying theoretical assumptions and the physical processes they include. Scaling ɛ is therefore not straightforward and requires open ocean measurements of ɛ to validate the respective scaling laws. Such validated scaling of ɛ is, for example, required to produce accurate mixed layer depths in global climate models, since errors in the depth of the ocean surface boundary layer can lead to biases in sea surface temperature. Here, we present open ocean measurements of ɛ from the Air-Sea Interaction Profiler (ASIP) collected during several cruises in different ocean basins. ASIP is an autonomous upwardly rising microstructure profiler allowing undisturbed profiling up to the ocean surface. These direct measurements of ɛ under various types of atmospheric and oceanic conditions, along with measurements of atmospheric fluxes and wave conditions, allow us to make a unique assessment of several scaling approaches based on wind, wave and buoyancy forcing, and to identify the most appropriate ɛ-based parameterisation for air-sea exchange.
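    As an illustration of one such scaling law, the sketch below implements the classical law-of-the-wall form ɛ(z) = u*³/(κz), with a bulk drag law for the water-side friction velocity. All constants are illustrative textbook values, not results from the ASIP cruises.

    ```python
    # Law-of-the-wall dissipation scaling (one of several candidate laws).
    KAPPA = 0.4                                # von Karman constant

    def u_star_water(u10, rho_air=1.2, rho_water=1025.0, cd=1.2e-3):
        """Water-side friction velocity (m/s) from 10 m wind speed via bulk drag."""
        tau = rho_air * cd * u10 ** 2          # wind stress (N m^-2)
        return (tau / rho_water) ** 0.5

    def eps_wall(z, u10):
        """Law-of-the-wall TKE dissipation rate (W/kg) at depth z (m)."""
        return u_star_water(u10) ** 3 / (KAPPA * z)

    # Dissipation profile at 10 m/s wind for a few depths:
    profile = [eps_wall(z, u10=10.0) for z in (1.0, 2.0, 5.0, 10.0)]
    ```

    The 1/z depth dependence is the signature of this particular scaling; wave-breaking or buoyancy-driven parameterisations predict different profile shapes, which is exactly what the observed profiles discriminate between.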

  20. A multi-scale qualitative approach to assess the impact of urbanization on natural habitats and their connectivity

    Energy Technology Data Exchange (ETDEWEB)

    Scolozzi, Rocco, E-mail: rocco.scolozzi@fmach.it [Sustainable Agro-ecosystems and Bioresources Department, IASMA Research and Innovation Centre, Fondazione Edmund Mach, Via E. Mach 1, 38010 San Michele all'Adige (Italy); Geneletti, Davide, E-mail: geneletti@ing.unitn.it [Department of Civil and Environmental Engineering, University of Trento, Trento (Italy)

    2012-09-15

    Habitat loss and fragmentation are often concurrent to land conversion and urbanization. Simple application of GIS-based landscape pattern indicators may be not sufficient to support meaningful biodiversity impact assessment. A review of the literature reveals that habitat definition and habitat fragmentation are frequently inadequately considered in environmental assessment, notwithstanding the increasing number of tools and approaches reported in the landscape ecology literature. This paper presents an approach for assessing impacts on habitats on a local scale, where availability of species data is often limited, developed for an alpine valley in northern Italy. The perspective of the methodology is multiple scale and species-oriented, and provides both qualitative and quantitative definitions of impact significance. A qualitative decision model is used to assess ecological values in order to support land-use decisions at the local level. Building on recent studies in the same region, the methodology integrates various approaches, such as landscape graphs, object-oriented rule-based habitat assessment and expert knowledge. The results provide insights into future habitat loss and fragmentation caused by land-use changes, and aim at supporting decision-making in planning and suggesting possible ecological compensation. - Highlights: ► Many environmental assessments inadequately consider habitat loss and fragmentation. ► Species-perspective for defining habitat quality and connectivity is claimed. ► Species-based tools are difficult to be applied with limited availability of data. ► We propose a species-oriented and multiple scale-based qualitative approach. ► Advantages include being species-oriented and providing value-based information.

  1. A multi-scale qualitative approach to assess the impact of urbanization on natural habitats and their connectivity

    International Nuclear Information System (INIS)

    Scolozzi, Rocco; Geneletti, Davide

    2012-01-01

    Habitat loss and fragmentation are often concurrent to land conversion and urbanization. Simple application of GIS-based landscape pattern indicators may be not sufficient to support meaningful biodiversity impact assessment. A review of the literature reveals that habitat definition and habitat fragmentation are frequently inadequately considered in environmental assessment, notwithstanding the increasing number of tools and approaches reported in the landscape ecology literature. This paper presents an approach for assessing impacts on habitats on a local scale, where availability of species data is often limited, developed for an alpine valley in northern Italy. The perspective of the methodology is multiple scale and species-oriented, and provides both qualitative and quantitative definitions of impact significance. A qualitative decision model is used to assess ecological values in order to support land-use decisions at the local level. Building on recent studies in the same region, the methodology integrates various approaches, such as landscape graphs, object-oriented rule-based habitat assessment and expert knowledge. The results provide insights into future habitat loss and fragmentation caused by land-use changes, and aim at supporting decision-making in planning and suggesting possible ecological compensation. - Highlights: ► Many environmental assessments inadequately consider habitat loss and fragmentation. ► Species-perspective for defining habitat quality and connectivity is claimed. ► Species-based tools are difficult to be applied with limited availability of data. ► We propose a species-oriented and multiple scale-based qualitative approach. ► Advantages include being species-oriented and providing value-based information.

  2. The climate-smart village approach: framework of an integrative strategy for scaling up adaptation options in agriculture

    Directory of Open Access Journals (Sweden)

    Pramod K. Aggarwal

    2018-03-01

    Full Text Available Increasing weather risks threaten agricultural production systems and food security across the world. Maintaining agricultural growth while minimizing climate shocks is crucial to building a resilient food production system and meeting developmental goals in vulnerable countries. Experts have proposed several technological, institutional, and policy interventions to help farmers adapt to current and future weather variability and to mitigate greenhouse gas (GHG) emissions. This paper presents the climate-smart village (CSV) approach as a means of performing agricultural research for development that robustly tests technological and institutional options for dealing with climatic variability and climate change in agriculture using participatory methods. It aims to scale up and scale out the appropriate options and draw out lessons for policy makers from local to global levels. The approach incorporates evaluation of climate-smart technologies, practices, services, and processes relevant to local climatic risk management and identifies opportunities for maximizing adaptation gains from synergies across different interventions and recognizing potential maladaptation and trade-offs. It ensures that these are aligned with local knowledge and link into development plans. This paper describes early results in Asia, Africa, and Latin America to illustrate different examples of the CSV approach in diverse agroecological settings. Results from initial studies indicate that the CSV approach has a high potential for scaling out promising climate-smart agricultural technologies, practices, and services. Climate analog studies indicate that the lessons learned at the CSV sites would be relevant to adaptation planning in a large part of global agricultural land even under scenarios of climate change. Key barriers and opportunities for further work are also discussed.

  3. Multi-scale Modeling Approach for Design and Optimization of Oleochemical Processes

    DEFF Research Database (Denmark)

    Jones, Mark Nicholas; Forero-Hernandez, Hector Alexander; Sarup, Bent

    2017-01-01

    The primary goal of this work is to present a systematic methodology and software frameworkfor a multi-level approach ranging from process synthesis and modeling throughproperty prediction, to sensitivity analysis, property parameter tuning and optimization.This framework is applied to the follow...

  4. An approach to large scale identification of non-obvious structural similarities between proteins

    Science.gov (United States)

    Cherkasov, Artem; Jones, Steven JM

    2004-01-01

    Background A new sequence independent bioinformatics approach allowing genome-wide search for proteins with similar three dimensional structures has been developed. By utilizing the numerical output of the sequence threading it establishes putative non-obvious structural similarities between proteins. When applied to the testing set of proteins with known three dimensional structures the developed approach was able to recognize structurally similar proteins with high accuracy. Results The method has been developed to identify pathogenic proteins with low sequence identity and high structural similarity to host analogues. Such protein structure relationships would be hypothesized to arise through convergent evolution or through ancient horizontal gene transfer events, now undetectable using current sequence alignment techniques. The pathogen proteins, which could mimic or interfere with host activities, would represent candidate virulence factors. The developed approach utilizes the numerical outputs from the sequence-structure threading. It identifies the potential structural similarity between a pair of proteins by correlating the threading scores of the corresponding two primary sequences against the library of the standard folds. This approach allowed up to 64% sensitivity and 99.9% specificity in distinguishing protein pairs with high structural similarity. Conclusion Preliminary results obtained by comparison of the genomes of Homo sapiens and several strains of Chlamydia trachomatis have demonstrated the potential usefulness of the method in the identification of bacterial proteins with known or potential roles in virulence. PMID:15147578
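    The correlation step can be sketched as follows (the score vectors and six-fold library below are invented toy numbers, not real threading output): each protein is represented by its vector of threading scores against the library of standard folds, and a pair of proteins is compared by the Pearson correlation of those vectors.

    ```python
    # Pearson correlation of threading-score profiles (toy illustration).
    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    # Hypothetical threading-score vectors against a 6-fold library:
    scores_a = [12.1, 3.4, 8.8, 1.2, 15.6, 2.0]   # pathogen protein
    scores_b = [11.0, 4.1, 9.5, 0.8, 14.9, 2.7]   # host protein, similar profile
    scores_c = [2.2, 14.0, 1.1, 9.9, 3.3, 12.5]   # unrelated profile

    r_similar = pearson(scores_a, scores_b)       # high: likely similar fold
    r_unrelated = pearson(scores_a, scores_c)     # low: different fold preference
    ```

    The point of correlating whole score profiles, rather than thresholding a single best score, is that two sequences with negligible sequence identity can still "prefer" the same folds in the same ranking.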

  5. National-scale strategic approaches for managing introduced plants: insights from Australian acacias in South Africa

    CSIR Research Space (South Africa)

    van Wilgen, BW

    2011-09-01

    Full Text Available A range of approaches and philosophies underpin national-level strategies for managing invasive alien plants. This study presents a strategy for the management of taxa that both have value and do harm. Insights were derived from examining Australian...

  6. An agent-based approach to model land-use change at a regional scale

    NARCIS (Netherlands)

    Valbuena, D.F.; Verburg, P.H.; Bregt, A.K.; Ligtenberg, A.

    2010-01-01

    Land-use/cover change (LUCC) is a complex process that includes actors and factors at different social and spatial levels. A common approach to analyse and simulate LUCC as the result of individual decisions is agent-based modelling (ABM). However, ABM is often applied to simulate processes at local

  7. An approach to large scale identification of non-obvious structural similarities between proteins

    Directory of Open Access Journals (Sweden)

    Cherkasov Artem

    2004-05-01

    Full Text Available Abstract Background A new sequence independent bioinformatics approach allowing genome-wide search for proteins with similar three dimensional structures has been developed. By utilizing the numerical output of the sequence threading it establishes putative non-obvious structural similarities between proteins. When applied to the testing set of proteins with known three dimensional structures the developed approach was able to recognize structurally similar proteins with high accuracy. Results The method has been developed to identify pathogenic proteins with low sequence identity and high structural similarity to host analogues. Such protein structure relationships would be hypothesized to arise through convergent evolution or through ancient horizontal gene transfer events, now undetectable using current sequence alignment techniques. The pathogen proteins, which could mimic or interfere with host activities, would represent candidate virulence factors. The developed approach utilizes the numerical outputs from the sequence-structure threading. It identifies the potential structural similarity between a pair of proteins by correlating the threading scores of the corresponding two primary sequences against the library of the standard folds. This approach allowed up to 64% sensitivity and 99.9% specificity in distinguishing protein pairs with high structural similarity. Conclusion Preliminary results obtained by comparison of the genomes of Homo sapiens and several strains of Chlamydia trachomatis have demonstrated the potential usefulness of the method in the identification of bacterial proteins with known or potential roles in virulence.

  8. A practical approach to compute short-wave irradiance interacting with subgrid-scale buildings

    Energy Technology Data Exchange (ETDEWEB)

    Sievers, Uwe; Frueh, Barbara [Deutscher Wetterdienst, Offenbach am Main (Germany)

    2012-08-15

    A numerical approach for the calculation of short-wave irradiances at the ground as well as the walls and roofs of buildings in an environment with unresolved built-up is presented. In this radiative parameterization scheme the properties of the unresolved built-up are assigned to settlement types which are characterized by mean values of the volume density of the buildings and their wall area density. It is therefore named the wall area approach. In the vertical direction the range of building heights may be subdivided into several layers. In the case of non-uniform building heights the shadowing of the lower roofs by the taller buildings is taken into account. The method includes the approximate calculation of sky view and sun view factors. For an idealized building arrangement it is shown that the obtained approximate factors are in good agreement with exact calculations, as are the calculated and measured effective albedo values. For arrangements with isolated single buildings the presented wall area approach yields a better agreement with the observations than similar methods where the unresolved built-up is characterized by the aspect ratio of a representative street canyon (aspect ratio approach). In the limiting case where the built-up is well represented by an ensemble of idealized street canyons both approaches become equivalent. The presented short-wave radiation scheme is part of the microscale atmospheric model MUKLIMO 3, where it contributes to the calculation of surface temperatures on the basis of energy-flux equilibrium conditions. (orig.)

  9. Modelling of Sub-daily Hydrological Processes Using Daily Time-Step Models: A Distribution Function Approach to Temporal Scaling

    Science.gov (United States)

    Kandel, D. D.; Western, A. W.; Grayson, R. B.

    2004-12-01

    Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and
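    A minimal sketch of the idea, assuming an exponential within-day intensity distribution (the distribution choice and all numbers are illustrative, not the paper's calibrated cdf): with a flat daily-average intensity, a 40 mm day can produce zero infiltration-excess runoff, while integrating over the intensity distribution recovers the runoff generated by the high-intensity tail.

    ```python
    import math

    def runoff_daily_average(p_daily, wet_hours, f_cap):
        """Runoff (mm) if the day's rain falls at one uniform intensity."""
        intensity = p_daily / wet_hours            # mm/h
        return max(intensity - f_cap, 0.0) * wet_hours

    def runoff_distributed(p_daily, wet_hours, f_cap, n_bins=10000):
        """Runoff (mm) when intensity is exponentially distributed within the day."""
        mean_i = p_daily / wet_hours
        dt = wet_hours / n_bins
        total = 0.0
        for k in range(n_bins):
            u = (k + 0.5) / n_bins                 # midpoint quantile
            i = -mean_i * math.log(1.0 - u)        # exponential quantile function
            total += max(i - f_cap, 0.0) * dt      # infiltration-excess runoff
        return total

    # 40 mm over 8 wet hours against a 5 mm/h infiltration capacity:
    q_avg = runoff_daily_average(40.0, 8.0, 5.0)   # uniform intensity -> no runoff
    q_cdf = runoff_distributed(40.0, 8.0, 5.0)     # intensity peaks -> runoff
    ```

    The daily-average intensity (5 mm/h) never exceeds the capacity, so the lumped model predicts no runoff at all; the distributed calculation captures the nonlinear contribution of short high-intensity bursts while still requiring only daily forcing data.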

  10. Look-up-table approach for leaf area index retrieval from remotely sensed data based on scale information

    Science.gov (United States)

    Zhu, Xiaohua; Li, Chuanrong; Tang, Lingli

    2018-03-01

    Leaf area index (LAI) is a key structural characteristic of vegetation and plays a significant role in global change research. Several methods and remotely sensed data sources have been evaluated for LAI estimation. This study aimed to evaluate the suitability of the look-up-table (LUT) approach for crop LAI retrieval from Satellite Pour l'Observation de la Terre (SPOT)-5 data and to establish an LUT approach for LAI inversion based on scale information. The LAI inversion result was validated by in situ LAI measurements, indicating that the LUT generated with the PROSAIL (PROSPECT + SAIL: leaf optical properties spectra + scattering by arbitrarily inclined leaves) model was suitable for crop LAI estimation, with a root mean square error (RMSE) of ~0.31 m2/m2 and a determination coefficient (R2) of 0.65. The scale effect of crop LAI was analyzed based on Taylor expansion theory, indicating that when the SPOT data were aggregated over 200 × 200 pixel blocks, the relative error became significant, at 13.7%. Finally, an LUT method integrated with scale information is proposed in this article, improving the inversion accuracy to an RMSE of 0.20 m2/m2 and an R2 of 0.83.
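    The LUT inversion itself reduces to a nearest-spectrum search. The sketch below substitutes an invented toy forward model for PROSAIL (a real LUT would be generated by the radiative transfer code): build a table of (LAI, reflectance) pairs, then return the LAI whose simulated spectrum minimizes the RMSE to the observation.

    ```python
    import math

    def toy_forward(lai):
        """Toy canopy reflectance in two bands (red, NIR) as a function of LAI.
        Stand-in for PROSAIL; coefficients are invented but qualitatively right:
        red reflectance falls and NIR rises as the canopy closes."""
        red = 0.30 * math.exp(-0.6 * lai) + 0.03
        nir = 0.45 * (1.0 - math.exp(-0.5 * lai)) + 0.10
        return red, nir

    # Build the look-up table over a plausible LAI range (0 to 7, step 0.05):
    lut = [(lai / 100.0, toy_forward(lai / 100.0)) for lai in range(0, 701, 5)]

    def invert(obs_red, obs_nir):
        """Return the LUT entry whose spectrum best matches the observation."""
        def cost(entry):
            lai, (r, n) = entry
            return ((r - obs_red) ** 2 + (n - obs_nir) ** 2) ** 0.5
        return min(lut, key=cost)[0]

    # Simulate an observation from a crop with LAI = 2.5 and recover it:
    r_obs, n_obs = toy_forward(2.5)
    lai_hat = invert(r_obs, n_obs)
    ```

    In practice the LUT is built once per sensor configuration, and the scale information enters by adjusting which LUT (or which aggregation of spectra) is matched at each resolution.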

  11. Thermo-mechanical behaviour modelling of particle fuels using a multi-scale approach

    International Nuclear Information System (INIS)

    Blanc, V.

    2009-12-01

    Particle fuels are made of a few thousand spheres, one millimetre in diameter, composed of uranium oxide coated with confinement layers and embedded in a graphite matrix to form the fuel element. The aim of this study is to develop a new simulation tool for the thermo-mechanical behaviour of these fuels under irradiation, able to finely predict the local loadings on the particles. We chose to use the square finite element (FE²) method, in which two different discretization scales are used: a macroscopic homogeneous structure whose properties at each integration point are computed on a second, heterogeneous microstructure, the Representative Volume Element (RVE). The first part of this work concerns the definition of this RVE. A morphological indicator based on the minimal distance between sphere centres permits the selection of random sets of microstructures. The elastic macroscopic response of the RVE, computed by finite elements, has been compared to an analytical model. Thermal and mechanical representativeness indicators of local loadings have been built from the particle failure modes. A statistical study of these criteria on a hundred RVEs showed the importance of choosing a representative microstructure. To this end, an empirical model linking the morphological indicator to the mechanical indicator has been developed. The second part of the work deals with the two-scale transition method, which is based on periodic homogenization. Considering a linear thermal problem with a heat source under steady-state conditions, it is shown that the heterogeneity of the heat source requires a second-order method to finely localize the thermal field. The nonlinear mechanical problem has been treated using the iterative Cast3M algorithm, substituting a finite element computation on the RVE for the integration of the behaviour law. This algorithm has been validated and coupled with the thermal resolution in order to compute an irradiation loading. A computation on a complete fuel element
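    A one-dimensional analogue conveys what periodic homogenization computes at each macroscopic integration point (the conductivity values below are illustrative, not the fuel's actual properties): the effective property of the microstructure generally differs from a naive volume average; for two phases loaded in series it is the harmonic mean, not the arithmetic mean.

    ```python
    # 1D analogue of homogenization: layered two-phase microstructure in series.
    def harmonic_mean_conductivity(k1, f1, k2, f2):
        """Effective conductivity of phases in series (volume fractions f1 + f2 = 1)."""
        return 1.0 / (f1 / k1 + f2 / k2)

    def arithmetic_mean_conductivity(k1, f1, k2, f2):
        """Naive volume average, valid only for phases in parallel."""
        return f1 * k1 + f2 * k2

    # Illustrative values: a poorly conducting oxide phase (~3 W/m/K) dispersed
    # in a well-conducting graphite-like matrix (~100 W/m/K), fractions 40/60:
    k_eff_series = harmonic_mean_conductivity(3.0, 0.4, 100.0, 0.6)
    k_eff_naive = arithmetic_mean_conductivity(3.0, 0.4, 100.0, 0.6)
    ```

    The series arrangement is dominated by the resistive phase (about 7 W/m/K versus 61 W/m/K for the naive average), which is why the macroscopic model must query a microstructure computation, rather than a simple mixing rule, at each integration point.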

  12. A cross-scale approach to understand drought-induced variability of sagebrush ecosystem productivity

    Science.gov (United States)

    Assal, T.; Anderson, P. J.

    2016-12-01

    Sagebrush (Artemisia spp.) mortality has recently been reported in the Upper Green River Basin (Wyoming, USA) of the sagebrush steppe of western North America. Numerous causes have been suggested, but recent drought (2012-13) is the likely mechanism of mortality in this water-limited ecosystem which provides critical habitat for many species of wildlife. An understanding of the variability in patterns of productivity with respect to climate is essential to exploit landscape scale remote sensing for detection of subtle changes associated with mortality in this sparse, uniformly vegetated ecosystem. We used the standardized precipitation index to characterize drought conditions and Moderate Resolution Imaging Spectroradiometer (MODIS) satellite imagery (250-m resolution) to characterize broad characteristics of growing season productivity. We calculated per-pixel growing season anomalies over a 16-year period (2000-2015) to identify the spatial and temporal variability in productivity. Metrics derived from Landsat satellite imagery (30-m resolution) were used to further investigate trends within anomalous areas at local scales. We found evidence to support an initial hypothesis that antecedent winter drought was most important in explaining reduced productivity. The results indicate drought effects were inconsistent over space and time. MODIS derived productivity deviated by more than four standard deviations in heavily impacted areas, but was well within the interannual variability in other areas. Growing season anomalies highlighted dramatic declines in productivity during the 2012 and 2013 growing seasons. However, large negative anomalies persisted in other areas during the 2014 growing season, indicating lag effects of drought. We are further investigating if the reduction in productivity is mediated by local biophysical properties. Our analysis identified spatially explicit patterns of ecosystem properties altered by severe drought which are consistent with

  13. A compact to revitalise large-scale irrigation systems: A ‘theory of change’ approach

    Directory of Open Access Journals (Sweden)

    Bruce A. Lankford

    2016-02-01

    Full Text Available In countries with transitional economies such as those found in South Asia, large-scale irrigation systems (LSIS) with a history of public ownership account for about 115 million ha (Mha), or approximately 45% of their total area under irrigation. In terms of the global area of irrigation (320 Mha for all countries), LSIS are estimated at 130 Mha or 40% of irrigated land. These systems can potentially deliver significant local, regional and global benefits in terms of food, water and energy security, employment, economic growth and ecosystem services. For example, primary crop production is conservatively valued at about US$355 billion. However, efforts to enhance these benefits and reform the sector have been costly and outcomes have been underwhelming and short-lived. We propose the application of a 'theory of change' (ToC) as a foundation for promoting transformational change in large-scale irrigation centred upon a 'global irrigation compact' that promotes new forms of leadership, partnership and ownership (LPO). The compact argues that LSIS can change by switching away from the current channelling of aid finances controlled by government irrigation agencies. Instead it is for irrigators, closely partnered by private, public and NGO advisory and regulatory services, to develop strong leadership models and to find new compensatory partnerships with cities and other river basin neighbours. The paper summarises key assumptions for change in the LSIS sector including the need to initially test this change via a handful of volunteer systems. Our other key purpose is to demonstrate a ToC template by which large-scale irrigation policy can be better elaborated and discussed.

  14. "Non-cold" dark matter at small scales: a general approach

    Science.gov (United States)

    Murgia, R.; Merle, A.; Viel, M.; Totzauer, M.; Schneider, A.

    2017-11-01

    Structure formation at small cosmological scales provides an important frontier for dark matter (DM) research. Scenarios with small DM particle masses, large momenta or hidden interactions tend to suppress the gravitational clustering at small scales. The details of this suppression depend on the DM particle nature, allowing for a direct link between DM models and astrophysical observations. However, most of the astrophysical constraints obtained so far refer to a very specific shape of the power suppression, corresponding to thermal warm dark matter (WDM), i.e., candidates with a Fermi-Dirac or Bose-Einstein momentum distribution. In this work we introduce a new analytical fitting formula for the power spectrum, which is simple yet flexible enough to reproduce the clustering signal of large classes of non-thermal DM models, which are not at all adequately described by the oversimplified notion of WDM. We show that the formula is able to fully cover the parameter space of sterile neutrinos (whether resonantly produced or from particle decay), mixed cold and warm models, fuzzy dark matter, as well as other models suggested by effective theory of structure formation (ETHOS). Based on this fitting formula, we perform a large suite of N-body simulations and we extract important nonlinear statistics, such as the matter power spectrum and the halo mass function. Finally, we present first preliminary astrophysical constraints, based on linear theory, from both the number of Milky Way satellites and the Lyman-α forest. This paper is a first step towards a general and comprehensive modeling of small-scale departures from the standard cold DM model.
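
A generalized transfer-function fit of the kind described can be sketched with a three-parameter form T(k) = [1 + (αk)^β]^γ, which reduces to the classic thermal-WDM fit for particular (β, γ). This is an illustration of the general shape only; the parameter names and values below are assumptions, not the paper's published fit.

```python
import numpy as np

def transfer(k, alpha, beta, gamma):
    """Generalized suppression of the linear power spectrum:
    T(k) = [1 + (alpha*k)**beta]**gamma, with T -> 1 at large scales."""
    return (1.0 + (alpha * k) ** beta) ** gamma

def suppressed_power(k, p_cdm, alpha, beta, gamma):
    """Non-cold DM power spectrum as the CDM spectrum times T(k)^2."""
    return p_cdm * transfer(k, alpha, beta, gamma) ** 2

k = np.logspace(-1, 2, 50)   # wavenumbers (h/Mpc), illustrative range
p_cdm = k ** -1.5            # placeholder CDM spectrum, not a real cosmology
p_ncdm = suppressed_power(k, p_cdm, alpha=0.05, beta=2.24, gamma=-4.46)
```

With γ < 0 the fit leaves large scales untouched and suppresses power progressively toward large k, which is the qualitative behavior the astrophysical constraints probe.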

  15. An Automated Approach to Map Winter Cropped Area of Smallholder Farms across Large Scales Using MODIS Imagery

    Directory of Open Access Journals (Sweden)

    Meha Jain

    2017-06-01

    Fine-scale agricultural statistics are an important tool for understanding trends in food production and their associated drivers, yet these data are rarely collected in smallholder systems. These statistics are particularly important for smallholder systems given the large amount of fine-scale heterogeneity in production that occurs in these regions. To overcome the lack of ground data, satellite data are often used to map fine-scale agricultural statistics. However, doing so is challenging for smallholder systems because of (1) complex sub-pixel heterogeneity; (2) little to no available calibration data; and (3) high amounts of cloud cover, as most smallholder systems occur in the tropics. We develop an automated method termed the MODIS Scaling Approach (MSA) to map smallholder cropped area across large spatial and temporal scales using MODIS Enhanced Vegetation Index (EVI) satellite data. We use this method to map winter cropped area, a key measure of cropping intensity, across the Indian subcontinent annually from 2000–2001 to 2015–2016. The MSA defines a pixel as cropped based on winter growing season phenology and scales the percent of cropped area within a single MODIS pixel based on observed EVI values at peak phenology. We validated the result with eleven high-resolution scenes (spatial scale of 5 × 5 m2 or finer) that we classified into cropped versus non-cropped maps using training data collected by visual inspection of the high-resolution imagery. The MSA had moderate to high accuracies when validated using these eleven scenes across India (R2 ranging between 0.19 and 0.89, with an overall R2 of 0.71 across all sites). This method requires no calibration data, making it easy to implement across large spatial and temporal scales, with 100% spatial coverage due to the compositing of EVI to generate cloud-free data sets. The accuracies found in this study are similar to those of other studies that map crop production using automated methods
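
The scaling step described, converting peak-season EVI into a sub-pixel cropped fraction, can be sketched as a clipped linear ramp between two thresholds. The thresholds and pixel area below are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def winter_cropped_fraction(evi_peak, evi_crop_min, evi_full_crop):
    """Sub-pixel cropped fraction from peak-phenology EVI: 0 below
    `evi_crop_min`, 1 at or above `evi_full_crop`, linear in between."""
    frac = (evi_peak - evi_crop_min) / (evi_full_crop - evi_crop_min)
    return np.clip(frac, 0.0, 1.0)

peak = np.array([0.15, 0.35, 0.55, 0.75])  # peak EVI for four MODIS pixels
frac = winter_cropped_fraction(peak, evi_crop_min=0.2, evi_full_crop=0.6)
cropped_ha = frac * 6.25  # a ~250 m MODIS pixel covers about 6.25 ha
```

Summing `cropped_ha` over a district would then give a district-level winter cropped area estimate without any ground calibration data.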

  16. Multiscale approach including microfibril scale to assess elastic constants of cortical bone based on neural network computation and homogenization method.

    Science.gov (United States)

    Barkaoui, Abdelwahed; Chamekh, Abdessalem; Merzouki, Tarek; Hambli, Ridha; Mkaddem, Ali

    2014-03-01

    The complexity and heterogeneity of bone tissue require multiscale modeling to understand its mechanical behavior and its remodeling mechanisms. In this paper, a novel multiscale hierarchical approach including microfibril scale based on hybrid neural network (NN) computation and homogenization equations was developed to link nanoscopic and macroscopic scales to estimate the elastic properties of human cortical bone. The multiscale model is divided into three main phases: (i) in step 0, the elastic constants of collagen-water and mineral-water composites are calculated by averaging the upper and lower Hill bounds; (ii) in step 1, the elastic properties of the collagen microfibril are computed using a trained NN simulation. Finite element calculation is performed at nanoscopic levels to provide a database to train an in-house NN program; and (iii) in steps 2-10 from fibril to continuum cortical bone tissue, homogenization equations are used to perform the computation at the higher scales. The NN outputs (elastic properties of the microfibril) are used as inputs for the homogenization computation to determine the properties of mineralized collagen fibril. The mechanical and geometrical properties of bone constituents (mineral, collagen, and cross-links) as well as the porosity were taken into consideration. This paper aims to predict analytically the effective elastic constants of cortical bone by modeling its elastic response at these different scales, ranging from the nanostructural to mesostructural levels. Our findings of the lowest scale's output were well integrated with the other higher levels and serve as inputs for the next higher scale modeling. Good agreement was obtained between our predicted results and literature data. Copyright © 2013 John Wiley & Sons, Ltd.
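
The "averaging the upper and lower Hill bounds" in step 0 refers to the Voigt-Reuss-Hill estimate: the mean of the iso-strain (Voigt) upper bound and iso-stress (Reuss) lower bound for a composite's effective modulus. The sketch below uses illustrative moduli and volume fractions, not the paper's material data.

```python
import numpy as np

def voigt_reuss_hill(moduli, fractions):
    """Hill estimate of an effective elastic modulus: the average of the
    Voigt (arithmetic-mean, upper) and Reuss (harmonic-mean, lower) bounds."""
    moduli = np.asarray(moduli, dtype=float)
    fractions = np.asarray(fractions, dtype=float)
    voigt = np.sum(fractions * moduli)        # iso-strain upper bound
    reuss = 1.0 / np.sum(fractions / moduli)  # iso-stress lower bound
    return 0.5 * (voigt + reuss)

# Illustrative stiff-mineral/water mixture (GPa); values are assumptions
e_eff = voigt_reuss_hill([114.0, 2.3], [0.6, 0.4])
```

The Hill value always lies between the two bounds, which is why it is a common first estimate before more detailed homogenization.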

  17. Systems approach to monitoring and evaluation guides scale up of the Standard Days Method of family planning in Rwanda

    Science.gov (United States)

    Igras, Susan; Sinai, Irit; Mukabatsinda, Marie; Ngabo, Fidele; Jennings, Victoria; Lundgren, Rebecka

    2014-01-01

    There is no guarantee that a successful pilot program introducing a reproductive health innovation can also be expanded successfully to the national or regional level, because the scaling-up process is complex and multilayered. This article describes how a successful pilot program to integrate the Standard Days Method (SDM) of family planning into existing Ministry of Health services was scaled up nationally in Rwanda. Much of the success of the scale-up effort was due to systematic use of monitoring and evaluation (M&E) data from several sources to make midcourse corrections. Four lessons learned illustrate this crucially important approach. First, ongoing M&E data showed that provider training protocols and client materials that worked in the pilot phase did not work at scale; therefore, we simplified these materials to support integration into the national program. Second, triangulation of ongoing monitoring data with national health facility and population-based surveys revealed serious problems in supply chain mechanisms that affected SDM (and the accompanying CycleBeads client tool) availability and use; new procedures for ordering supplies and monitoring stockouts were instituted at the facility level. Third, supervision reports and special studies revealed that providers were imposing unnecessary medical barriers to SDM use; refresher training and revised supervision protocols improved provider practices. Finally, informal environmental scans, stakeholder interviews, and key events timelines identified shifting political and health policy environments that influenced scale-up outcomes; ongoing advocacy efforts are addressing these issues. The SDM scale-up experience in Rwanda confirms the importance of monitoring and evaluating programmatic efforts continuously, using a variety of data sources, to improve program outcomes. PMID:25276581

  18. Sub-bottom profiling for large-scale maritime archaeological survey An experience-based approach

    DEFF Research Database (Denmark)

    Grøn, Ole; Boldreel, Lars Ole

    2013-01-01

    and wrecks partially or wholly embedded in the sea-floor sediments demands the application of high-resolution sub-bottom profilers. This paper presents a strategy for the cost-effective large-scale mapping of unknown sediment-embedded sites such as submerged Stone Age settlements or wrecks, based on sub...... of the submerged cultural heritage. Elements such as archaeological wreck sites exposed on the sea floor are mapped using side-scan and multi-beam techniques. These can also provide information on bathymetric patterns representing potential Stone Age settlements, whereas the detection of such archaeological sites

  19. National Radiological Fixed Lab Data

    Data.gov (United States)

    U.S. Environmental Protection Agency — The National Radiological Fixed Laboratory Data Asset includes data produced in support of various clients such as other EPA offices, EPA Regional programs, DOE,...

  20. Can mushrooms fix atmospheric nitrogen?

    Indian Academy of Sciences (India)

    Unknown

    Introduction. Rhizobium is a genus of symbiotic N2-fixing soil bacteria that induce ... To produce biofilm cultures, a 2 × 2 cm yeast mannitol agar (YMA) slab was .... determination of antibiotic susceptibilities of bacterial biofilms; J. Clin. Microbiol.

  1. Elevated Fixed Platform Test Facility

    Data.gov (United States)

    Federal Laboratory Consortium — The Elevated Fixed Platform (EFP) is a helicopter recovery test facility located at Lakehurst, NJ. It consists of a 60 by 85 foot steel and concrete deck built atop...

  2. Authormagic – An Approach to Author Disambiguation in Large-Scale Digital Libraries

    CERN Document Server

    Weiler, Henning; Mele, Salvatore

    2011-01-01

    A collaboration of leading research centers in the field of High Energy Physics (HEP) has built INSPIRE, a novel information infrastructure, which comprises the entire corpus of about one million documents produced within the discipline, including a rich set of metadata, citation information and half a million full-text documents, and offers a unique opportunity for author disambiguation strategies. The presented approach features extended metadata comparison metrics and a three-step unsupervised graph clustering technique. The algorithm aided in identifying 200'000 individuals from 6'500'000 author signatures. Preliminary tests based on knowledge of external experts and a pilot of a crowd-sourcing system show a success rate of more than 96% within the selected test cases. The obtained author clusters serve as a recommendation for INSPIRE users to further clean the publication list in a crowd-sourced approach.
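
The pipeline shape, pairwise metadata similarity between author signatures followed by clustering, can be illustrated with a toy version. The similarity weights, threshold, and single-pass grouping below are invented for illustration; the actual INSPIRE system uses richer metrics and a three-step unsupervised graph clustering.

```python
from difflib import SequenceMatcher

def similarity(sig_a, sig_b):
    """Toy metadata similarity: fuzzy name match plus affiliation agreement."""
    name = SequenceMatcher(None, sig_a["name"], sig_b["name"]).ratio()
    affil = 1.0 if sig_a["affiliation"] == sig_b["affiliation"] else 0.0
    return 0.7 * name + 0.3 * affil

def cluster(signatures, threshold=0.8):
    """Greedy grouping: join a cluster only if similar to all its members."""
    clusters = []
    for sig in signatures:
        for c in clusters:
            if all(similarity(sig, other) >= threshold for other in c):
                c.append(sig)
                break
        else:
            clusters.append([sig])
    return clusters

sigs = [
    {"name": "Smith, J.", "affiliation": "CERN"},
    {"name": "Smith, John", "affiliation": "CERN"},
    {"name": "Smith, J.", "affiliation": "MIT"},
]
groups = cluster(sigs)  # first two signatures merge; the MIT one stays apart
```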

  3. Modeling of scale-dependent bacterial growth by chemical kinetics approach.

    Science.gov (United States)

    Martínez, Haydee; Sánchez, Joaquín; Cruz, José-Manuel; Ayala, Guadalupe; Rivera, Marco; Buhse, Thomas

    2014-01-01

    We applied the so-called chemical kinetics approach to complex bacterial growth patterns that were dependent on the liquid-surface-area-to-volume ratio (SA/V) of the bacterial cultures. The kinetic modeling was based on current experimental knowledge in terms of autocatalytic bacterial growth, its inhibition by the metabolite CO2, and the relief of inhibition through the physical escape of the inhibitor. The model quantitatively reproduces kinetic data of SA/V-dependent bacterial growth and can discriminate between differences in the growth dynamics of enteropathogenic E. coli, E. coli JM83, and Salmonella typhimurium on one hand and Vibrio cholerae on the other hand. Furthermore, the data fitting procedures allowed predictions about the velocities of the involved key processes and the potential behavior in an open-flow bacterial chemostat, revealing an oscillatory approach to the stationary states.
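
The kinetic scheme described, autocatalytic growth, inhibition by CO2, and relief of inhibition through physical escape of the inhibitor, can be sketched as a small ODE system integrated with forward Euler. All rate constants and the inhibition form below are illustrative assumptions, not the paper's fitted values.

```python
def simulate(sa_v, k_growth=1.0, k_co2=0.5, k_inhib=2.0, k_escape=1.0,
             dt=0.01, steps=800):
    """Biomass B grows autocatalytically on nutrient N, inhibited by
    dissolved CO2 C; C escapes at a rate proportional to the culture's
    surface-area-to-volume ratio (sa_v). Returns final biomass."""
    B, N, C = 0.01, 1.0, 0.0
    for _ in range(steps):
        growth = k_growth * B * N / (1.0 + k_inhib * C)  # inhibited autocatalysis
        dC = k_co2 * growth - k_escape * sa_v * C        # production vs. escape
        B += growth * dt
        N -= growth * dt
        C += dC * dt
    return B

# A shallow culture (high SA/V) vents CO2 faster, so growth is less inhibited
b_low = simulate(sa_v=0.1)
b_high = simulate(sa_v=1.0)
```

The qualitative SA/V dependence, more biomass at higher SA/V over the same time span, is the behavior the chemical-kinetics model reproduces.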

  5. Geoscience Meets Social Science: A Flexible Data Driven Approach for Developing High Resolution Population Datasets at Global Scale

    Science.gov (United States)

    Rose, A.; McKee, J.; Weber, E.; Bhaduri, B. L.

    2017-12-01

    Leveraging decades of expertise in population modeling, and in response to growing demand for higher resolution population data, Oak Ridge National Laboratory is now generating LandScan HD at global scale. LandScan HD is conceived as a 90m resolution population distribution where modeling is tailored to the unique geography and data conditions of individual countries or regions by combining social, cultural, physiographic, and other information with novel geocomputation methods. Similarities among these areas are exploited in order to leverage existing training data and machine learning algorithms to rapidly scale development. Drawing on ORNL's unique set of capabilities, LandScan HD adapts highly mature population modeling methods developed for LandScan Global and LandScan USA, settlement mapping research and production in high-performance computing (HPC) environments, land use and neighborhood mapping through image segmentation, and facility-specific population density models. Adopting a flexible methodology to accommodate different geographic areas, LandScan HD accounts for the availability, completeness, and level of detail of relevant ancillary data. Beyond core population and mapped settlement inputs, these factors determine the model complexity for an area: depending on the available data, the model for a given area may follow a simple top-down approach, a more detailed bottom-up approach, or a hybrid of the two.
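
The "top-down" case can be sketched as a dasymetric allocation: a known administrative population total is distributed across settlement cells in proportion to ancillary weights. The weights and totals below are invented for illustration, not LandScan HD's actual inputs.

```python
def distribute(total_pop, weights):
    """Allocate an admin-unit population total across cells in proportion
    to ancillary weights (e.g., mapped settlement intensity)."""
    wsum = sum(weights)
    return [total_pop * w / wsum for w in weights]

# Four hypothetical 90 m cells in one admin unit; weight 0 = no settlement
cells = distribute(1000, [0.0, 2.0, 5.0, 3.0])  # -> [0.0, 200.0, 500.0, 300.0]
```

Because the allocation is proportional, the cell values always sum back to the administrative total, which keeps the gridded product consistent with census counts.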

  6. Segmenting healthcare terminology users: a strategic approach to large scale evolutionary development.

    Science.gov (United States)

    Price, C; Briggs, K; Brown, P J

    1999-01-01

    Healthcare terminologies have become larger and more complex, aiming to support a diverse range of functions across the whole spectrum of healthcare activity. Prioritization of development, implementation and evaluation can be achieved by regarding the "terminology" as an integrated system of content-based and functional components. Matching these components to target segments within the healthcare community, supports a strategic approach to evolutionary development and provides essential product differentiation to enable terminology providers and systems suppliers to focus on end-user requirements.

  7. Multi-Scale Approach to Understanding Source-Sink Dynamics of Amphibians

    Science.gov (United States)

    2015-12-01

    spotted salamander, A. maculatum) at Fort Leonard Wood (FLW), Missouri. We used a multi-faceted approach in which we combined ecological, genetic...spotted salamander, A. maculatum) at Fort Leonard Wood, Missouri through a combination of intensive ecological field studies, genetic analyses, and...spatial demographic networks to identify optimal locations for wetland construction and restoration. Ecological Applications. Walls, S. C., Ball, L. C

  8. Approaches for Scaling Back the Defense Department’s Budget Plans

    Science.gov (United States)

    2013-03-01

    of an overall strategy for curtailing defense costs, or some variation of that approach could be adopted instead. (Ways in which the general... tempo (activities such as steaming days for Navy ships and flying hours for the services' aviation components) of the units that remained in the...Mosher and Matthew S. Goldberg. Adam Talaber analyzed the costs to operate individual military units. David Berteau of the Center for Strategic and

  9. MURI: An Integrated Multi-Scale Approach for Understanding Ion Transport in Complex Heterogeneous Organic Materials

    Science.gov (United States)

    2017-09-30

    Thomas A. Witten, Matthew W. Liberatore, and Andrew M. Herring, Department of Chemical and Biological Engineering and Department of Chemistry ... (2) To fundamentally understand, with combined experimental and computational approaches, the interplay of chemistry, processing, and morphology on... Society, The International Society of Electrochemistry and The American Institute of Chemical Engineers to give oral and poster presentations. In

  10. Constructivist and Behaviorist Approaches: Development and Initial Evaluation of a Teaching Practice Scale for Introductory Statistics at the College Level

    Directory of Open Access Journals (Sweden)

    Rossi A. Hassad

    2011-07-01

    This study examined the teaching practices of 227 college instructors of introductory statistics from the health and behavioral sciences. Using primarily multidimensional scaling (MDS) techniques, a two-dimensional, 10-item teaching-practice scale, the TISS (Teaching of Introductory Statistics Scale), was developed. The two dimensions (subscales) are characterized as constructivist and behaviorist; they are orthogonal. Criterion validity of the TISS was established in relation to instructors' attitude toward teaching, and acceptable levels of reliability were obtained. A significantly higher level of behaviorist practice (less reform-oriented) was reported by instructors from the U.S., as well as instructors with academic degrees in mathematics and engineering, whereas those with membership in professional organizations tended to be more reform-oriented (or constructivist). The TISS, thought to be the first of its kind, will allow the statistics education community to empirically assess and describe the pedagogical approach (teaching practice) of instructors of introductory statistics in the health and behavioral sciences, at the college level, and determine what learning outcomes result from the different teaching-practice orientations. Further research is required in order to be conclusive about the structural and psychometric properties of this scale, including its stability over time.
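
The MDS family of techniques the study draws on can be illustrated with classical (Torgerson) multidimensional scaling, which recovers low-dimensional coordinates from a matrix of pairwise dissimilarities. The toy distance matrix below (four points forming a 2 × 1 rectangle) is invented for illustration, not the study's item data.

```python
import numpy as np

def classical_mds(d, dims=2):
    """Classical MDS: double-center the squared distance matrix and take
    the top eigenvectors, scaled by the square roots of the eigenvalues."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                # Gram matrix of configurations
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:dims]      # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

s5 = np.sqrt(5)
d = np.array([[0, 1, 2, s5],
              [1, 0, s5, 2],
              [2, s5, 0, 1],
              [s5, 2, 1, 0]])
coords = classical_mds(d)  # recovers the rectangle up to rotation/reflection
```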

  11. New approach to small scale power could light up much of the developing world

    Energy Technology Data Exchange (ETDEWEB)

    Brooks, J.

    2011-01-15

    The modern conveniences requiring electricity have been out of reach for almost half of the world's population because they live too far from the grid. Innovative technology combined with creative new business models could significantly improve the quality of life for millions of people. This article discussed a small scale renewable energy system that could ensure that villages all over the world have access to radios, lights, refrigeration and other critical technologies. The article also noted the potential implications in terms of health, education and the general standard of living for millions of people. The basic model involves setting up small solar panels in a good location in a village or on a farm. The panels can be used to charge up equipment that is either on-site or portable. This article described how to achieve economies of scale through mass production of many similar units. The project has been tested in Brazil and a donation to the project of $100,000 will be used to install a solar-powered public infrastructure comprised of water pumping, school and an Internet station. The funds will also be used to provide 70 solar lanterns for children living in two villages on the Rio Tapajos, a tributary to the Amazon near Santarem. 1 fig.

  12. Introduction of an energy efficiency tool for small scale biomass gasifiers – A thermodynamic approach

    International Nuclear Information System (INIS)

    Vakalis, S.; Patuzzi, F.; Baratieri, M.

    2017-01-01

    Highlights: • Analysis of plants for electricity, heat and materials production. • Thermodynamic analysis by using exergy, entransy and statistical entropy. • Extrapolation of a single efficiency index by combining the thermodynamic parameters. • Application of methodology for two monitored small scale gasifiers. - Abstract: Modern gasification plants should be treated as poly-generation facilities because, alongside the production of electricity and heat, valuable or waste material streams are generated. Thus, integrated methods should be introduced in order to account for the full range and the nature of the products. Application of conventional hybrid indicators that convert the output into monetary units or CO_2 equivalents is a source of bias because of the inconsistency of the conversion factors and the unreliability of the available data. Therefore, this study introduces a novel thermodynamic-based method for assessing gasification plant performance by means of exergy, entransy and statistical entropy. A monitoring campaign has been implemented on two small scale gasifiers and the results have been applied to the proposed method. The energy plants are compared with respect to their individual thermodynamic parameters for energy production and materials distribution. In addition, the method returns one single value which is a resultant of all the investigated parameters and is a characteristic value of the overall performance of an energy plant.
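
One standard ingredient of such an exergy-based assessment is that heat is discounted by its Carnot factor relative to the ambient temperature, Ex = Q · (1 − T0/T), while electricity counts as pure exergy. The plant figures below are illustrative assumptions, not the monitored gasifiers' data.

```python
def heat_exergy(q_kw, t_kelvin, t0_kelvin=298.15):
    """Exergy content of a heat stream Q at temperature T, relative to
    ambient T0: Ex = Q * (1 - T0 / T)."""
    return q_kw * (1.0 - t0_kelvin / t_kelvin)

electric_kw = 45.0                    # electricity is pure exergy
heat_kw = heat_exergy(110.0, 358.15)  # district heat delivered at 85 °C
exergy_output = electric_kw + heat_kw
```

This is why a plant's exergy output is much smaller than its energy output: low-temperature heat carries little work potential even when its energy content is large.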

  13. Iterative learning-based decentralized adaptive tracker for large-scale systems: a digital redesign approach.

    Science.gov (United States)

    Tsai, Jason Sheng-Hong; Du, Yan-Yi; Huang, Pei-Hsiang; Guo, Shu-Mei; Shieh, Leang-San; Chen, Yuhua

    2011-07-01

    In this paper, a digital redesign methodology of the iterative learning-based decentralized adaptive tracker is proposed to improve the dynamic performance of sampled-data linear large-scale control systems consisting of N interconnected multi-input multi-output subsystems, so that the system output will follow any trajectory which may not be presented by the analytic reference model initially. To overcome the interference of each sub-system and simplify the controller design, the proposed model reference decentralized adaptive control scheme constructs a decoupled well-designed reference model first. Then, according to the well-designed model, this paper develops a digital decentralized adaptive tracker based on the optimal analog control and prediction-based digital redesign technique for the sampled-data large-scale coupling system. In order to enhance the tracking performance of the digital tracker at specified sampling instants, we apply the iterative learning control (ILC) to train the control input via continual learning. As a result, the proposed iterative learning-based decentralized adaptive tracker not only has robust closed-loop decoupled property but also possesses good tracking performance at both transient and steady state. Besides, evolutionary programming is applied to search for a good learning gain to speed up the learning process of ILC. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
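
The core ILC idea, repeat the same finite-horizon task and update the input from the previous trial's tracking error, can be sketched on a scalar plant. The plant, learning gain, and update law u_{k+1}(t) = u_k(t) + L·e_k(t+1) below are a minimal textbook illustration, not the paper's decentralized large-scale design.

```python
import numpy as np

def run_trial(u, a=0.3, b=0.5):
    """Sampled-data plant x[t+1] = a*x[t] + b*u[t], zero initial state."""
    x = np.zeros(len(u) + 1)
    for t in range(len(u)):
        x[t + 1] = a * x[t] + b * u[t]
    return x

ref = np.sin(np.linspace(0, np.pi, 21))  # desired output trajectory
u = np.zeros(20)
L = 1.0                                  # |1 - L*b| < 1 ensures convergence
errors = []
for _ in range(30):                      # learning iterations (trials)
    x = run_trial(u)
    e = ref[1:] - x[1:]                  # tracking error at sample instants
    errors.append(float(np.max(np.abs(e))))
    u = u + L * e                        # ILC update law
```

After a few dozen trials the stored input sequence drives the output onto the reference, which is the "continual learning" of the control input described above.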

  14. Structural health monitoring using DOG multi-scale space: an approach for analyzing damage characteristics

    Science.gov (United States)

    Guo, Tian; Xu, Zili

    2018-03-01

    Measurement noise is inevitable in practice; thus, it is difficult to identify defects, cracks or damage in a structure while suppressing noise simultaneously. In this work, a novel method is introduced to detect multiple damage in noisy environments. Based on multi-scale space analysis for discrete signals, a method for extracting damage characteristics from the measured displacement mode shape is illustrated. Moreover, the proposed method incorporates a data fusion algorithm to further eliminate measurement noise-based interference. The effectiveness of the method is verified by numerical and experimental methods applied to different structural types. The results demonstrate that there are two advantages to the proposed method. First, damage features are extracted by the difference of the multi-scale representation; this step is taken such that the interference of noise amplification can be avoided. Second, a data fusion technique applied to the proposed method provides a global decision, which retains the damage features while maximally eliminating the uncertainty. Monte Carlo simulations are utilized to validate that the proposed method has a higher accuracy in damage detection.
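
The difference-of-Gaussians (DoG) idea can be sketched on a synthetic mode shape: smoothing at two scales and subtracting cancels the smooth global curvature while a localized dip from damage survives as a peak in the difference. The beam mode, damage model, and noise level below are invented for illustration.

```python
import numpy as np

def gaussian_smooth(signal, sigma):
    """1-D Gaussian smoothing via direct convolution with a truncated kernel."""
    radius = int(4 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(signal, kernel, mode="same")

n = 201
xs = np.linspace(0, 1, n)
mode = np.sin(np.pi * xs)               # first bending mode of a beam
mode[100] -= 0.05                       # small local dip from "damage"
mode += np.random.default_rng(1).normal(0.0, 0.002, n)  # measurement noise

dog = gaussian_smooth(mode, 1.0) - gaussian_smooth(mode, 4.0)
damage_index = int(np.argmax(np.abs(dog[10:-10]))) + 10  # skip edge effects
```

The fine-scale smoothing suppresses the noise, the coarse-scale smoothing removes the global shape, and their difference localizes the damage, which is the multi-scale-space intuition behind the method.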

  15. The use of scale-invariance feature transform approach to recognize and retrieve incomplete shoeprints.

    Science.gov (United States)

    Wei, Chia-Hung; Li, Yue; Gwo, Chih-Ying

    2013-05-01

    Shoeprints left at the crime scene provide valuable information in criminal investigation due to the distinctive patterns in the sole. Those shoeprints are often incomplete and noisy. In this study, the scale-invariant feature transform (SIFT) is proposed and evaluated for recognition and retrieval of partial and noisy shoeprint images. The proposed method first constructs different scale spaces to detect local extrema in the underlying shoeprint images. Those local extrema are considered as useful key points in the image. Next, the features of those key points are extracted to represent their local patterns around key points. Then, the system computes the cross-correlation between the query image and each shoeprint image in the database. Experimental results show that full-size prints and prints from the toe area perform best among all shoeprints. Furthermore, this system also demonstrates its robustness against noise because there is a very slight difference in comparison between original shoeprints and noisy shoeprints. © 2013 American Academy of Forensic Sciences.

  16. Developing and validating the Youth Conduct Problems Scale-Rwanda: a mixed methods approach.

    Directory of Open Access Journals (Sweden)

    Lauren C Ng

    This study developed and validated the Youth Conduct Problems Scale-Rwanda (YCPS-R). Qualitative free listing (n = 74) and key informant interviews (n = 47) identified local conduct problems, which were compared to existing standardized conduct problem scales and used to develop the YCPS-R. The YCPS-R was cognitively tested by 12 youth and caregiver participants, and assessed for test-retest and inter-rater reliability in a sample of 64 youth. Finally, a purposive sample of 389 youth and their caregivers were enrolled in a validity study. Validity was assessed by comparing YCPS-R scores to conduct disorder, which was diagnosed with the Mini International Neuropsychiatric Interview for Children, and functional impairment scores on the World Health Organization Disability Assessment Schedule Child Version. ROC analyses assessed the YCPS-R's ability to discriminate between youth with and without conduct disorder. Qualitative data identified a local presentation of youth conduct problems that did not match previously standardized measures. Therefore, the YCPS-R was developed solely from local conduct problems. Cognitive testing indicated that the YCPS-R was understandable and required little modification. The YCPS-R demonstrated good reliability, construct, criterion, and discriminant validity, and fair classification accuracy. The YCPS-R is a locally-derived measure of Rwandan youth conduct problems that demonstrated good psychometric properties and could be used for further research.
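
The ROC step can be sketched with the rank-sum identity for the area under the ROC curve (AUC): the probability that a randomly chosen case scores higher than a randomly chosen non-case, with ties counted half. The scores and diagnoses below are made up for illustration, not study data.

```python
def auc(scores, labels):
    """AUC via the rank-sum identity; labels are 1 (case) / 0 (non-case)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [12, 9, 7, 7, 5, 3, 2, 1]  # total scale scores for 8 youth
labels = [1, 1, 0, 1, 0, 0, 1, 0]   # 1 = conduct disorder on the interview
area = auc(scores, labels)          # -> 0.78125
```

An AUC near 0.5 means no discrimination and 1.0 means perfect separation; "fair classification accuracy" corresponds to values in between.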

  17. Qualitative approaches to large scale studies and students' achievements in Science and Mathematics - An Australian and Nordic Perspective

    DEFF Research Database (Denmark)

    Davidsson, Eva; Sørensen, Helene

    Large scale studies play an increasing role in educational politics and results from surveys such as TIMSS and PISA are extensively used in media debates about students' knowledge in science and mathematics. Although this debate does not usually shed light on the more extensive quantitative...... analyses, there is a lack of investigations which aim at exploring what is possible to conclude or not to conclude from these analyses. There is also a need for more detailed discussions about what trends could be discerned concerning students' knowledge in science and mathematics. The aim of this symposium...... is therefore to highlight and discuss different approaches to how data from large scale studies could be used for additional analyses in order to increase our understanding of students' knowledge in science and mathematics, but also to explore possible longitudinal trends, hidden in the data material...

  18. Fixed point of the parabolic renormalization operator

    CERN Document Server

    Lanford III, Oscar E

    2014-01-01

    This monograph grew out of the authors' efforts to provide a natural geometric description for the class of maps invariant under parabolic renormalization and for the Inou-Shishikura fixed point itself as well as to carry out a computer-assisted study of the parabolic renormalization operator. It introduces a renormalization-invariant class of analytic maps with a maximal domain of analyticity and rigid covering properties and presents a numerical scheme for computing parabolic renormalization of a germ, which is used to compute the Inou-Shishikura renormalization fixed point.   Inside, readers will find a detailed introduction into the theory of parabolic bifurcation,  Fatou coordinates, Écalle-Voronin conjugacy invariants of parabolic germs, and the definition and basic properties of parabolic renormalization.   The systematic view of parabolic renormalization developed in the book and the numerical approach to its study will be interesting to both experts in the field as well as graduate students wishi...

  19. Emotion regulation in patients with rheumatic diseases: validity and responsiveness of the Emotional Approach Coping Scale (EAC)

    Directory of Open Access Journals (Sweden)

    Mowinckel Petter

    2009-09-01

    Abstract Background Chronic rheumatic diseases are painful conditions which are not entirely controllable and can place high emotional demands on individuals. Increasing evidence has shown that emotion regulation in terms of actively processing and expressing disease-related emotions is likely to promote positive adjustment in patients with chronic diseases. The Emotional Approach Coping Scale (EAC) measures active attempts to acknowledge, understand, and express emotions. Although tested in other clinical samples, the EAC has not been validated for patients with rheumatic diseases. This study evaluated the data quality, internal consistency reliability, validity and responsiveness of the Norwegian version of the EAC for this group of patients. Methods 220 patients with different rheumatic diseases were included in a cross-sectional study in which data quality and internal consistency were assessed. Construct validity was assessed through comparisons with the Brief Approach/Avoidance Coping Questionnaire (BACQ) and the General Health Questionnaire (GHQ-20). Responsiveness was tested in a longitudinal pretest-posttest study of two different coping interventions, the Vitality Training Program (VTP) and a Self-Management Program (SMP). Results The EAC had low levels of missing data. Results from principal component analysis supported two subscales, Emotional Expression and Emotional Processing, which had high Cronbach's alphas of 0.90 and 0.92, respectively. The EAC had correlations with approach-oriented items in the BACQ in the range 0.17-0.50. The EAC Expression scale had a significant negative correlation with the GHQ-20 of -0.13. As hypothesized, participation in the VTP significantly improved EAC scores, indicating responsiveness to change. Conclusion The EAC is an acceptable and valid instrument for measuring emotional processing and expression in patients with rheumatic diseases. The EAC scales were responsive to change in an intervention
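
The internal-consistency statistic reported above, Cronbach's alpha, is k/(k−1) · (1 − Σ item variances / variance of total score). The item responses below are invented to show the computation, not the study's data.

```python
def cronbach_alpha(items):
    """items: one inner list of responses per item, same respondents in order."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

items = [[3, 4, 2, 5, 4],  # item 1 answered by 5 respondents
         [3, 5, 2, 4, 4],
         [2, 4, 3, 5, 3]]
alpha = cronbach_alpha(items)  # high covariation among items -> alpha near 0.87
```

Alphas of 0.90-0.92, as reported for the two EAC subscales, indicate that the items within each subscale covary strongly.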

  20. A refined regional modeling approach for the Corn Belt - Experiences and recommendations for large-scale integrated modeling

    Science.gov (United States)

    Panagopoulos, Yiannis; Gassman, Philip W.; Jha, Manoj K.; Kling, Catherine L.; Campbell, Todd; Srinivasan, Raghavan; White, Michael; Arnold, Jeffrey G.

    2015-05-01

    Nonpoint source pollution from agriculture is the main source of nitrogen and phosphorus in the stream systems of the Corn Belt region in the Midwestern US. This region is comprised of two large river basins, the intensely row-cropped Upper Mississippi River Basin (UMRB) and Ohio-Tennessee River Basin (OTRB), which are considered the key contributing areas for the Northern Gulf of Mexico hypoxic zone according to the US Environmental Protection Agency. Thus, in this area it is of utmost importance to ensure that intensive agriculture for food, feed and biofuel production can coexist with a healthy water environment. To address these objectives within a river basin management context, an integrated modeling system has been constructed with the hydrologic Soil and Water Assessment Tool (SWAT) model, capable of estimating river basin responses to alternative cropping and/or management strategies. To improve modeling performance compared to previous studies and provide a spatially detailed basis for scenario development, this SWAT Corn Belt application incorporates a greatly refined subwatershed structure based on 12-digit hydrologic units or 'subwatersheds' as defined by the US Geological Survey. The model setup, calibration and validation are time-demanding and challenging tasks for these large systems, given the intensive data requirements at this scale and the need to ensure the reliability of flow and pollutant load predictions at multiple locations. Thus, the objectives of this study are both to comprehensively describe this large-scale modeling approach, providing estimates of pollution and crop production in the region, and to present strengths and weaknesses of integrated modeling at such a large scale along with how it can be improved on the basis of the current modeling structure and results. The predictions were based on a semi-automatic hydrologic calibration approach for large-scale and spatially detailed modeling studies, with the use of the Sequential

  1. MEASURING GROCERY STORES SERVICE QUALITY IN INDONESIA: A RETAIL SERVICE QUALITY SCALE APPROACH

    Directory of Open Access Journals (Sweden)

    Leonnard Leonnard

    2017-12-01

    The growing number of modern grocery stores in Indonesia makes it a challenge for each store to maintain and increase its number of consumers. The success of maintaining and improving service quality will affect long-term profitability and business sustainability. Therefore, in this study, we examined consumer perceptions of service quality in one of the modern grocery stores in Indonesia. Data were collected from 387 consumers of grocery stores in Jakarta, Bogor, Depok, Bekasi, Cibubur, and Subang. Structural Equation Modeling (SEM) through maximum likelihood and Bayesian estimation was employed to analyze the data. The findings indicated that the five indicators of the retail service quality scale, consisting of physical aspects, reliability, personal interactions, problem solving and policies, provided valid multi-item instruments for measuring consumer perceptions of service quality in grocery stores.

  2. Reexamining the domain of hypochondriasis: comparing the Illness Attitudes Scale to other approaches.

    Science.gov (United States)

    Fergus, Thomas A; Valentiner, David P

    2009-08-01

    The present study examined the utility of the Illness Attitudes Scale (IAS; [Kellner, R. (1986). Somatization and hypochondriasis. New York: Praeger Publishers]) in a non-clinical college sample (N=235). Relationships among five recently identified IAS dimensions (fear of illness and pain, symptom effects, treatment experience, disease conviction, and health habits) and self-report measures of several anxiety-related constructs (health anxiety, body vigilance, intolerance of uncertainty, anxiety sensitivity, and non-specific anxiety symptoms) were examined. In addition, this study investigated the incremental validity of the IAS dimensions in predicting medical utilization. The fear of illness and pain dimension and the symptom effects dimension consistently shared stronger relations with the anxiety-related constructs compared to the other three IAS dimensions. The symptom effects dimension, the disease conviction dimension, and the health habits dimension showed incremental validity over the anxiety-related constructs in predicting medical utilization. Implications for the IAS and future conceptualizations of hypochondriasis are discussed.

  3. Mechanical properties of granular materials: A variational approach to grain-scale simulations

    Energy Technology Data Exchange (ETDEWEB)

    Holtzman, R.; Silin, D.B.; Patzek, T.W.

    2009-01-15

    The mechanical properties of cohesionless granular materials are evaluated from grain-scale simulations. A three-dimensional pack of spherical grains is loaded by incremental displacements of its boundaries. The deformation is described as a sequence of equilibrium configurations. Each configuration is characterized by a minimum of the total potential energy. This minimum is computed using a modification of the conjugate gradient algorithm. Our simulations capture the nonlinear, path-dependent behavior of granular materials observed in experiments. Micromechanical analysis provides valuable insight into phenomena such as hysteresis, strain hardening and stress-induced anisotropy. Estimates of the effective bulk modulus, obtained with no adjustment of material parameters, are in agreement with published experimental data. The model is applied to evaluate the effects of hydrate dissociation in marine sediments. Weakening of the sediment is quantified as a reduction in the effective elastic moduli.
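
    The equilibrium-as-energy-minimum idea above can be illustrated with a deliberately tiny analogue: a one-dimensional chain of three grains compressed between two walls, with harmonic (rather than Hertzian) contact energies, relaxed by plain gradient descent instead of the paper's modified conjugate gradient. All parameters here are invented for illustration:

```python
def energy(xs, r=1.0, L=5.4, k=1.0):
    """Total contact energy of grain centers xs between walls at 0 and L."""
    e = 0.5 * k * max(0.0, r - xs[0]) ** 2            # left wall contact
    e += 0.5 * k * max(0.0, xs[-1] - (L - r)) ** 2    # right wall contact
    for a, b in zip(xs, xs[1:]):                      # grain-grain contacts
        e += 0.5 * k * max(0.0, 2 * r - (b - a)) ** 2
    return e

def minimize(xs, steps=20000, lr=0.05, h=1e-6):
    """Relax to an equilibrium configuration by finite-difference descent."""
    xs = list(xs)
    for _ in range(steps):
        grad = []
        for i in range(len(xs)):
            xp, xm = xs[:], xs[:]
            xp[i] += h
            xm[i] -= h
            grad.append((energy(xp) - energy(xm)) / (2 * h))
        xs = [x - lr * g for x, g in zip(xs, grad)]
    return xs

# Three unit-radius grains squeezed into a gap of length 5.4 (free length 6)
# relax to equal spacing, with the 0.6 of total overlap shared equally.
eq = minimize([1.0, 2.7, 4.4])
```

    With identical contact stiffnesses the equilibrium distributes the compression equally over the four contacts, which is a useful sanity check for any such solver.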

  4. Decommissioning of nuclear reprocessing plants French past experience and approach to future large scale operations

    International Nuclear Information System (INIS)

    Jean Jacques, M.; Maurel, J.J.; Maillet, J.

    1994-01-01

    Over the years, France has built up significant experience in dismantling nuclear fuel reprocessing facilities and various types of units representative of a modern reprocessing plant. However, only small or medium scale operations have been carried out so far. To prepare the future decommissioning of large size industrial facilities such as UP1 (Marcoule) and UP2 (La Hague), new technologies must be developed to maximize waste recycling and optimize direct operations by operators, taking the integrated dose and cost aspects into account. The decommissioning and dismantling methodology comprises: a preparation phase for inventory, choice and installation of tools and arrangement of working areas; a dismantling phase with decontamination; and a final contamination control phase. Detailed descriptions of the dismantling operations of the MA Pu finishing facility (La Hague) and of the RM2 radio-metallurgical laboratory (CEA-Fontenay-aux-Roses) are given as examples. (J.S.). 3 tabs

  5. National, holistic, watershed-scale approach to understand the sources, transport, and fate of agricultural chemicals

    Science.gov (United States)

    Capel, P.D.; McCarthy, K.A.; Barbash, J.E.

    2008-01-01

    This paper is an introduction to the following series of papers that report on in-depth investigations that have been conducted at five agricultural study areas across the United States in order to gain insights into how environmental processes and agricultural practices interact to determine the transport and fate of agricultural chemicals in the environment. These are the first study areas in an ongoing national study. The study areas were selected, based on the combination of cropping patterns and hydrologic setting, as representative of nationally important agricultural settings to form a basis for extrapolation to unstudied areas. The holistic, watershed-scale study design that involves multiple environmental compartments and that employs both field observations and simulation modeling is presented. This paper introduces the overall study design and presents an overview of the hydrology of the five study areas. Copyright ?? 2008 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.

  6. Modeling and Control of a Large Nuclear Reactor A Three-Time-Scale Approach

    CERN Document Server

    Shimjith, S R; Bandyopadhyay, B

    2013-01-01

    Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed would be of prohibitively large order, non-linear and of complex structure, not readily amenable to control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller order model in standard state space form, thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting multi-time-scale property,...

  7. Subjective evaluation with FAA criteria: A multidimensional scaling approach. [ground track control management

    Science.gov (United States)

    Kreifeldt, J. G.; Parkin, L.; Wempe, T. E.; Huff, E. F.

    1975-01-01

    Perceived orderliness in the ground tracks of five A/C during their simulated flights was studied. Dynamically developing ground tracks for five A/C from 21 separate runs were reproduced from computer storage and displayed on CRTs to professional pilots and controllers for their evaluations and preferences under several criteria. The ground tracks were developed in 20 seconds as opposed to the 5 minutes of simulated flight, using speedup techniques for display. Metric and nonmetric multidimensional scaling techniques are being used to analyze the subjective responses in an effort to: (1) determine the meaningfulness of basing decisions on such complex subjective criteria; (2) compare pilot/controller perceptual spaces; (3) determine the dimensionality of the subjects' perceptual spaces; and thereby (4) determine objective measures suitable for comparing alternative traffic management simulations.

  8. AC losses in superconductors: a multi-scale approach for the design of high current cables

    International Nuclear Information System (INIS)

    Escamez, Guillaume

    2016-01-01

    The work reported in this PhD thesis deals with AC losses in superconducting materials for large scale applications such as cables or magnets. Numerical models involving FEM or integral methods have been developed to solve the time-transient electromagnetic distributions of field and current densities, with the peculiarity of the superconducting constitutive E-J equation. Two main conductors have been investigated. First, REBCO superconductors for applications operating at 77 K are studied, together with a new conductor architecture (round wires) for 3 kA cables. Secondly, for very high current cables, 3-D simulations of MgB_2 wires are built and solved using FEM modeling. A further chapter introduces new developments used for the calculation of AC losses in DC cables with ripples. The thesis ends with the use of the developed numerical model on a practical example in the European BEST-PATHS project: a 10 kA MgB_2 demonstrator.

  9. Contamination effects on fixed-bias Langmuir probes

    Energy Technology Data Exchange (ETDEWEB)

    Steigies, C. T. [Institut fuer Experimentelle und Angewandte Physik, Christian-Albrechts-Universitaet zu Kiel, 24098 Kiel (Germany); Barjatya, A. [Department of Physical Sciences, Embry-Riddle Aeronautical University, Daytona Beach, Florida 32114 (United States)

    2012-11-15

    Langmuir probes are standard instruments for plasma density measurements on many sounding rockets. These probes can be operated in swept-bias as well as in fixed-bias modes. In swept-bias Langmuir probes, contamination effects are frequently visible as a hysteresis between consecutive up and down voltage ramps. This hysteresis, if not corrected, leads to poorly determined plasma densities and temperatures. With a properly chosen sweep function, the contamination parameters can be determined from the measurements and correct plasma parameters can then be determined. In this paper, we study contamination effects on fixed-bias Langmuir probes, where no hysteresis-type effect is seen in the data. Even though the contamination is not evident from the measurements, it does affect the plasma density fluctuation spectrum as measured by the fixed-bias Langmuir probe. We model the contamination as a simple resistor-capacitor circuit between the probe surface and the plasma. We find that measurements of small-scale plasma fluctuations (meter to sub-meter scale) along a rocket trajectory are not affected, but the measured amplitude of large-scale plasma density variations (tens of meters or larger) is attenuated. From the model calculations, we determine the amplitude and cross-over frequency of the contamination effect on fixed-bias probes for different contamination parameters. The model results also show that a fixed-bias probe operating in the ion saturation region is less affected by contamination than one operating in the electron saturation region.
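
    A resistor-capacitor layer between probe surface and plasma acts on the measured signal like a first-order high-pass filter, which is why slow, large-scale variations are attenuated while fast, small-scale fluctuations pass through. A minimal sketch of that frequency response (the R and C values are placeholders, not the paper's fitted contamination parameters):

```python
import math

def contamination_gain(f_hz, R=1e7, C=1e-7):
    """|H(f)| of a series RC contamination layer: a first-order high-pass.

    Gain -> 1 well above the cross-over frequency, -> 0 well below it.
    """
    w = 2 * math.pi * f_hz
    return w * R * C / math.sqrt(1 + (w * R * C) ** 2)

# Cross-over frequency for these placeholder parameters (~0.16 Hz):
fc = 1 / (2 * math.pi * 1e7 * 1e-7)
```

    At `fc` the gain is 1/sqrt(2); a 10 Hz (small-scale) fluctuation is passed almost unchanged, while a 0.01 Hz (large-scale) variation is strongly attenuated.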

  10. Approaches to modeling landscape-scale drought-induced forest mortality

    Science.gov (United States)

    Gustafson, Eric J.; Shinneman, Douglas

    2015-01-01

    Drought stress is an important cause of tree mortality in forests, and drought-induced disturbance events are projected to become more common in the future due to climate change. Landscape Disturbance and Succession Models (LDSM) are becoming widely used to project climate change impacts on forests, including potential interactions with natural and anthropogenic disturbances, and to explore the efficacy of alternative management actions to mitigate negative consequences of global changes on forests and ecosystem services. Recent studies incorporating drought-mortality effects into LDSMs have projected significant potential changes in forest composition and carbon storage, largely due to differential impacts of drought on tree species and interactions with other disturbance agents. In this chapter, we review how drought affects forest ecosystems and the different ways drought effects have been modeled (both spatially and aspatially) in the past. Building on those efforts, we describe several approaches to modeling drought effects in LDSMs, discuss advantages and shortcomings of each, and include two case studies for illustration. The first approach features the use of empirically derived relationships between measures of drought and the loss of tree biomass to drought-induced mortality. The second uses deterministic rules of species mortality for given drought events to project changes in species composition and forest distribution. A third approach is more mechanistic, simulating growth reductions and death caused by water stress. Because modeling of drought effects in LDSMs is still in its infancy, and because drought is expected to play an increasingly important role in forest health, further development of modeling drought-forest dynamics is urgently needed.

  11. Sustainable Competitive Advantage (SCA) Analysis of Furniture Manufacturers in Malaysia: Normalized Scaled Critical Factor Index (NSCFI) Approach

    Directory of Open Access Journals (Sweden)

    Tasmin Rosmaini

    2016-06-01

    The purpose of this paper is to investigate the Malaysian furniture industry via a sustainable competitive advantage (SCA) approach. In this case study, the sense-and-respond method and the Normalized Scaled Critical Factor Index (NSCFI) are used to specify the distribution of companies' resources across different criteria and to detect the attributes which are critical, based on the expectations and experience of companies' employees. Moreover, this study evaluates Malaysian furniture business strategy according to manufacturing strategy in terms of analyzer, prospector and defender. Finally, SCA risk levels are presented to show how well companies' resource allocations support their business strategy.

  12. Decoupling of parity- and SU(2)/sub R/-breaking scales: A new approach to left-right symmetric models

    International Nuclear Information System (INIS)

    Chang, D.; Mohapatra, R.N.; Parida, M.K.

    1984-01-01

    A new approach to left-right symmetric models is proposed, where the left-right discrete-symmetry- and SU(2)/sub R/-breaking scales are decoupled from each other. This changes the spectrum of physical Higgs bosons, which leads to different patterns for gauge hierarchies in SU(2)/sub L/xSU(2)/sub R/xSU(4)/sub C/ and SO(10) models. Most interesting are two SO(10) symmetry-breaking chains with an intermediate U(1)/sub R/ symmetry. These provide new motivation to search for ΔB = 2 processes and right-handed-current effects at low energies.

  13. EMAPS: An Efficient Multiscale Approach to Plasma Systems with Non-MHD Scale Effects

    Energy Technology Data Exchange (ETDEWEB)

    Omelchenko, Yuri A. [Trinum Research, Inc., San Diego, CA (United States)

    2016-08-08

    Global interactions of energetic ions with magnetoplasmas and neutral gases lie at the core of many space and laboratory plasma phenomena ranging from solar wind entry into and transport within planetary magnetospheres and exospheres to fast-ion driven instabilities in fusion devices to astrophysics-in-lab experiments. The ability of computational models to properly account for physical effects that underlie such interactions, namely ion kinetic, ion cyclotron, Hall, collisional and ionization processes is important for the success and planning of experimental research in plasma physics. Understanding the physics of energetic ions, in particular their nonlinear resonance interactions with Alfvén waves, is central to improving the heating performance of magnetically confined plasmas for future energy generation. Fluid models are not adequate for high-beta plasmas as they cannot fully capture ion kinetic and cyclotron physics (e.g., ion behavior in the presence of magnetic nulls, shock structures, plasma interpenetration, etc.). Recent results from global reconnection simulations show that even in a MHD-like regime there may be significant differences between kinetic and MHD simulations. Therefore, kinetic modeling becomes essential for meeting modern day challenges in plasma physics. The hybrid approximation is an intermediate approximation between the fluid and fully kinetic approximations. It eliminates light waves, removes the electron inertial temporal and spatial scales from the problem and enables full-orbit ion kinetics. As a result, hybrid codes have become effective tools for exploring ion-scale driven phenomena associated with ion beams, shocks, reconnection and turbulence that control the large-scale behavior of laboratory and space magnetoplasmas. A number of numerical issues, however, make three-dimensional (3D) large-scale hybrid simulations of inhomogeneous magnetized plasmas prohibitively expensive or even impossible. To resolve these difficulties

  14. Validation of Sustainable Development Practices Scale Using the Bayesian Approach to Item Response Theory

    Directory of Open Access Journals (Sweden)

    Martin Hernani Merino

    2014-12-01

    There has been growing recognition of the importance of creating performance measurement tools for the economic, social and environmental management of micro and small enterprises (MSEs). In this context, this study aims to validate an instrument to assess perceptions of sustainable development practices by MSEs by means of a Graded Response Model (GRM) with a Bayesian approach to Item Response Theory (IRT). The results, based on a sample of 506 university students in Peru, suggest that a valid measurement instrument was achieved. At the end of the paper, methodological and managerial contributions are presented.

  15. Fixed-target physics at LHCb

    CERN Document Server

    Maurice, Emilie Amandine

    2017-01-01

    The LHCb experiment has the unique possibility, among the LHC experiments, to be operated in fixed-target mode, using its internal gas target SMOG. The energy scale achievable at the LHC and the excellent detector capabilities for vertexing, tracking and particle identification allow a wealth of measurements of great interest for cosmic-ray and heavy-ion physics. We report the first measurements made in this configuration: the measurement of antiproton production in proton-helium collisions and the measurements of open and hidden charm production in proton-argon collisions at $\sqrt{s_\textrm{NN}} =$ 110 GeV.

  16. Hybrid discrete PSO and OPF approach for optimization of biomass fueled micro-scale energy system

    International Nuclear Information System (INIS)

    Gómez-González, M.; López, A.; Jurado, F.

    2013-01-01

    Highlights: ► Method to determine the optimal location and size of biomass power plants. ► The proposed approach is a hybrid of PSO algorithm and optimal power flow. ► Comparison among the proposed algorithm and other methods. ► Computational costs are enough lower than that required for exhaustive search. - Abstract: This paper addresses generation of electricity in the specific aspect of finding the best location and sizing of biomass fueled gas micro-turbine power plants, taking into account the variables involved in the problem, such as the local distribution of biomass resources, biomass transportation and extraction costs, operation and maintenance costs, power losses costs, network operation costs, and technical constraints. In this paper a hybrid method is introduced employing discrete particle swarm optimization and optimal power flow. The approach can be applied to search the best sites and capacities to connect biomass fueled gas micro-turbine power systems in a distribution network among a large number of potential combinations and considering the technical constraints of the network. A fair comparison among the proposed algorithm and other methods is performed.
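
    As a rough illustration of the discrete-PSO half of such a hybrid (the optimal-power-flow evaluation is replaced by a made-up net-cost function, and all site data below are invented), a binary particle swarm can search over which candidate sites receive a plant:

```python
import math
import random

random.seed(1)

# Hypothetical per-site figures: benefit of connecting a plant, and its cost.
benefit = [4.0, 1.0, 3.5, 0.5, 2.0]
cost = [2.0, 1.5, 1.0, 1.0, 1.8]

def objective(bits):
    # Toy stand-in for an OPF-based evaluation: minimize cost minus benefit.
    return sum((c - b) * x for b, c, x in zip(benefit, cost, bits))

def binary_pso(n_particles=12, iters=60, w=0.7, c1=1.5, c2=1.5):
    n = len(benefit)
    xs = [[random.randint(0, 1) for _ in range(n)] for _ in range(n_particles)]
    vs = [[0.0] * n for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    gbest = min(pbest, key=objective)[:]
    for _ in range(iters):
        for p in range(n_particles):
            for d in range(n):
                vs[p][d] = (w * vs[p][d]
                            + c1 * random.random() * (pbest[p][d] - xs[p][d])
                            + c2 * random.random() * (gbest[d] - xs[p][d]))
                # Sigmoid maps the velocity to a probability of setting the bit.
                xs[p][d] = 1 if random.random() < 1 / (1 + math.exp(-vs[p][d])) else 0
            if objective(xs[p]) < objective(pbest[p]):
                pbest[p] = xs[p][:]
        gbest = min(pbest, key=objective)[:]
    return gbest, objective(gbest)

best, score = binary_pso()
```

    The sigmoid bit-flip rule is the standard binary-PSO trick for discrete siting decisions; in the paper's setting each candidate solution would instead be scored by an optimal power flow under the network's technical constraints.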

  17. Multi-Contextual Segregation and Environmental Justice Research: Toward Fine-Scale Spatiotemporal Approaches

    Directory of Open Access Journals (Sweden)

    Yoo Min Park

    2017-10-01

    Many environmental justice studies have sought to examine the effect of residential segregation on unequal exposure to environmental factors among different social groups, but little is known about how segregation in non-residential contexts affects such disparity. Based on a review of the relevant literature, this paper discusses the limitations of traditional residence-based approaches in examining the association between socioeconomic or racial/ethnic segregation and unequal environmental exposure in environmental justice research. It emphasizes that future research needs to go beyond residential segregation by considering the full spectrum of segregation experienced by people in various geographic and temporal contexts of everyday life. Along with this comprehensive understanding of segregation, the paper also highlights the importance of assessing environmental exposure at a high spatiotemporal resolution in environmental justice research. The successful integration of a comprehensive concept of segregation, high-resolution data and fine-grained spatiotemporal approaches to assessing segregation and environmental exposure would provide more nuanced and robust findings on the associations between segregation and disparities in environmental exposure and their health impacts. Moreover, it would also contribute to significantly expanding the scope of environmental justice research.

  18. Large Scale Proteomic Data and Network-Based Systems Biology Approaches to Explore the Plant World.

    Science.gov (United States)

    Di Silvestre, Dario; Bergamaschi, Andrea; Bellini, Edoardo; Mauri, PierLuigi

    2018-06-03

    The investigation of plant organisms by means of data-derived systems biology approaches based on network modeling is mainly characterized by genomic data, while the potential of proteomics is largely unexplored. This delay is mainly caused by the paucity of plant genomic/proteomic sequences and annotations, which are fundamental for mass-spectrometry (MS) data interpretation. However, Next Generation Sequencing (NGS) techniques are contributing to filling this gap, and an increasing number of studies are focusing on plant proteome profiling and protein-protein interaction (PPI) identification. Interesting results were obtained by evaluating the topology of PPI networks in the context of organ-associated biological processes as well as plant-pathogen relationships. These examples foreshadow the benefits that these approaches may provide to plant research. Thus, in addition to providing an overview of the main omics technologies recently used on plant organisms, we will focus on studies that rely on the concepts of module, hub and shortest path, and how they can contribute to the plant discovery process. In this scenario, we will also consider gene co-expression networks, and some examples of integration with metabolomic data and genome-wide association studies (GWAS) to select candidate genes will be mentioned.

  19. Multi-Contextual Segregation and Environmental Justice Research: Toward Fine-Scale Spatiotemporal Approaches.

    Science.gov (United States)

    Park, Yoo Min; Kwan, Mei-Po

    2017-10-10

    Many environmental justice studies have sought to examine the effect of residential segregation on unequal exposure to environmental factors among different social groups, but little is known about how segregation in non-residential contexts affects such disparity. Based on a review of the relevant literature, this paper discusses the limitations of traditional residence-based approaches in examining the association between socioeconomic or racial/ethnic segregation and unequal environmental exposure in environmental justice research. It emphasizes that future research needs to go beyond residential segregation by considering the full spectrum of segregation experienced by people in various geographic and temporal contexts of everyday life. Along with this comprehensive understanding of segregation, the paper also highlights the importance of assessing environmental exposure at a high spatiotemporal resolution in environmental justice research. The successful integration of a comprehensive concept of segregation, high-resolution data and fine-grained spatiotemporal approaches to assessing segregation and environmental exposure would provide more nuanced and robust findings on the associations between segregation and disparities in environmental exposure and their health impacts. Moreover, it would also contribute to significantly expanding the scope of environmental justice research.

  20. Probabilistic Approach to Enable Extreme-Scale Simulations under Uncertainty and System Faults. Final Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Knio, Omar [Duke Univ., Durham, NC (United States). Dept. of Mechanical Engineering and Materials Science

    2017-05-05

    The current project develops a novel approach that uses a probabilistic description to capture the current state of knowledge about the computational solution. To effectively spread the computational effort over multiple nodes, the global computational domain is split into many subdomains. Computational uncertainty in the solution translates into uncertain boundary conditions for the equation system to be solved on those subdomains, and many independent, concurrent subdomain simulations are used to account for this boundary condition uncertainty. By relying on the fact that solutions on neighboring subdomains must agree with each other, a more accurate estimate for the global solution can be achieved. Statistical approaches in this update process make it possible to account for the effect of system faults in the probabilistic description of the computational solution, and the associated uncertainty is reduced through successive iterations. By combining all of these elements, the probabilistic reformulation allows splitting the computational work over very many independent tasks for good scalability, while being robust to system faults.
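
    The reconcile-the-subdomain-boundaries idea has a classical deterministic ancestor, the alternating Schwarz method, which this hypothetical one-dimensional sketch illustrates: two overlapping subdomains of u'' = 0 on [0, 1] with u(0) = 0, u(1) = 1 repeatedly exchange boundary values until their solutions agree on the exact answer u(x) = x. All numbers are toy choices, not from the report:

```python
def schwarz(iters=50):
    """Alternating Schwarz for u'' = 0 on [0, 1], u(0) = 0, u(1) = 1.

    Subdomains A = [0, 0.6] and B = [0.4, 1.0] overlap on [0.4, 0.6].
    For u'' = 0 each subdomain solve is just linear interpolation of its
    boundary conditions, so one iteration is two closed-form updates.
    """
    u_at_06 = 0.0  # guess for u(0.6), used as A's right boundary condition
    u_at_04 = 0.0  # guess for u(0.4), used as B's left boundary condition
    for _ in range(iters):
        # Solve on A with u(0)=0, u(0.6)=u_at_06; read off the value at 0.4.
        u_at_04 = (0.4 / 0.6) * u_at_06
        # Solve on B with u(0.4)=u_at_04, u(1)=1; read off the value at 0.6.
        u_at_06 = u_at_04 + (1 - u_at_04) * (0.6 - 0.4) / (1.0 - 0.4)
    return u_at_04, u_at_06
```

    Each sweep contracts the mismatch by a fixed factor, so the exchanged boundary values converge to 0.4 and 0.6; the project's probabilistic version replaces these deterministic exchanges with updates of distributions over the boundary data.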

  1. Understanding Greenland ice sheet hydrology using an integrated multi-scale approach

    International Nuclear Information System (INIS)

    Rennermalm, A K; Moustafa, S E; Mioduszewski, J; Robinson, D A; Chu, V W; Smith, L C; Forster, R R; Hagedorn, B; Harper, J T; Mote, T L; Shuman, C A; Tedesco, M

    2013-01-01

    Improved understanding of Greenland ice sheet hydrology is critically important for assessing its impact on current and future ice sheet dynamics and global sea level rise. This has motivated the collection and integration of in situ observations, model development, and remote sensing efforts to quantify meltwater production, as well as its phase changes, transport, and export. Particularly urgent is a better understanding of albedo feedbacks leading to enhanced surface melt, potential positive feedbacks between ice sheet hydrology and dynamics, and meltwater retention in firn. These processes are not isolated, but must be understood as part of a continuum of processes within an integrated system. This letter describes a systems approach to the study of Greenland ice sheet hydrology, emphasizing component interconnections and feedbacks, and highlighting research and observational needs. (letter)

  2. Multicontroller: an object programming approach to introduce advanced control algorithms for the GCS large scale project

    CERN Document Server

    Cabaret, S; Coppier, H; Rachid, A; Barillère, R; CERN. Geneva. IT Department

    2007-01-01

    The GCS (Gas Control System) project team at CERN uses a Model Driven Approach with a Framework - UNICOS (UNified Industrial COntrol System) - based on PLC (Programmable Logic Controller) and SCADA (Supervisory Control And Data Acquisition) technologies. The first UNICOS versions were able to provide a PID (Proportional Integrative Derivative) controller, whereas the Gas Systems required more advanced control strategies. The MultiController is a new UNICOS object which provides the following advanced control algorithms: Smith Predictor, PFC (Predictive Function Control), RST* and GPC (Global Predictive Control). Its design is based on a monolithic entity with a global structure definition which is able to capture the desired set of parameters of any specific control algorithm supported by the object. The SCADA system, PVSS, supervises the MultiController operation. The PVSS interface provides users with a supervision faceplate; in particular it links any MultiController with recipes: the GCS experts are ab...

  3. Critical dynamics a field theory approach to equilibrium and non-equilibrium scaling behavior

    CERN Document Server

    Täuber, Uwe C

    2014-01-01

    Introducing a unified framework for describing and understanding complex interacting systems common in physics, chemistry, biology, ecology, and the social sciences, this comprehensive overview of dynamic critical phenomena covers the description of systems at thermal equilibrium, quantum systems, and non-equilibrium systems. Powerful mathematical techniques for dealing with complex dynamic systems are carefully introduced, including field-theoretic tools and the perturbative dynamical renormalization group approach, rapidly building up a mathematical toolbox of relevant skills. Heuristic and qualitative arguments outlining the essential theory behind each type of system are introduced at the start of each chapter, alongside real-world numerical and experimental data, firmly linking new mathematical techniques to their practical applications. Each chapter is supported by carefully tailored problems for solution, and comprehensive suggestions for further reading, making this an excellent introduction to critic...

  4. Load-based approaches for modelling visual clarity in streams at regional scale.

    Science.gov (United States)

    Elliott, A H; Davies-Colley, R J; Parshotam, A; Ballantine, D

    2013-01-01

    Reduction of visual clarity in streams by diffuse sources of fine sediment is a cause of water quality impairment in New Zealand and internationally. In this paper we introduce the concept of a load of optical cross section (LOCS), which can be used for load-based management of light-attenuating substances and for water quality models that are based on mass accounting. In this approach, the beam attenuation coefficient (units of m(-1)) is estimated from the inverse of the visual clarity (units of m) measured with a black disc. This beam attenuation coefficient can also be considered as an optical cross section (OCS) per volume of water, analogous to a concentration. The instantaneous 'flux' of cross section is obtained from the attenuation coefficient multiplied by the water discharge, and this can be accumulated over time to give an accumulated 'load' of cross section (LOCS). Moreover, OCS is a conservative quantity, in the sense that the OCS of two combined water volumes is the sum of the OCS of the individual water volumes (barring effects such as coagulation, settling, or sorption). The LOCS can be calculated for a water quality station using rating curve methods applied to measured time series of visual clarity and flow. This approach was applied to the sites in New Zealand's National Rivers Water Quality Network (NRWQN). Although the attenuation coefficient follows roughly a power relation with flow at some sites, more flexible loess rating curves are required at other sites. The hybrid mechanistic-statistical catchment model SPARROW (SPAtially Referenced Regressions On Watershed attributes), which is based on a mass balance for mean annual load, was then applied to the NRWQN dataset. Preliminary results from this model are presented, highlighting the importance of factors related to erosion, such as rainfall, slope, hardness of catchment rock types, and the influence of pastoral development on the load of optical cross section.
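    The LOCS accounting described above reduces to a few lines of arithmetic. A minimal sketch, with illustrative function names and sample values that are not the paper's data or code:

    ```python
    # Sketch of load-of-optical-cross-section (LOCS) accounting:
    # attenuation coefficient (m^-1) estimated as the inverse of black-disc
    # visual clarity (m); instantaneous flux = attenuation * discharge.

    def attenuation_from_clarity(clarity_m):
        """Beam attenuation coefficient (m^-1) from visual clarity (m)."""
        if clarity_m <= 0:
            raise ValueError("visual clarity must be positive")
        return 1.0 / clarity_m

    def locs(clarities_m, discharges_m3s, dt_s):
        """Accumulate a 'load' of optical cross section over a time series
        sampled at a fixed interval of dt_s seconds.

        Each instantaneous flux (m^-1 * m^3/s -> m^2/s) is integrated
        over its interval, giving a load in m^2.
        """
        total = 0.0
        for clarity, q in zip(clarities_m, discharges_m3s):
            total += attenuation_from_clarity(clarity) * q * dt_s
        return total

    # Example: two hourly samples of clarity (m) and discharge (m^3/s).
    load = locs([0.5, 1.0], [10.0, 20.0], dt_s=3600.0)
    ```

    Here the two samples give fluxes of 20 and 20 m²/s; sustained over an hour each, they accumulate to a load of 1.44 × 10⁵ m².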

  5. Influence of plant productivity over variability of soil respiration: a multi-scale approach

    Science.gov (United States)

    Curiel Yuste, J.

    2009-04-01

    To investigate the role of plant photosynthetic activity in the variations of soil respiration (SR), SR data obtained from manual sampling and from automatic soil respiration chambers placed at eddy flux tower sites were used. Plant photosynthetic activity was represented as Gross Primary Production (GPP), calculated from the half-hourly continuous measurements of Net Ecosystem Exchange (NEE). The role of plant photosynthetic activity in the variation of SR was investigated at different time scales: data averaged hourly, daily and weekly were used to study the photosynthetic effect on diel variations of SR (hourly data), 15-day variations (daily averages), monthly variations (daily and weekly averages) and seasonal variations (weekly data). Our results confirm the important role of plant photosynthetic activity in the variations of SR at each of these time scales. The effect of photosynthetic activity on SR was strong at the hourly time scale (diel variations of SR). At half of the studied ecosystems, GPP was the best single predictor of diel variations of SR, although at most of the studied sites the combination of soil temperature and GPP was the best predictor. The effect of aboveground productivity on diel variations of SR lagged by 5 to 15 hours, depending on the ecosystem. At daily to monthly time scales, variations of SR were in general better explained by the combination of temperature and moisture variations. However, 'jumps' in average weekly SR during the growing season yielded anomalously high values of Q10, in some cases above 1000, which probably reflects synoptic changes in photosynthate translocation from plant activity. Finally, although seasonal changes of SR were in general very well explained by temperature and soil moisture, the seasonality of SR was better correlated with the seasonality of GPP than with that of soil temperature and/or soil moisture. Therefore the magnitude of the seasonal variation in SR was in
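    The Q10 coefficient mentioned above has a standard closed form, Q10 = (R2/R1)^(10/(T2-T1)). A short sketch (illustrative values, not the study's data) shows how a respiration 'jump' over a small temperature change produces extreme Q10 values:

    ```python
    # Minimal Q10 calculation as commonly used for soil respiration (SR).
    # Values below are invented for illustration.

    def q10(r1, r2, t1, t2):
        """Temperature sensitivity coefficient from respiration rates r1, r2
        measured at temperatures t1, t2 (degrees C, t1 != t2)."""
        if t1 == t2:
            raise ValueError("temperatures must differ")
        return (r2 / r1) ** (10.0 / (t2 - t1))

    # A doubling of respiration over a 10 C rise gives the textbook Q10 = 2;
    # the same doubling over only 1 C gives 2**10 = 1024, illustrating how
    # photosynthesis-driven jumps can yield apparent Q10 values above 1000.
    ```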

  6. Scaling-up permafrost thermal measurements in western Alaska using an ecotype approach

    Directory of Open Access Journals (Sweden)

    W. L. Cable

    2016-10-01

    Permafrost temperatures are increasing in Alaska due to climate change and in some cases permafrost is thawing and degrading. In areas where degradation has already occurred the effects can be dramatic, resulting in changing ecosystems, carbon release, and damage to infrastructure. However, in many areas we lack baseline data, such as subsurface temperatures, needed to assess future changes and potential risk areas. Besides climate, the physical properties of the vegetation cover and subsurface material have a major influence on the thermal state of permafrost. These properties are often directly related to the type of ecosystem overlaying permafrost. In this paper we demonstrate that classifying the landscape into general ecotypes is an effective way to scale up permafrost thermal data collected from field monitoring sites. Additionally, we find that within some ecotypes the absence of a moss layer is indicative of the absence of near-surface permafrost. As a proof of concept, we used the ground temperature data collected from the field sites to recode an ecotype land cover map into a map of mean annual ground temperature ranges at 1 m depth based on analysis and clustering of observed thermal regimes. The map should be useful for decision making with respect to land use and understanding how the landscape might change under future climate scenarios.

  7. Construct validity of the Beck Hopelessness Scale (BHS) among university students: A multitrait-multimethod approach.

    Science.gov (United States)

    Boduszek, Daniel; Dhingra, Katie

    2016-10-01

    There is considerable debate about the underlying factor structure of the Beck Hopelessness Scale (BHS) in the literature. An established view is that it reflects a unitary or bidimensional construct in nonclinical samples. There are, however, reasons to reconsider this conceptualization. Based on previous factor analytic findings from both clinical and nonclinical studies, the aim of the present study was to compare 16 competing models of the BHS in a large university student sample (N = 1, 733). Sixteen distinct factor models were specified and tested using conventional confirmatory factor analytic techniques, along with confirmatory bifactor modeling. A 3-factor solution with 2 method effects (i.e., a multitrait-multimethod model) provided the best fit to the data. The reliability of this conceptualization was supported by McDonald's coefficient omega and the differential relationships exhibited between the 3 hopelessness factors ("feelings about the future," "loss of motivation," and "future expectations") and measures of goal disengagement, brooding rumination, suicide ideation, and suicide attempt history. The results provide statistical support for a 3-trait and 2-method factor model, and hence the 3 dimensions of hopelessness theorized by Beck. The theoretical and methodological implications of these findings are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  8. A structural approach in the study of bones: fossil and burnt bones at nanosize scale

    Science.gov (United States)

    Piga, Giampaolo; Baró, Maria Dolors; Escobal, Irati Golvano; Gonçalves, David; Makhoul, Calil; Amarante, Ana; Malgosa, Assumpció; Enzo, Stefano; Garroni, Sebastiano

    2016-12-01

    We review the different factors that significantly affect the mineral structure and composition of bones. In particular, we assess that the micro- and nanostructural and chemical properties of skeletal bones change drastically during burning; the micro- and nanostructural changes attending those phases manifest themselves, amongst others, in observable alterations to the bones' colour, morphology, microstructure, mechanical strength and crystallinity. Intense changes involving the structure and chemical composition of bones also occur during the fossilization process. The bioapatite material is contaminated by a heavy fluorination process which, on long time scales, appreciably reduces the volume of the original unit cell, mainly along the a-axis of the hexagonal P63/m space group. Moreover, the bioapatite suffers, to varying degrees, from phase contamination by the nearby environment, to the point that a single-phase fluorapatite is rarely found in the fossil bones examined here. TEM images supply precise and localized information on apatite crystal shape and dimension, and on the different processes that occur during heating or fossilization of ancient bone, complementary to that given by X-ray diffraction and Attenuated Total Reflection Infrared spectroscopy. We present a synthesis of XRD, ATR-IR and TEM results on the nanostructure of various modern, burned and palaeontological bones.

  9. Catalonia's energy metabolism: Using the MuSIASEM approach at different scales

    International Nuclear Information System (INIS)

    Ramos-Martin, Jesus; Canellas-Bolta, Silvia; Giampietro, Mario; Gamboa, Gonzalo

    2009-01-01

    This paper applies the so-called Multi-Scale Integrated Analysis of Societal and Ecosystem Metabolism (MuSIASEM), based on Georgescu-Roegen's fund-flow model, to the Spanish region of Catalonia. It arrives at the conclusion that, within the context of the end of cheap oil, the current development model of the Catalan economy, based on the growth of low-productivity sectors such as services and construction, must be changed. The change is needed not only because of the increasing scarcity of affordable energy and the increasing environmental impact of present development, but also because of the aging population. Moreover, the situation experienced by Catalonia is similar to that of other European countries and many other developed countries. This implies that we can expect a wave of major structural changes in the economies of developed countries worldwide. To make things more challenging, according to current trends the energy intensity and exosomatic energy metabolism of Catalonia will keep increasing in the near future. To avoid a reduction in the standard of living of Catalans due to a reduction in the available energy, it is important that the Government of Catalonia implement major adjustments and conservation efforts in both the household and paid-work sectors.

  10. A Parametric Genetic Algorithm Approach to Assess Complementary Options of Large Scale Wind-solar Coupling

    Institute of Scientific and Technical Information of China (English)

    Tim Mareda; Ludovic Gaudard; Franco Romerio

    2017-01-01

    The transitional path towards a highly renewable power system based on wind and solar energy sources is investigated considering their intermittent and spatially distributed characteristics. Using an extensive weather-driven simulation of hourly power mismatches between generation and load, we explore the interplay between geographical resource complementarity and energy storage strategies. Solar and wind resources are considered at variable spatial scales across Europe and related to the Swiss load curve, which serves as a typical demand-side reference. The optimal spatial distribution of renewable units is further assessed through a parameterized optimization method based on a genetic algorithm. It allows us to explore systematically the effective potential of combined integration strategies depending on the sizing of the system, with a focus on how overall performance is affected by the definition of network boundaries. Upper bounds on integration schemes are provided considering both renewable penetration and the needed reserve power capacity. The quantitative trade-off between grid extension, storage and the optimal wind-solar mix is highlighted. This paper also brings insights on how the optimal geographical distribution of renewable units evolves as a function of renewable penetration and grid extent.
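    As a rough illustration of the parametric genetic-algorithm idea, the sketch below evolves a single wind-share parameter to minimize the squared hourly mismatch against a toy load curve. All profiles, parameters and GA settings are invented for illustration and are far simpler than the study's setup:

    ```python
    # Toy genetic algorithm: find the wind/solar mix minimizing hourly
    # generation-load mismatch. Profiles are invented six-hour samples.

    import random

    LOAD  = [1.0, 1.2, 1.5, 1.3, 1.0, 0.8]   # hourly demand
    WIND  = [1.4, 1.1, 0.6, 0.5, 0.9, 1.5]   # wind generation profile
    SOLAR = [0.0, 0.6, 1.6, 1.7, 0.7, 0.0]   # solar generation profile

    def mismatch(share_wind):
        """Sum of squared hourly mismatches for a given wind share in [0, 1]."""
        return sum((share_wind * w + (1.0 - share_wind) * s - l) ** 2
                   for w, s, l in zip(WIND, SOLAR, LOAD))

    def evolve(pop_size=20, generations=40, seed=1):
        """Evolve a population of candidate wind shares by truncation
        selection, averaging crossover and Gaussian mutation."""
        rng = random.Random(seed)
        pop = [rng.random() for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=mismatch)
            parents = pop[:pop_size // 2]          # keep the fitter half
            children = []
            while len(children) < pop_size - len(parents):
                a, b = rng.sample(parents, 2)
                child = 0.5 * (a + b) + rng.gauss(0.0, 0.05)
                children.append(min(1.0, max(0.0, child)))
            pop = parents + children
        return min(pop, key=mismatch)

    best = evolve()
    ```

    For this toy data the mismatch is quadratic in the share, so the GA converges near the analytic optimum; the same machinery generalizes to many spatially distributed units where no closed form exists.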

  11. Transport of particles in liquid foams: a multi-scale approach

    International Nuclear Information System (INIS)

    Louvet, N.

    2009-11-01

    Foam is used for the decontamination of radioactive tanks since foam is a system that has a large surface for a low amount of liquid and as a consequence requires less water to be decontaminated. We study experimentally different particle transport configurations in fluid micro-channels network (Plateau borders) of aqueous foam. At first, foam permeability is measured at the scale of a single channel and of the whole foam network for 2 soap solutions known for their significant different interface mobility. Experimental data are well described by a model that takes into account the real geometry of the foam and by considering a constant value of the Boussinesq number of each soap solutions. Secondly, the velocity of one particle convected in a single foam channel is measured for different particle/channel aspect ratio. For small aspect ratio, a counterflow that is taking place at the channel's corners slows down the particle. A recirculation model in the channel foam films is developed to describe this effect. To do this, the Gibbs elasticity is introduced. Then, the threshold between trapped and released of one particle in liquid foam are carried out. This threshold is deduced from hydrodynamic and capillary forces equilibrium. Finally, the case of a clog foam node is addressed. (author)

  12. Efficient stochastic approaches for sensitivity studies of an Eulerian large-scale air pollution model

    Science.gov (United States)

    Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.

    2017-10-01

    Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs with respect to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been done, with a comparison against Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reaction rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values have been generated by the Unified Danish Eulerian Model. The sensitivity study has been done for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration, and especially for computing sensitivity indices that are small in value. This is crucial, since even small indices may need to be estimated in order to achieve a more accurate distribution of the inputs' influence and a more reliable interpretation of the mathematical model results.
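    A first-order Sobol index can be estimated by a pick-freeze Monte Carlo scheme. The sketch below uses plain pseudorandom sampling and a stand-in additive model; the study itself uses Sobol sequences and the Unified Danish Eulerian Model, neither of which is reproduced here:

    ```python
    # Pick-freeze Monte Carlo estimator of first-order Sobol indices:
    # S_i = E[ f(B) * ( f(A with column i from B) - f(A) ) ] / Var[f(A)].

    import random

    def model(x):
        # Stand-in additive test model on [0,1]^2; its analytic first-order
        # indices are 16/17 for x[0] and 1/17 for x[1].
        return 4.0 * x[0] + x[1]

    def sobol_first_order(f, dim, n=20000, seed=0):
        """Estimate the first-order Sobol index of each input of f."""
        rng = random.Random(seed)
        A = [[rng.random() for _ in range(dim)] for _ in range(n)]
        B = [[rng.random() for _ in range(dim)] for _ in range(n)]
        yA = [f(x) for x in A]
        yB = [f(x) for x in B]
        mean = sum(yA) / n
        var = sum((y - mean) ** 2 for y in yA) / n
        S = []
        for i in range(dim):
            # replace coordinate i of each A-sample by the B-sample's value
            yABi = [f(A[k][:i] + [B[k][i]] + A[k][i + 1:]) for k in range(n)]
            S.append(sum(yB[k] * (yABi[k] - yA[k]) for k in range(n)) / (n * var))
        return S

    first_order = sobol_first_order(model, dim=2)
    ```

    For the additive test model the estimates should land close to the analytic values 16/17 and 1/17, with the dominant input clearly separated, which is the situation the abstract stresses: resolving indices that are small in value.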

  13. A multi-signature approach to low-scale sterile neutrino phenomenology

    CERN Document Server

    Ross-Lonergan, Mark

    2017-01-01

    Since the discovery of non-zero neutrino masses, through the observation of neutrino flavour oscillations, we have had a plethora of successful experiments which have made increasingly precise measurements of the mixing angles and mass differences that drive the phenomena. In this thesis we highlight the fact that there is still significant room for new physics, however, once one removes the assumption of unitarity of the 3x3 neutrino mixing matrix, an assumption inherent in the 3ν paradigm. We refit all global data to show just how much non-unitarity is currently allowed. The canonical way that such non-unitarity is introduced to the 3x3 neutrino mixing matrix is by the addition of extra neutral fermions, singlets under the Standard Model gauge group. These “Sterile Neutrinos” have a wide range of theoretical and phenomenological implications. Alongside the sensitivity that non-unitarity measurements have to sterile neutrinos, in this thesis we will study in detail two additional signatures of low-scale ...

  14. Scale-dependent mechanisms of habitat selection for a migratory passerine: an experimental approach

    Science.gov (United States)

    Donovan, Therese M.; Cornell, Kerri L.

    2010-01-01

    Habitat selection theory predicts that individuals choose breeding habitats that maximize fitness returns on the basis of indirect environmental cues at multiple spatial scales. We performed a 3-year field experiment to evaluate five alternative hypotheses regarding whether individuals choose breeding territories in heterogeneous landscapes on the basis of (1) shrub cover within a site, (2) forest land-cover pattern surrounding a site, (3) conspecific song cues during prebreeding settlement periods, (4) a combination of these factors, and (5) interactions among these factors. We tested hypotheses with playbacks of conspecific song across a gradient of landscape pattern and shrub density and evaluated changes in territory occupancy patterns in a forest-nesting passerine, the Black-throated Blue Warbler (Dendroica caerulescens). Our results support the hypothesis that vegetation structure plays a primary role during presettlement periods in determining occupancy patterns in this species. Further, both occupancy rates and territory turnover were affected by an interaction between local shrub density and amount of forest in the surrounding landscape, but not by interactions between habitat cues and social cues. Although previous studies of this species in unfragmented landscapes found that social postbreeding song cues played a key role in determining territory settlement, our prebreeding playbacks were not associated with territory occupancy or turnover. Our results suggest that in heterogeneous landscapes during spring settlement, vegetation structure may be a more reliable signal of reproductive performance than the physical location of other individuals.

  15. A scale space approach for unsupervised feature selection in mass spectra classification for ovarian cancer detection.

    Science.gov (United States)

    Ceccarelli, Michele; d'Acierno, Antonio; Facchiano, Angelo

    2009-10-15

    Mass spectrometry spectra, widely used in proteomics studies as a screening tool for protein profiling and to detect discriminatory signals, are high-dimensional data. A large number of local maxima (a.k.a. peaks) have to be analyzed as part of computational pipelines aimed at the realization of efficient predictive and screening protocols. With data of this dimensionality and sample size, the risk of over-fitting and selection bias is pervasive. Therefore the development of bioinformatics methods based on unsupervised feature extraction can lead to general tools applicable to several fields of predictive proteomics. We propose a method for feature selection and extraction grounded in the theory of multi-scale spaces for high-resolution spectra derived from analysis of serum, and then use support vector machines for classification. In particular, we use a database containing 216 spectra, divided into 115 cancer and 91 control samples. The overall accuracy, averaged over a large cross-validation study, is 98.18%. The area under the ROC curve of the best selected model is 0.9962. We improve on previously known results on this problem with the same data, with the advantage that the proposed method has an unsupervised feature selection phase. All the developed code, as MATLAB scripts, can be downloaded from http://medeaserver.isa.cnr.it/dacierno/spectracode.htm.
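    The core scale-space idea, keeping only peaks that persist as the spectrum is smoothed at coarser and coarser scales, can be sketched generically. This is an illustration of the concept with invented parameters, not the authors' MATLAB pipeline (which additionally feeds the selected features to an SVM):

    ```python
    # Scale-space peak selection: a raw local maximum is kept as a feature
    # only if a nearby maximum survives at every smoothing scale.

    def smooth(signal, width):
        """Moving-average smoothing with window 2*width+1, edge-clamped."""
        n = len(signal)
        out = []
        for i in range(n):
            lo, hi = max(0, i - width), min(n, i + width + 1)
            out.append(sum(signal[lo:hi]) / (hi - lo))
        return out

    def local_maxima(signal):
        """Indices of strict interior local maxima."""
        return {i for i in range(1, len(signal) - 1)
                if signal[i] > signal[i - 1] and signal[i] > signal[i + 1]}

    def persistent_peaks(signal, widths=(1, 2, 3), tol=1):
        """Keep raw peaks that have a peak within tol positions at every
        smoothing scale in widths."""
        peaks = local_maxima(signal)
        for w in widths:
            smoothed_peaks = local_maxima(smooth(signal, w))
            peaks = {p for p in peaks
                     if any(abs(p - q) <= tol for q in smoothed_peaks)}
        return sorted(peaks)

    # A broad peak flanked by two narrow noise spikes.
    features = persistent_peaks([0, 1, 0, 5, 6, 7, 6, 5, 0, 1, 0])
    ```

    On this sample signal only the broad peak survives all three scales, while the two isolated spikes are discarded, which is the unsupervised selection behaviour the abstract describes.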

  16. A hybrid classical-quantum approach for ultra-scaled confined nanostructures : modeling and simulation*

    Directory of Open Access Journals (Sweden)

    Pietra Paola

    2012-04-01

    We propose a hybrid classical-quantum model to study the motion of electrons in ultra-scaled confined nanostructures. The transport of charged particles, considered as one-dimensional, is described by a quantum effective mass model in the active zone coupled directly to a drift-diffusion problem in the rest of the device. We explain how this hybrid model takes into account the peculiarities due to the strong confinement and we present numerical simulations for a simplified carbon nanotube.

  17. Fixed point structure of quenched, planar quantum electrodynamics

    International Nuclear Information System (INIS)

    Love, S.T.

    1986-07-01

    Gauge theories exhibiting a hierarchy of fermion mass scales may contain a pseudo-Nambu-Goldstone boson of spontaneously broken scale invariance. The relation between scale and chiral symmetry breaking is studied analytically in quenched, planar quantum electrodynamics in four dimensions. The model possesses a novel nonperturbative ultraviolet fixed point governing its strong-coupling phase which requires the mixing of four-fermion operators. 12 refs

  18. A hybrid approach to estimating national scale spatiotemporal variability of PM2.5 in the contiguous United States.