WorldWideScience

Sample records for exploiting flow-based approaches

  1. Exploiting similarity in turbulent shear flows for turbulence modeling

    Science.gov (United States)

    Robinson, David F.; Harris, Julius E.; Hassan, H. A.

    1992-12-01

    It is well known that current k-epsilon models cannot predict the flow over a flat plate and its wake. In an effort to address this issue and other issues associated with turbulence closure, a new approach for turbulence modeling is proposed which exploits similarities in the flow field. Thus, if we consider the flow over a flat plate and its wake, then in addition to taking advantage of the log-law region, we can exploit the fact that the flow becomes self-similar in the far wake. This latter behavior makes it possible to cast the governing equations as a set of total differential equations. Solutions of this set, compared with measured shear stress and velocity profiles, yield the desired set of model constants. Such a set is, in general, different from other sets of model constants. The rationale for such an approach is that if we can correctly model the flow over a flat plate and its far wake, then we have a better chance of predicting the behavior in between. It is to be noted that the approach does not appeal, in any way, to the decay of homogeneous turbulence. This is because the asymptotic behavior of the flow under consideration is not representative of the decay of homogeneous turbulence.
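The calibration step described above, integrating the self-similar far-wake equations and matching measured profiles to pin down model constants, can be sketched in miniature. The Gaussian deficit profile, the synthetic "measured" data and the single constant alpha below are illustrative assumptions, not the authors' actual closure equations.

```python
import math

# Hypothetical self-similar far-wake velocity-deficit profile
# f(eta) = exp(-alpha * eta**2); alpha plays the role of a model constant.
def wake_profile(eta, alpha):
    return math.exp(-alpha * eta ** 2)

# Synthetic "measured" deficit data, generated with alpha = ln(2) so the
# deficit halves at eta = 1 (a conventional half-width definition).
etas = [0.25 * i for i in range(9)]                    # eta = 0 .. 2
measured = [math.exp(-math.log(2) * e ** 2) for e in etas]

def sse(alpha):
    """Misfit between the model profile and the measured profile."""
    return sum((wake_profile(e, alpha) - m) ** 2
               for e, m in zip(etas, measured))

# Calibrate the constant by minimising the misfit over a grid.
best_alpha = min((0.2 + 0.001 * i for i in range(1200)), key=sse)
```

In the real procedure the model constants enter a coupled set of ordinary differential equations, but the fitting logic, minimising the misfit to measured profiles, is the same.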


  3. An improved approach for flow-based cloud point extraction.

    Science.gov (United States)

    Frizzarin, Rejane M; Rocha, Fábio R P

    2014-04-11

    Novel strategies are proposed to circumvent the main drawbacks of flow-based cloud point extraction (CPE). The surfactant-rich phase (SRP) was directly retained in the optical path of the spectrophotometric cell, thus avoiding its dilution prior to measurement and yielding higher sensitivity. Solenoid micro-pumps were exploited to improve mixing by the pulsed flow and also to modulate the flow-rate for retention and removal of the SRP, thus avoiding the elution step, often carried out with organic solvents. The heat released and the increase of the salt concentration provided by an on-line neutralization reaction were exploited to induce the cloud point without an external heating device. These innovations were demonstrated by the spectrophotometric determination of iron, yielding a linear response from 10 to 200 μg L(-1) with a coefficient of variation of 2.3% (n=7). Detection limit and sampling rate were estimated at 5 μg L(-1) (95% confidence level) and 26 samples per hour, respectively. The enrichment factor was 8.9 and the procedure consumed only 6 μg of TAN and 390 μg of Triton X-114 per determination. At the 95% confidence level, the results obtained for freshwater samples agreed with the reference procedure and those obtained for digests of bovine muscle, rice flour, brown bread and tort lobster agreed with the certified reference values. The proposed procedure thus shows advantages in relation to previously proposed approaches for flow-based CPE, making it a fast and environmentally friendly alternative for on-line separation and pre-concentration. Copyright © 2014 Elsevier B.V. All rights reserved.
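The figures of merit quoted above (linear response, detection limit) follow from a standard external calibration; the sketch below uses synthetic calibration points and an assumed blank standard deviation, not the paper's raw data.

```python
# Least-squares calibration line for a hypothetical iron determination
# (concentrations in ug/L, absorbance in arbitrary signal units).
conc = [10, 50, 100, 150, 200]
signal = [0.021, 0.101, 0.201, 0.301, 0.401]   # synthetic, near-linear data

n = len(conc)
mean_x = sum(conc) / n
mean_y = sum(signal) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(conc, signal)) \
        / sum((x - mean_x) ** 2 for x in conc)
intercept = mean_y - slope * mean_x

# Detection limit estimated as 3 * s_blank / slope (IUPAC convention),
# with an assumed blank standard deviation.
s_blank = 0.0033
lod = 3 * s_blank / slope
```

With these illustrative numbers the detection limit comes out near 5 ug/L, the order of magnitude reported in the abstract.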

  4. A Solvent-Free Base Liberation of a Tertiary Aminoalkyl Halide by Flow Chemistry

    DEFF Research Database (Denmark)

    Pedersen, Michael Jønch; Skovby, Tommy; Mealy, Michael J.

    2016-01-01

    A flow setup for base liberation of 3-(N,N-dimethylamino)propyl chloride hydrochloride and solvent-free separation of the resulting free base has been developed. Production in flow profits from an on-demand approach, useful for labile aminoalkyl halides. The requirement for obtaining a dry product...... has been fulfilled by the simple use of a saturated NaOH solution, followed by isolation of the liquid phases by gravimetric separation. The flow setup has an E factor reduction of nearly 50%, and a distillation step has been avoided. The method exemplifies how flow chemistry can be exploited...

  5. A Risk-Based Ecohydrological Approach to Assessing Environmental Flow Regimes

    Science.gov (United States)

    Mcgregor, Glenn B.; Marshall, Jonathan C.; Lobegeiger, Jaye S.; Holloway, Dean; Menke, Norbert; Coysh, Julie

    2018-03-01

    For several decades there has been recognition that water resource development alters river flow regimes and impacts ecosystem values. Determining strategies to protect or restore flow regimes to achieve ecological outcomes is a focus of water policy and legislation in many parts of the world. However, consideration of existing environmental flow assessment approaches for application in Queensland identified deficiencies precluding their adoption. Firstly, in managing flows and using ecosystem condition as an indicator of effectiveness, many approaches ignore the fact that river ecosystems are subjected to threatening processes other than flow regime alteration. Secondly, many focus on providing flows for responses without considering how often they are necessary to sustain ecological values in the long term. Finally, few consider requirements at spatial scales relevant to the desired outcomes, with frequent focus on individual places rather than the regions supporting sustainability. Consequently, we developed a risk-based ecohydrological approach that identifies ecosystem values linked to desired ecological outcomes, is sensitive to flow alteration and uses indicators of broader ecosystem requirements. Monitoring and research are undertaken to quantify flow dependencies, and ecological modelling is used to quantify flow-related ecological responses over a historical flow period. The relative risk from different flow management scenarios can then be evaluated at relevant spatial scales. This overcomes the deficiencies identified above and provides a robust and useful foundation upon which to build the information needed to support water planning decisions. Application of the risk assessment approach is illustrated here by two case studies.

  6. Exploitation as the Unequal Exchange of Labour : An Axiomatic Approach

    OpenAIRE

    Yoshihara, Naoki; Veneziani, Roberto

    2009-01-01

    In subsistence economies with general convex technology and rational optimising agents, a new, axiomatic approach is developed, which allows an explicit analysis of the core positive and normative intuitions behind the concept of exploitation. Three main new axioms, called Labour Exploitation in Subsistence Economies , Relational Exploitation , and Feasibility of Non-Exploitation , are presented and it is proved that they uniquely characterise a definition of exploitation conceptually related...

  7. Activity-based exploitation of Full Motion Video (FMV)

    Science.gov (United States)

    Kant, Shashi

    2012-06-01

    Video has been a game-changer in how US forces are able to find, track and defeat their adversaries. With millions of minutes of video being generated from an increasing number of sensor platforms, the DOD has stated that the rapid increase in video is overwhelming its analysts. The manpower required to view and garner usable information from the flood of video is unaffordable, especially in light of current fiscal restraints. "Search" within full-motion video has traditionally relied on human tagging of content, and on video metadata, to provide filtering and locate segments of interest in the context of an analyst's query. Our approach instead uses machine vision to index FMV, combining object recognition and tracking with detection of events and activities. This enables FMV exploitation in real time, as well as forensic look-back within archives. Such an approach can help get the most information out of video sensor collection, help focus the attention of overburdened analysts, form connections in activity over time, and conserve national fiscal resources in exploiting FMV.

  8. The multiphase flow system used in exploiting depleted reservoirs: water-based Micro-bubble drilling fluid

    International Nuclear Information System (INIS)

    Zheng Lihui; He Xiaoqing; Wang Xiangchun; Fu Lixia

    2009-01-01

    Water-based micro-bubble drilling fluid, which is used to exploit depleted reservoirs, is a complicated multiphase flow system composed of gas, water, oil, polymer, surfactants and solids. The gas phase is separated from the bulk water by two layers and three membranes. From each gas phase centre to the bulk water, these are the 'surface tension reducing membrane', the 'high viscosity layer', the 'high viscosity fixing membrane', the 'compatibility enhancing membrane' and the 'concentration transition layer of linear high polymer (LHP) and surfactants'. The 'surface tension reducing membrane', 'high viscosity layer' and 'high viscosity fixing membrane' bond closely to pack air, forming an 'air-bag'; the 'compatibility enhancing membrane' and 'concentration transition layer of LHP and surfactants' absorb outside the 'air-bag' to form an 'incompact zone'. From another point of view, the 'air-bag' and 'incompact zone' together compose the micro-bubble. Dynamic changes of the 'incompact zone' enable micro-bubbles to exist singly or aggregate together, and lead the whole fluid, which can wet both hydrophilic and hydrophobic surfaces, to possess very high viscosity at an extremely low shear rate but good fluidity at a higher shear rate. When the water-based micro-bubble drilling fluid encounters leakage zones, it automatically regulates the sizes and shapes of the bubbles according to the slot width of a fracture, the height of a cavern or the aperture of openings, or seals them by making use of the high viscosity of the system at a very low shear rate. Measurements of the rheological parameters indicate that water-based micro-bubble drilling fluid has very high plastic viscosity, yield point, initial gel and final gel, and a high ratio of yield point to plastic viscosity. All of these properties make the multiphase flow system meet the requirements of the petroleum drilling industry. Research on the interface between gas and bulk water of this multiphase flow system can provide us with information of synthesizing effective
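The rheology described above, very high viscosity at extremely low shear rates and good fluidity at higher rates, is characteristic of a Bingham-plastic fluid. The sketch below uses the Bingham model with illustrative parameter values, not measured values from the paper.

```python
# Bingham plastic model: tau = tau_y + mu_p * gamma_dot  (for tau > tau_y),
# apparent viscosity mu_app = tau / gamma_dot.
tau_y = 10.0    # yield point, Pa         (illustrative value)
mu_p = 0.05     # plastic viscosity, Pa.s (illustrative value)

def apparent_viscosity(gamma_dot):
    """Apparent viscosity of a Bingham fluid at shear rate gamma_dot (1/s)."""
    return (tau_y + mu_p * gamma_dot) / gamma_dot

low = apparent_viscosity(0.1)     # extremely low shear rate
high = apparent_viscosity(500.0)  # higher shear rate
```

The yield-stress term dominates at low shear rate, so the apparent viscosity is orders of magnitude higher there than at high shear rate, which is exactly the sealing-versus-pumpability behaviour the abstract describes.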

  9. Methodological Approach to Company Cash Flows Target-Oriented Forecasting Based on Financial Position Analysis

    OpenAIRE

    Sergey Krylov

    2012-01-01

    The article presents a new methodological approach to target-oriented forecasting of company cash flows based on financial position analysis. The approach is universal and rests on the following techniques developed by the author: a financial ratio value correction technique and a cash flow correction technique. The financial ratio value correction technique serves to analyze and forecast the company financial position, while the cash flow correction technique i...

  10. Are Flow Injection-based Approaches Suitable for Automated Handling of Solid Samples?

    DEFF Research Database (Denmark)

    Miró, Manuel; Hansen, Elo Harald; Cerdà, Victor

    Flow-based approaches were originally conceived for liquid-phase analysis, implying that constituents in solid samples generally had to be transferred into the liquid state, via appropriate batch pretreatment procedures, prior to analysis. Yet, in recent years, much effort has been focused...... electrolytic or aqueous leaching, on-line dialysis/microdialysis, in-line filtration, and pervaporation-based procedures have been successfully implemented in continuous flow/flow injection systems. In this communication, the new generation of flow analysis, including sequential injection, multicommutated flow.......g., soils, sediments, sludges), and thus, ascertaining the potential mobility, bioavailability and eventual impact of anthropogenic elements on biota [2]. In this context, the principles of sequential injection-microcolumn extraction (SI-MCE) for dynamic fractionation are explained in detail along...

  11. A service and value based approach to estimating environmental flows

    DEFF Research Database (Denmark)

    Korsgaard, Louise; Jensen, R.A.; Jønch-Clausen, Torkil

    2008-01-01

    at filling that gap by presenting a new environmental flows assessment approach that explicitly links environmental flows to (socio)-economic values by focusing on ecosystem services. This Service Provision Index (SPI) approach is a novel contribution to the existing field of environmental flows assessment...... of sustaining ecosystems but also a matter of supporting humankind/livelihoods. One reason for the marginalisation of environmental flows is the lack of operational methods to demonstrate the inherently multi-disciplinary link between environmental flows, ecosystem services and economic value. This paper aims...

  12. Exploiting the Error-Correcting Capabilities of Low Density Parity Check Codes in Distributed Video Coding using Optical Flow

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau; Søgaard, Jacob; Salmistraro, Matteo

    2012-01-01

    We consider Distributed Video Coding (DVC) in presence of communication errors. First, we present DVC side information generation based on a new method of optical flow driven frame interpolation, where a highly optimized TV-L1 algorithm is used for the flow calculations and combine three flows....... Thereafter methods for exploiting the error-correcting capabilities of the LDPCA code in DVC are investigated. The proposed frame interpolation includes a symmetric flow constraint to the standard forward-backward frame interpolation scheme, which improves quality and handling of large motion. The three...... flows are combined in one solution. The proposed frame interpolation method consistently outperforms an overlapped block motion compensation scheme and a previous TV-L1 optical flow frame interpolation method with an average PSNR improvement of 1.3 dB and 2.3 dB respectively. For a GOP size of 2...
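The symmetric-flow frame interpolation idea can be illustrated with a 1-D toy: motion-compensate the previous and next frames each by half the displacement and blend them. The constant integer flow and the tiny "frames" below are illustrative simplifications of the dense TV-L1 flow fields used in the paper.

```python
# Toy symmetric frame interpolation: given frames at t-1 and t+1 and a
# (constant, even) displacement d between them, estimate the frame at t
# by shifting each neighbour half-way and averaging.
def shift(frame, s):
    """Shift a 1-D frame by s pixels, replicating the border."""
    n = len(frame)
    return [frame[min(max(i - s, 0), n - 1)] for i in range(n)]

def interpolate(prev_frame, next_frame, d):
    half = d // 2
    fwd = shift(prev_frame, half)    # carry previous frame forward by d/2
    bwd = shift(next_frame, -half)   # carry next frame backward by d/2
    return [(a + b) / 2 for a, b in zip(fwd, bwd)]

# An object (value 9) moving 2 pixels over 2 frames on a zero background.
prev_frame = [0, 9, 0, 0, 0]
next_frame = [0, 0, 0, 9, 0]
mid = interpolate(prev_frame, next_frame, 2)
```

Because both neighbours are warped to the same time instant, the blended frame places the object at its mid-position, which is why the symmetric constraint improves handling of large motion.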

  13. A critical comparison of constant and pulsed flow systems exploiting gas diffusion.

    Science.gov (United States)

    Silva, Claudineia Rodrigues; Henriquez, Camelia; Frizzarin, Rejane Mara; Zagatto, Elias Ayres Guidetti; Cerda, Victor

    2016-02-01

    Considering the beneficial aspects arising from the implementation of pulsed flows in flow analysis, and the relevance of in-line gas diffusion as an analyte separation/concentration step, the influence of flow pattern in flow systems with in-line gas diffusion was critically investigated. To this end, constant or pulsed flows delivered by syringe or solenoid pumps were exploited. For each flow pattern, two variants involving different interaction times of the donor with the acceptor streams were studied. In the first one, both the acceptor and donor streams were continuously flowing, whereas in the second one, the acceptor was stopped during the gas diffusion step. Four different volatile species (ammonia, ethanol, carbon dioxide and hydrogen sulfide) were selected as models. For the flow patterns and variants studied, the efficiencies of mass transport in the gas diffusion process were compared, and sensitivity, repeatability, sampling frequency and recorded peak shape were evaluated. Analysis of the results revealed that sensitivity is strongly dependent on the implemented variant, and that flow pattern is an important feature in flow systems with in-line gas diffusion. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Drag reduction in a turbulent channel flow using a passivity-based approach

    Science.gov (United States)

    Heins, Peter; Jones, Bryn; Sharma, Atul

    2013-11-01

    A new active feedback control strategy for attenuating perturbation energy in a turbulent channel flow is presented. Using a passivity-based approach, a controller synthesis procedure has been devised which is capable of making the linear dynamics of a channel flow as close to passive as is possible given the limitations on sensing and actuation. A controller that is capable of making the linearized flow passive is guaranteed to globally stabilize the true flow. The resulting controller is capable of greatly restricting the amount of turbulent energy that the nonlinearity can feed back into the flow. DNS testing of a controller using wall-sensing of streamwise and spanwise shear stress and actuation via wall transpiration acting upon channel flows with Reτ = 100 - 250 showed significant reductions in skin-friction drag.

  15. MacCormack's technique-based pressure reconstruction approach for PIV data in compressible flows with shocks

    Science.gov (United States)

    Liu, Shun; Xu, Jinglei; Yu, Kaikai

    2017-06-01

    This paper proposes an improved approach for the extraction of pressure fields from velocity data, such as those obtained by particle image velocimetry (PIV), especially for steady compressible flows with strong shocks. The principle of this approach is derived from the Navier-Stokes equations, assuming adiabatic conditions and neglecting viscosity at the boundaries of the flow field measured by PIV. The computing method is based on MacCormack's technique in computational fluid dynamics; thus, this approach is called the MacCormack method. Moreover, the MacCormack method is compared with several approaches proposed in previous literature, including the isentropic method, spatial integration and the Poisson method. The effects of velocity error level and PIV spatial resolution on these approaches are also quantified by using artificial velocity data containing shock waves. The results demonstrate that the MacCormack method has higher reconstruction accuracy than the other approaches, and its advantages become more remarkable as the shock strengthens. Furthermore, the performance of the MacCormack method is also validated using synthetic PIV images with an oblique shock wave, confirming the feasibility and advantage of this approach in real PIV experiments. This work is highly relevant to studies in aerospace engineering, especially the outer flow fields of supersonic aircraft and the internal flow fields of ramjets.
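In one space dimension the idea reduces to marching the steady inviscid momentum equation dp/dx = -rho*u*du/dx through the measured velocity field with a predictor-corrector step in the spirit of MacCormack's scheme. The smooth nozzle-like velocity profile below is synthetic, standing in for PIV data, and the result is checked against Bernoulli's relation; this is a sketch of the principle, not the paper's shock-capturing implementation.

```python
import math

# Synthetic "PIV" velocity field on a line: smooth acceleration 1 -> 2 m/s.
rho = 1.0
n = 201
dx = 1.0 / (n - 1)
u = [1.0 + 0.5 * (1 - math.cos(math.pi * i * dx)) for i in range(n)]

# March dp/dx = -rho * u * du/dx with a MacCormack-style predictor-corrector:
# the predictor advances p using the convective term evaluated with u[i],
# the corrector re-evaluates it with u[i+1], and the two are averaged.
p = [0.0] * n
for i in range(n - 1):
    dudx = (u[i + 1] - u[i]) / dx
    p_pred = p[i] - rho * u[i] * dudx * dx        # predictor (upwind value)
    p_corr = p[i] - rho * u[i + 1] * dudx * dx    # corrector (downwind value)
    p[i + 1] = 0.5 * (p_pred + p_corr)

# Consistency check: for this inviscid 1-D flow, Bernoulli's relation
# p + 0.5*rho*u**2 = const should hold along the line.
bernoulli_error = abs((p[-1] + 0.5 * rho * u[-1] ** 2)
                      - (p[0] + 0.5 * rho * u[0] ** 2))
```

Averaging the predictor and corrector makes each step use the mean of the neighbouring velocities, so the discrete integral telescopes to the exact Bernoulli pressure drop for this smooth field.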

  16. Nanoparticle-based assays in automated flow systems: A review

    Energy Technology Data Exchange (ETDEWEB)

    Passos, Marieta L.C. [LAQV, REQUIMTE, Departamento de Ciências Químicas, Faculdade de Farmácia, Universidade do Porto, Rua Jorge Viterbo Ferreira, n° 228, 4050-313 Porto (Portugal); Pinto, Paula C.A.G., E-mail: ppinto@ff.up.pt [LAQV, REQUIMTE, Departamento de Ciências Químicas, Faculdade de Farmácia, Universidade do Porto, Rua Jorge Viterbo Ferreira, n° 228, 4050-313 Porto (Portugal); Santos, João L.M., E-mail: joaolms@ff.up.pt [LAQV, REQUIMTE, Departamento de Ciências Químicas, Faculdade de Farmácia, Universidade do Porto, Rua Jorge Viterbo Ferreira, n° 228, 4050-313 Porto (Portugal); Saraiva, M. Lúcia M.F.S., E-mail: lsaraiva@ff.up.pt [LAQV, REQUIMTE, Departamento de Ciências Químicas, Faculdade de Farmácia, Universidade do Porto, Rua Jorge Viterbo Ferreira, n° 228, 4050-313 Porto (Portugal); Araujo, André R.T.S. [LAQV, REQUIMTE, Departamento de Ciências Químicas, Faculdade de Farmácia, Universidade do Porto, Rua Jorge Viterbo Ferreira, n° 228, 4050-313 Porto (Portugal); Unidade de Investigação para o Desenvolvimento do Interior, Instituto Politécnico da Guarda, Av. Dr. Francisco de Sá Carneiro, n° 50, 6300-559 Guarda (Portugal)

    2015-08-19

    Nanoparticles (NPs) exhibit a number of distinctive and entrancing properties that explain their ever-increasing application in analytical chemistry, mainly as chemosensors, signaling tags, catalysts, analytical signal enhancers, reactive species generators, analyte recognition and scavenging/separation entities. The prospect of associating NPs with automated flow-based analytical systems is undoubtedly a challenging perspective, as it would permit confined, cost-effective and reliable analysis within a shorter timeframe, while exploiting the features of NPs. This article examines the state of the art of continuous flow analysis and microfluidic approaches involving NPs such as noble metals (gold and silver), magnetic materials, carbon, silica or quantum dots. Emphasis is devoted to NP format, main practical achievements and fields of application. In this context, the functionalization of NPs with distinct chemical species and ligands is discussed with regard to the motivations and strengths of the developed approaches. The utilization of NPs to improve detector performance in electrochemical applications is outside the scope of this review. The works discussed in this review were published between 2000 and 2013. - Highlights: • The state of the art of flowing stream systems comprising NPs was reviewed. • The use of different types of nanoparticles in each flow technique is discussed. • The most expressive and profitable applications are summarized. • The main conclusions and future perspectives are compiled in the final section.


  18. AN AERIAL-IMAGE DENSE MATCHING APPROACH BASED ON OPTICAL FLOW FIELD

    Directory of Open Access Journals (Sweden)

    W. Yuan

    2016-06-01

    Dense matching plays an important role in many fields, such as DEM (digital elevation model) production, robot navigation and 3D environment reconstruction. Traditional approaches may meet accuracy demands, but their computation time and output density are hardly acceptable. Focusing on matching efficiency and the feasibility of matching complex terrain surfaces, an aerial-image dense matching method based on the optical flow field is proposed in this paper. First, highly accurate and uniformly distributed control points are extracted using a feature-based matching method, and the optical flow is calculated from these control points so as to determine the similar region between two images. Second, the optical flow field is interpolated using multi-level B-spline interpolation in the similar region, accomplishing pixel-by-pixel coarse matching. Finally, the coarse matching results are refined based on a combined constraint, which identifies corresponding points between images. The experimental results show that our method achieves per-pixel dense matching with sub-pixel accuracy, fully meeting the requirements of three-dimensional reconstruction and automatic DSM generation. Comparison experiments demonstrate that our approach's matching efficiency is higher than that of semi-global matching (SGM) and patch-based multi-view stereo (PMVS), which verifies the feasibility and effectiveness of the algorithm.
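The densification step, interpolating a flow field known only at sparse control points, can be illustrated on a single control-point cell. Bilinear interpolation on a regular grid is used here as a simplifying stand-in for the paper's multi-level B-spline interpolation, which handles scattered points; the corner flow values are illustrative.

```python
# Flow values (e.g. horizontal displacement in pixels) known at the four
# corners of a control-point cell; densify the cell by bilinear interpolation.
def bilinear(f00, f10, f01, f11, s, t):
    """Interpolate at fractional position (s, t) within the unit cell."""
    return (f00 * (1 - s) * (1 - t) + f10 * s * (1 - t)
            + f01 * (1 - s) * t + f11 * s * t)

# Corner flows of one cell (illustrative values).
corners = dict(f00=2.0, f10=4.0, f01=2.0, f11=4.0)

# A 5x5 dense grid of interpolated flow values inside the cell.
dense = [[bilinear(s=j / 4, t=i / 4, **corners) for j in range(5)]
         for i in range(5)]
```

Each pixel inside the cell receives a smoothly varying flow vector, which is what makes the subsequent pixel-by-pixel coarse matching possible.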

  19. Improving Anomaly Detection for Text-Based Protocols by Exploiting Message Structures

    Directory of Open Access Journals (Sweden)

    Christian M. Mueller

    2010-12-01

    Service platforms using text-based protocols need to be protected against attacks. Machine-learning algorithms with pattern matching can be used to detect even previously unknown attacks. In this paper, we present an extension to known Support Vector Machine (SVM)-based anomaly detection algorithms for the Session Initiation Protocol (SIP). Our contribution is to extend the number of features used for classification (the feature space) by exploiting the structure of SIP messages, which reduces the false positive rate. Additionally, we show how combining our approach with attribute reduction significantly improves throughput.
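The feature-space extension can be illustrated by deriving per-header statistics from a SIP message instead of treating it as one flat byte string. The message below and the particular features chosen are illustrative, not the authors' actual feature set.

```python
# Structural feature extraction for a simplified, illustrative SIP request:
# counts and lengths that exploit the header structure of the message.
msg = (
    "INVITE sip:bob@example.com SIP/2.0\r\n"
    "Via: SIP/2.0/UDP host.example.com;branch=z9hG4bK776\r\n"
    "From: Alice <sip:alice@example.com>;tag=1928301774\r\n"
    "To: Bob <sip:bob@example.com>\r\n"
    "Call-ID: a84b4c76e66710\r\n"
    "CSeq: 314159 INVITE\r\n"
    "Content-Length: 0\r\n"
    "\r\n"
)

def sip_features(raw):
    head = raw.split("\r\n\r\n")[0]
    request_line, *headers = head.split("\r\n")
    parsed = dict(h.split(":", 1) for h in headers)
    return {
        "method": request_line.split(" ")[0],
        "num_headers": len(headers),
        "max_header_len": max(len(h) for h in headers),
        "has_content_length": "Content-Length" in parsed,
    }

feats = sip_features(msg)
```

A feature vector of this kind, one entry per structural property rather than per byte, is what would then be fed to the SVM classifier.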

  20. A multicriteria approach to identify investment opportunities for the exploitation of the clean development mechanism

    International Nuclear Information System (INIS)

    Diakoulaki, D.; Georgiou, P.; Tourkolias, C.; Georgopoulou, E.; Lalas, D.; Mirasgedis, S.; Sarafidis, Y.

    2007-01-01

    The aim of the present paper is to investigate the prospects for the exploitation of the Kyoto Protocol's Clean Development Mechanism (CDM) in Greece. The paper addresses three questions: in which country, what kind of investment, and with what economic and environmental return? The proposed approach is based on a multicriteria analysis for identifying priority countries and interesting investment opportunities in each priority country. These opportunities are then evaluated through a conventional financial analysis in order to assess their economic and environmental attractiveness. To this purpose, the IRR of a typical project in each investment category is calculated by taking into account country-specific parameters, such as baseline emission factors, load factors, costs, energy prices, etc. The results reveal substantial differences in the economic and environmental return of different types of projects in different host countries, and show that the full exploitation of the CDM requires a multifaceted approach to decision-making.
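The financial screening step can be sketched with a plain IRR computation. The cash flows below (year-0 capex, then annual revenue combining energy sales and certified emission reduction credits) are illustrative, not values from the study.

```python
# IRR of a stylised CDM project: year-0 capex, then ten years of revenue
# combining energy sales and certified emission reduction (CER) credits.
capex = -1000.0
annual_energy = 120.0
annual_cer = 40.0
cash_flows = [capex] + [annual_energy + annual_cer] * 10

def npv(rate, flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

def irr(flows, lo=0.0, hi=1.0, tol=1e-8):
    """Bisection on NPV(rate) = 0; assumes one sign change in [lo, hi]."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo, flows) * npv(mid, flows) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

rate = irr(cash_flows)
```

Varying the country-specific inputs (baseline emission factor, load factor, energy price) changes the annual revenue terms and hence the IRR, which is how the host-country comparison in the paper works.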

  1. A distributed predictive control approach for periodic flow-based networks: application to drinking water systems

    Science.gov (United States)

    Grosso, Juan M.; Ocampo-Martinez, Carlos; Puig, Vicenç

    2017-10-01

    This paper proposes a distributed model predictive control approach designed to work in a cooperative manner for controlling flow-based networks showing periodic behaviours. Under this distributed approach, local controllers cooperate to enhance the performance of the whole flow network while avoiding the use of a coordination layer. Each controller uses both the monolithic model of the network and the given global cost function to optimise its local control inputs, taking into account the effect of its decisions on the remaining subsystems of the network. In this sense, a global (all-to-all) communication strategy is considered. Although Pareto optimality cannot be reached due to the existence of non-sparse coupling constraints, asymptotic convergence to a Nash equilibrium is guaranteed. The resulting strategy is tested, and its effectiveness shown, when applied to a large-scale complex flow-based network: the Barcelona drinking water supply system.
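The convergence-to-Nash property can be illustrated on a two-player quadratic game solved by iterated best responses. The coupled cost functions are illustrative stand-ins for the local MPC problems, not the paper's formulation.

```python
# Two controllers, each minimising a quadratic local cost coupled to the
# other's decision:
#   J1(u1, u2) = (u1 - 1)**2 + 0.5 * u1 * u2
#   J2(u1, u2) = (u2 - 2)**2 + 0.5 * u1 * u2
# Setting dJ1/du1 = 0 and dJ2/du2 = 0 gives the best responses:
#   u1 = 1 - u2 / 4,   u2 = 2 - u1 / 4.
def br1(u2):
    return 1 - u2 / 4

def br2(u1):
    return 2 - u1 / 4

u1, u2 = 0.0, 0.0
for _ in range(50):            # iterated best responses (Gauss-Seidel sweep)
    u1 = br1(u2)
    u2 = br2(u1)

# Analytic Nash equilibrium from the fixed point u1 = 1 - (2 - u1/4)/4.
u1_star = (1 - 2 / 4) / (1 - 1 / 16)   # = 8/15
u2_star = 2 - u1_star / 4              # = 28/15
```

Because each best-response map is a contraction here, the iteration converges to the unique Nash equilibrium; the Nash point is in general not the Pareto-optimal joint decision, mirroring the trade-off noted in the abstract.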

  2. Exploitation of Labour and Exploitation of Commodities: a “New Interpretation”

    OpenAIRE

    Veneziani, Roberto; Yoshihara, Naoki

    2011-01-01

    In the standard Okishio-Morishima approach, the existence of profits is proved to be equivalent to the exploitation of labour. Yet, it can also be proved that the existence of profits is equivalent to the ‘exploitation’ of any good. Labour and commodity exploitation are just different numerical representations of the productiveness of the economy. This paper presents an alternative approach to exploitation theory which is related to the New Interpretation (Duménil 1980; Foley 1982). In this a...

  3. Root Exploit Detection and Features Optimization: Mobile Device and Blockchain Based Medical Data Management.

    Science.gov (United States)

    Firdaus, Ahmad; Anuar, Nor Badrul; Razak, Mohd Faizal Ab; Hashem, Ibrahim Abaker Targio; Bachok, Syafiq; Sangaiah, Arun Kumar

    2018-05-04

    The increasing demand for Android mobile devices and blockchain has motivated malware creators to develop mobile malware to compromise the blockchain. Although the blockchain is secure, attackers have managed to gain access to the blockchain as legal users, thereby compromising important and crucial information. Examples of mobile malware include root exploits, botnets, and Trojans, of which root exploits are among the most dangerous. A root exploit compromises the operating system kernel in order to gain root privileges, which are then used by attackers to bypass security mechanisms, gain complete control of the operating system, install other types of malware on the device, and, finally, steal victims' private keys linked to the blockchain. For the purpose of maximizing the security of blockchain-based medical data management (BMDM), it is crucial to investigate the novel features and approaches contained in root exploit malware. This study proposes to use the bio-inspired method of particle swarm optimization (PSO), which automatically selects exclusive features, including those derived from the novel Android Debug Bridge (ADB). This study also adopts boosting (AdaBoost, RealAdaBoost, LogitBoost, and MultiBoost) to enhance the machine-learning prediction that detects unknown root exploits, and scrutinizes three categories of features: (1) system commands, (2) directory paths and (3) code-based features. The evaluation suggests a marked accuracy value of 93% with LogitBoost in the simulation. LogitBoost also predicted all the root exploit samples in our developed system, the root exploit detection system (RODS).
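The PSO feature-selection component can be sketched with a binary PSO over a toy fitness function. The 6-feature problem, the fitness, and all parameter values below are illustrative stand-ins; in the study the fitness would be the accuracy of a classifier trained on the selected malware features.

```python
import math
import random

random.seed(42)

# Toy feature-selection fitness: of 6 candidate features, features 0 and 3
# are "informative"; selecting them is rewarded and extra features are
# penalised (a stand-in for a real classifier-accuracy fitness).
INFORMATIVE = {0, 3}
N_FEATURES = 6

def fitness(mask):
    hits = sum(1 for i in INFORMATIVE if mask[i])
    extras = sum(mask) - hits
    return hits - 0.2 * extras

def binary_pso(n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Binary PSO with a sigmoid transfer function (Kennedy-Eberhart style)."""
    particles = [[random.random() < 0.5 for _ in range(N_FEATURES)]
                 for _ in range(n_particles)]
    velocity = [[0.0] * N_FEATURES for _ in range(n_particles)]
    pbest = [p[:] for p in particles]
    gbest = max(pbest, key=fitness)[:]
    for _ in range(iters):
        for k in range(n_particles):
            for j in range(N_FEATURES):
                r1, r2 = random.random(), random.random()
                velocity[k][j] = (w * velocity[k][j]
                                  + c1 * r1 * (pbest[k][j] - particles[k][j])
                                  + c2 * r2 * (gbest[j] - particles[k][j]))
                prob = 1 / (1 + math.exp(-velocity[k][j]))  # bit-set probability
                particles[k][j] = random.random() < prob
            if fitness(particles[k]) > fitness(pbest[k]):
                pbest[k] = particles[k][:]
        gbest = max(pbest, key=fitness)[:]
    return gbest

best_mask = binary_pso()
```

The swarm converges on a mask that keeps the informative features and drops the rest; in the study, the selected subset would then be handed to the boosting classifiers.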

  4. Model-based design and optimization of vanadium redox flow batteries

    Energy Technology Data Exchange (ETDEWEB)

    Koenig, Sebastian

    2017-07-19

    This work aims at increasing the efficiency of the Vanadium Redox Flow Battery (VRFB) using a model-based approach. First, detailed instructions for setting up a VRFB model at the system level are given. Modelling of open-circuit voltage, ohmic overpotential, concentration overpotential, vanadium crossover, shunt currents as well as pump power demand is presented. All sub-models are illustrated using numerical examples. Using experimental data from three battery manufacturers, the voltage model is validated. The identified deviations reveal deficiencies in the literature model. By correctly deriving the mass transfer coefficients and adapting the effective electrode area, these deficiencies are eliminated. The validated battery model is then deployed in an extensive design study. By varying the electrode area between 1000 and 4000 cm{sup 2} and varying the design of the electrolyte supply channel, twenty-four different cell designs are created using finite element analysis. These designs are subsequently simulated in 40-cell stacks deployed in systems with a single stack and in systems with a three-stack string. Using the simulation results, the impact of different design parameters on different loss mechanisms is investigated. During VRFB operation, the electrolyte flow rate is the most important operational parameter. A novel, model-based optimization strategy is presented and compared to established flow rate control strategies. Further, a voltage controller is introduced which delays the violation of cell voltage limits by controlling the flow rate as long as the pump capacity is not fully exploited.

  5. Model-based design and optimization of vanadium redox flow batteries

    International Nuclear Information System (INIS)

    Koenig, Sebastian

    2017-01-01

    This work aims at increasing the efficiency of the Vanadium Redox Flow Battery (VRFB) using a model-based approach. First, detailed instructions for setting up a VRFB model at the system level are given. Modelling of open-circuit voltage, ohmic overpotential, concentration overpotential, vanadium crossover, shunt currents as well as pump power demand is presented. All sub-models are illustrated using numerical examples. Using experimental data from three battery manufacturers, the voltage model is validated. The identified deviations reveal deficiencies in the literature model. By correctly deriving the mass transfer coefficients and adapting the effective electrode area, these deficiencies are eliminated. The validated battery model is then deployed in an extensive design study. By varying the electrode area between 1000 and 4000 cm² and varying the design of the electrolyte supply channel, twenty-four different cell designs are created using finite element analysis. These designs are subsequently simulated in 40-cell stacks deployed in systems with a single stack and in systems with a three-stack string. Using the simulation results, the impact of different design parameters on different loss mechanisms is investigated. During VRFB operation, the electrolyte flow rate is the most important operational parameter. A novel, model-based optimization strategy is presented and compared to established flow rate control strategies. Further, a voltage controller is introduced which delays the violation of cell voltage limits by controlling the flow rate as long as the pump capacity is not fully exploited.

  6. TwitterSensing: An Event-Based Approach for Wireless Sensor Networks Optimization Exploiting Social Media in Smart City Applications.

    Science.gov (United States)

    Costa, Daniel G; Duran-Faundez, Cristian; Andrade, Daniel C; Rocha-Junior, João B; Peixoto, João Paulo Just

    2018-04-03

    Modern cities are subject to periodic or unexpected critical events, which may bring economic losses or even put people in danger. When some monitoring systems based on wireless sensor networks are deployed, sensing and transmission configurations of sensor nodes may be adjusted exploiting the relevance of the considered events, but efficient detection and classification of events of interest may be hard to achieve. In Smart City environments, several people spontaneously post information in social media about some event that is being observed and such information may be mined and processed for detection and classification of critical events. This article proposes an integrated approach to detect and classify events of interest posted in social media, notably in Twitter, and the assignment of sensing priorities to source nodes. By doing so, wireless sensor networks deployed in Smart City scenarios can be optimized for higher efficiency when monitoring areas under the influence of the detected events.

  7. TwitterSensing: An Event-Based Approach for Wireless Sensor Networks Optimization Exploiting Social Media in Smart City Applications

    Directory of Open Access Journals (Sweden)

    Daniel G. Costa

    2018-04-01

    Full Text Available Modern cities are subject to periodic or unexpected critical events, which may bring economic losses or even put people in danger. When some monitoring systems based on wireless sensor networks are deployed, sensing and transmission configurations of sensor nodes may be adjusted exploiting the relevance of the considered events, but efficient detection and classification of events of interest may be hard to achieve. In Smart City environments, several people spontaneously post information in social media about some event that is being observed and such information may be mined and processed for detection and classification of critical events. This article proposes an integrated approach to detect and classify events of interest posted in social media, notably in Twitter, and the assignment of sensing priorities to source nodes. By doing so, wireless sensor networks deployed in Smart City scenarios can be optimized for higher efficiency when monitoring areas under the influence of the detected events.

  8. Flow Regime Identification of Co-Current Downward Two-Phase Flow With Neural Network Approach

    International Nuclear Information System (INIS)

    Hiroshi Goda; Seungjin Kim; Ye Mi; Finch, Joshua P.; Mamoru Ishii; Jennifer Uhle

    2002-01-01

    Flow regime identification for an adiabatic vertical co-current downward air-water two-phase flow in 25.4 mm ID and 50.8 mm ID round tubes was performed by employing an impedance void meter coupled with a neural network classification approach. This approach minimizes subjective judgment in determining the flow regimes. The signals obtained by the impedance void meter were used to train a self-organizing neural network to categorize these impedance signals into a certain number of groups. The characteristic parameters fed into the neural network classifier were the mean, standard deviation and skewness of the impedance signals in the present experiment. The classification categories adopted in the present investigation were four widely accepted flow regimes, viz. bubbly, slug, churn-turbulent, and annular flows. These four flow regimes were identified by the conventional flow visualization approach using a high-speed motion analyzer. The resulting flow regime maps classified by the neural network were compared with the results obtained through the flow visualization method, and consequently the efficiency of the neural network classification for flow regime identification was demonstrated. (authors)
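
    The three characteristic parameters fed to the classifier can be computed directly from the raw impedance samples. A small sketch (the self-organizing network itself is omitted, and the example signal is an idealized stand-in, not measured data):

```python
import math

def signal_features(samples):
    """Mean, standard deviation and skewness of an impedance signal --
    the three inputs used for the flow-regime classifier."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    std = math.sqrt(var)
    # Fisher skewness; zero for a perfectly symmetric signal.
    skew = sum((x - mean) ** 3 for x in samples) / (n * std ** 3) if std else 0.0
    return mean, std, skew

# Symmetric square-wave-like void signal (idealized slug-flow oscillation):
sig = [0.1, 0.9] * 50
m, s, g = signal_features(sig)
```

    Each measured signal is thus reduced to a three-component feature vector before being handed to the self-organizing network for grouping.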

  9. Flow-Based Network Analysis of the Caenorhabditis elegans Connectome.

    Science.gov (United States)

    Bacik, Karol A; Schaub, Michael T; Beguerisse-Díaz, Mariano; Billeh, Yazan N; Barahona, Mauricio

    2016-08-01

    We exploit flow propagation on the directed neuronal network of the nematode C. elegans to reveal dynamically relevant features of its connectome. We find flow-based groupings of neurons at different levels of granularity, which we relate to functional and anatomical constituents of its nervous system. A systematic in silico evaluation of the full set of single and double neuron ablations is used to identify deletions that induce the most severe disruptions of the multi-resolution flow structure. Such ablations are linked to functionally relevant neurons, and suggest potential candidates for further in vivo investigation. In addition, we use the directional patterns of incoming and outgoing network flows at all scales to identify flow profiles for the neurons in the connectome, without pre-imposing a priori categories. The four flow roles identified are linked to signal propagation motivated by biological input-response scenarios.

  10. Real-Time and Resilient Intrusion Detection: A Flow-Based Approach

    NARCIS (Netherlands)

    Hofstede, R.J.; Pras, Aiko

    Due to the demanding performance requirements of packet-based monitoring solutions on network equipment, flow-based intrusion detection systems will play an increasingly important role in current high-speed networks. The required technologies are already available and widely deployed: NetFlow and

  11. A multi-pumping flow-based procedure with improved sensitivity for the spectrophotometric determination of acid-dissociable cyanide in natural waters.

    Science.gov (United States)

    Frizzarin, Rejane M; Rocha, Fábio R P

    2013-01-03

    An analytical procedure with improved sensitivity was developed for cyanide determination in natural waters, exploiting the reaction with the complex of Cu(I) with 2,2'-biquinoline 4,4'-dicarboxylic acid (BCA). The flow system was based on the multi-pumping approach, and long pathlength spectrophotometry with a flow cell based on a Teflon AF 2400® liquid core waveguide was exploited to increase sensitivity. A linear response was achieved from 5 to 200 μg L(-1), with a coefficient of variation of 1.5% (n=10). The detection limit and the sampling rate were 2 μg L(-1) (99.7% confidence level) and 22 h(-1), respectively. Per determination, 48 ng of Cu(II), 5 μg of ascorbic acid and 0.9 μg of BCA were consumed. As much as 100 mg L(-1) thiocyanate, nitrite or sulfite did not affect cyanide determination. Sulfide did not interfere at concentrations lower than 40 and 200 μg L(-1) before or after sample pretreatment with hydrogen peroxide, respectively. The results for natural water samples agreed with those obtained by a fluorimetric flow-based procedure at the 95% confidence level. The proposed procedure is then a reliable, fast and environmentally friendly alternative for cyanide determination in natural waters. Copyright © 2012 Elsevier B.V. All rights reserved.
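
    The sensitivity gain from the liquid core waveguide follows directly from the Beer-Lambert law, A = εbc: absorbance grows linearly with the optical pathlength b. A rough sketch with illustrative numbers (the molar absorptivity and pathlengths below are assumptions, not values from the paper):

```python
# Beer-Lambert estimate of the sensitivity gain of a liquid-core-waveguide
# flow cell over a conventional cuvette. All numbers are illustrative.
def absorbance(eps_l_per_mol_cm, path_cm, conc_mol_per_l):
    return eps_l_per_mol_cm * path_cm * conc_mol_per_l

eps = 7.0e4          # assumed molar absorptivity of the Cu(I)-BCA complex
conc = 1.0e-7        # mol/L, trace-level analyte
a_conventional = absorbance(eps, 1.0, conc)    # 1 cm cuvette
a_waveguide = absorbance(eps, 100.0, conc)     # ~1 m waveguide cell
gain = a_waveguide / a_conventional            # sensitivity scales with path
```

    The hundredfold-longer path gives a proportionally larger absorbance for the same analyte concentration, which is what pushes the detection limit down to the low μg L(-1) range.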

  12. Exploiting impedance shaping approaches to overcome force overshoots in delicate interaction tasks

    Directory of Open Access Journals (Sweden)

    Loris Roveda

    2016-09-01

    Full Text Available The aim of the presented article is to overcome the force overshoot issue in impedance-based force tracking applications. Nowadays, light-weight manipulators are involved in highly accurate force control applications (such as polishing tasks), where the force overshoot issue is critical (i.e. damaging the component and causing production waste), exploiting the impedance control. Two main force tracking impedance control approaches are described in the literature: (a) set-point deformation and (b) variable stiffness approaches. However, no contributions are directly related to the force overshoot issue. The presented article extends both methodologies to analytically achieve force overshoot avoidance in interaction tasks, based on the on-line estimation of the interacting environment stiffness (available through an EKF). Both proposed control algorithms achieve a linear closed-loop dynamics for the coupled robot-environment system. Therefore, control gains can be analytically calculated on-line to achieve an over-damped closed-loop dynamics of the controlled coupled system. The control strategies have been validated in experiments involving a KUKA LWR 4+. A probing task has been performed, representative of many industrial tasks (e.g. assembly tasks), in which a main force task direction is defined.
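
    The over-damped, overshoot-free behaviour the article aims for can be reproduced in a one-degree-of-freedom contact sketch. All numbers are assumptions (the fixed environment stiffness stands in for the EKF estimate, and the set-point deformation is chosen for zero steady-state force error); this is not the authors' controller:

```python
import math

# Assumed 1-DOF contact model: impedance-controlled robot (m, d, k)
# pressing on an environment of stiffness k_e.
m, k = 1.0, 400.0
k_e = 1000.0                        # stand-in for the EKF-estimated stiffness
d = 2.2 * math.sqrt(m * (k + k_e))  # above critical damping -> over-damped
f_d = 10.0                          # desired contact force [N]
x_r = f_d * (k + k_e) / (k * k_e)   # set-point deformation: zero steady error

x = v = 0.0
dt = 1e-4
forces = []
for _ in range(200000):             # 20 s of simulated contact
    a = (k * (x_r - x) - d * v - k_e * x) / m   # coupled closed-loop dynamics
    v += a * dt                     # semi-implicit Euler integration
    x += v * dt
    forces.append(k_e * x)          # contact force exerted on the environment
```

    Because the coupled dynamics is linear and over-damped, the contact force rises monotonically to the 10 N set-point with no overshoot; with a merely critically-damped or under-damped gain choice the same loop would overshoot and potentially damage the part.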

  13. Rational Exploitation and Utilizing of Groundwater in Jiangsu Coastal Area

    Science.gov (United States)

    Kang, B.; Lin, X.

    2017-12-01

    Jiangsu coastal area is located on the southeast coast of China and is a new industrial base and an important coastal land-resources development zone of China. In areas with strong human exploitation activities, regional groundwater evolution is obviously affected by human activities. In order to fundamentally solve the environmental geological problems caused by groundwater exploitation, we must find out the forming conditions of the regional groundwater hydrodynamic field, and the impact of human activities on groundwater hydrodynamic field evolution and hydrogeochemical evolution. Based on these results, scientific management and reasonable exploitation of the regional groundwater resources can be provided for utilization. Taking the coastal area of Jiangsu as the research area, we investigate and analyze the regional hydrogeological conditions. The numerical simulation model of groundwater flow was established according to hydrodynamic, chemical and isotopic methods, the conditions of water flow, and the influence of the hydrodynamic field on the hydrochemical field. We predict the evolution of regional groundwater dynamics under the influence of human activities and climate change, and evaluate the influence of groundwater dynamic field evolution on the environmental geological problems caused by groundwater exploitation under various conditions. We reach the following conclusions: three optimal groundwater exploitation schemes were established, with groundwater salinization taken as the primary control condition. A surrogate model built with the BP network method was proposed to relate groundwater exploitation to water level changes, and a genetic algorithm was then used to solve the optimization problem. The three optimal exploitation schemes were submitted to the local water resource management. The first scheme was used to solve the groundwater salinization problem. The second scheme focused on dual water supply. The third scheme concerned emergency water

  14. New approach to exploit optimally the PV array output energy by maximizing the discharge rate of a directly-coupled photovoltaic water pumping system (DC/PVPS)

    International Nuclear Information System (INIS)

    Boutelhig, Azzedine; Hadj Arab, Amar; Hanini, Salah

    2016-01-01

    Highlights: • Mismatches on a designed d-c PV pumping system have been highlighted. • A new approach predicting the maximal discharge has been developed. • The approach has been discussed versus its linearity coefficient. • The approach effectiveness has been investigated and approved. • Theoretical and experimental obtained values have been compared and approved. - Abstract: A directly-coupled photovoltaic water pumping system (DC/PVPS) is generally designed by considering the worst-month conditions of lowest daylight hours, the maximum monthly daily required water volume, and a tank to store the excess water. When the hydraulic storage (water tank) is absent or under-dimensioned, the extra amount of pumped water is lost or not reasonably used when the system is operated over the full daylight hours. Besides, the extra amount of energy that might be produced by the PV generator is not exploited when the system is operated only during the specified period needed to satisfy the demand. Beyond an accurate design satisfying the end-user, a new approach has been developed with the goal of maximally exploiting the PV array energy production by maximizing the discharge rate of the system. The methodology consists of bringing the demanded energy as close as possible to the supplied energy over a full operating day. Based on the demand/supply energy condition, the approach has been developed upon the PV array and pump performance models. The resulting approach predicts the maximum delivery capacity of the system in monthly daily water volumes versus the monthly daily averages of solar irradiation, previously recorded. Its efficacy has been investigated and discussed according to the estimated and experimental values of its linearity coefficient, following the characterization tests of a designed system carried out at our pumping test facility in Ghardaia (Algeria). The new theoretically and experimentally obtained flow-rates fit well, except

  15. Approaches in anomaly-based network intrusion detection systems

    NARCIS (Netherlands)

    Bolzoni, D.; Etalle, S.; Di Pietro, R.; Mancini, L.V.

    2008-01-01

    Anomaly-based network intrusion detection systems (NIDSs) can take into consideration packet headers, the payload, or a combination of both. We argue that payload-based approaches are becoming the most effective methods to detect attacks. Nowadays, attacks aim mainly to exploit vulnerabilities at

  16. Approaches in Anomaly-based Network Intrusion Detection Systems

    NARCIS (Netherlands)

    Bolzoni, D.; Etalle, Sandro

    Anomaly-based network intrusion detection systems (NIDSs) can take into consideration packet headers, the payload, or a combination of both. We argue that payload-based approaches are becoming the most effective methods to detect attacks. Nowadays, attacks aim mainly to exploit vulnerabilities at

  17. Geographically distributed Batch System as a Service: the INDIGO-DataCloud approach exploiting HTCondor

    Science.gov (United States)

    Aiftimiei, D. C.; Antonacci, M.; Bagnasco, S.; Boccali, T.; Bucchi, R.; Caballer, M.; Costantini, A.; Donvito, G.; Gaido, L.; Italiano, A.; Michelotto, D.; Panella, M.; Salomoni, D.; Vallero, S.

    2017-10-01

    One of the challenges a scientific computing center has to face is to keep delivering well-consolidated computational frameworks (i.e. the batch computing farm), while conforming to modern computing paradigms. The aim is to ease system administration at all levels (from hardware to applications) and to provide a smooth end-user experience. Within the INDIGO-DataCloud project, we adopt two different approaches to implement a PaaS-level, on-demand Batch Farm Service based on HTCondor and Mesos. In the first approach, described in this paper, the various HTCondor daemons are packaged inside pre-configured Docker images and deployed as Long Running Services through Marathon, profiting from its health checks and failover capabilities. In the second approach, we are going to implement an ad-hoc HTCondor framework for Mesos. Container-to-container communication and isolation have been addressed by exploring a solution based on overlay networks (based on the Calico Project). Finally, we have studied the possibility of deploying an HTCondor cluster that spans different sites, exploiting the Condor Connection Broker component, which allows communication across a private network boundary or firewall, as in the case of multi-site deployments. In this paper, we describe and motivate our implementation choices and show the results of the first tests performed.

  18. A bi-population based scheme for an explicit exploration/exploitation trade-off in dynamic environments

    Science.gov (United States)

    Ben-Romdhane, Hajer; Krichen, Saoussen; Alba, Enrique

    2017-05-01

    Optimisation in changing environments is a challenging research topic since many real-world problems are inherently dynamic. Inspired by the natural evolution process, evolutionary algorithms (EAs) are among the most successful and promising approaches that have addressed dynamic optimisation problems. However, managing the exploration/exploitation trade-off in EAs is still a prevalent issue, and this is due to the difficulties associated with the control and measurement of such behaviour. The proposal of this paper is to achieve a balance between exploration and exploitation in an explicit manner. The idea is to use two equally sized populations: the first one performs exploration while the second one is responsible for exploitation. These tasks are alternated from one generation to the next in a regular pattern, so as to obtain a balanced search engine. Besides, we reinforce the ability of our algorithm to quickly adapt after changes by means of a memory of past solutions. Such a combination aims to restrain premature convergence, to broaden the search area, and to speed up the optimisation. We show through computational experiments, based on a series of dynamic problems and many performance measures, that our approach improves the performance of EAs and outperforms competing algorithms.
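
    The alternating bi-population idea can be sketched on a toy static problem (the sphere function). This is a simplification under assumed parameters, not the authors' algorithm, but it shows the two equally sized populations, the role alternation each generation, and the memory of past solutions:

```python
import random

def sphere(x):
    # Toy minimisation objective standing in for a dynamic benchmark.
    return sum(v * v for v in x)

def evolve(pop, sigma, rng):
    """One elitist generation of Gaussian-mutation evolution."""
    children = [[v + rng.gauss(0, sigma) for v in ind] for ind in pop]
    return sorted(pop + children, key=sphere)[:len(pop)]

rng = random.Random(0)
dim, size = 5, 15
explore = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(size)]
exploit = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(size)]
memory = []                                    # memory of past best solutions

for gen in range(60):
    explore = evolve(explore, sigma=1.0, rng=rng)   # broad search
    exploit = evolve(exploit, sigma=0.05, rng=rng)  # local refinement
    memory.append(min(explore + exploit, key=sphere))
    # Alternate the roles in a regular pattern, as in the bi-population scheme.
    explore, exploit = exploit, explore

best = min(memory, key=sphere)
```

    In a dynamic setting, the memory would additionally be reinjected after a detected change so the search can quickly re-adapt.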

  19. An Approach to Predict Debris Flow Average Velocity

    Directory of Open Access Journals (Sweden)

    Chen Cao

    2017-03-01

    Full Text Available Debris flow is one of the major threats for the sustainability of environmental and social development. The velocity directly determines the impact on the vulnerability. This study focuses on an approach using a radial basis function (RBF) neural network and the gravitational search algorithm (GSA) for predicting debris flow velocity. A total of 50 debris flow events were investigated in the Jiangjia gully. These data were used for building the GSA-based RBF approach (GSA-RBF). Eighty percent (40 groups) of the measured data were selected randomly as the training database. The other 20% (10 groups) of data were used as testing data. Finally, the approach was applied to predict the velocities of six debris flow gullies in the Wudongde Dam site area, where environmental conditions were similar to the Jiangjia gully. The modified Dongchuan empirical equation (MDEE) and the pulled particle analysis of debris flow (PPA) approach were used for comparison and validation. The results showed that: (i) the GSA-RBF predicted debris flow velocity values are very close to the measured values, performing better than the RBF neural network alone; (ii) the GSA-RBF results and the MDEE results are similar in predicting the Jiangjia gully debris flow velocities, and GSA-RBF performs better; (iii) in the study area, the GSA-RBF results are validated as reliable; and (iv) we could consider more variables in predicting the debris flow velocity by using GSA-RBF on the basis of measured data in other areas, which is more applicable. Because the GSA-RBF approach is more accurate, both the numerical simulation and the empirical equation can be taken into consideration when constructing debris flow mitigation works. They could be complementary and mutually verified.

  20. Characteristics-based modelling of flow problems

    International Nuclear Information System (INIS)

    Saarinen, M.

    1994-02-01

    The method of characteristics is an exact way to proceed to the solution of hyperbolic partial differential equations. The numerical solutions, however, are obtained in the fixed computational grid where interpolations of values between the mesh points cause numerical errors. The Piecewise Linear Interpolation Method, PLIM, the utilization of which is based on the method of characteristics, has been developed to overcome these deficiencies. The thesis concentrates on the computer simulation of the two-phase flow. The main topics studied are: (1) the PLIM method has been applied to study the validity of the numerical scheme through solving various flow problems to achieve knowledge for the further development of the method, (2) the mathematical and physical validity and applicability of the two-phase flow equations based on the SFAV (Separation of the two-phase Flow According to Velocities) approach has been studied, and (3) The SFAV approach has been further developed for particular cases such as stratified horizontal two-phase flow. (63 refs., 4 figs.)

  1. Debris flow susceptibility assessment based on an empirical approach in the central region of South Korea

    Science.gov (United States)

    Kang, Sinhang; Lee, Seung-Rae

    2018-05-01

    Many debris flow spreading analyses have been conducted during recent decades to prevent damage from debris flows. An empirical approach, which has been used in various studies on debris flow spreading, has advantages such as simple data acquisition and good applicability to large areas. In this study, a GIS-based empirical model that was developed at the University of Lausanne (Switzerland) is used to assess debris flow susceptibility. Study sites are classified based on the type of soil texture or geological conditions, which can indirectly account for geotechnical or rheological properties, to supplement the weaknesses of Flow-R, which neglects local controlling factors. The mean travel angle for each class is calculated from a debris flow inventory map. The debris flow susceptibility is assessed based on changes in the flow-direction algorithm and an inertial function, at a 5-m DEM resolution. A simplified friction-limited model was applied to the runout distance analysis by using the appropriate travel angle for the corresponding class, with a velocity limit of 28 m/s. The most appropriate algorithm combinations, those deriving the highest average of efficiency and sensitivity for each class, are finally determined by applying a confusion matrix with the efficiency and the sensitivity to the results of the susceptibility assessment. The proposed schemes can be useful for debris flow susceptibility assessment both in the study area and in the central region of Korea, which has environmental factors such as geological conditions, topography and rainfall characteristics similar to those of the study area.
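
    The simplified friction-limited runout model can be sketched as an energy-line computation: the flow keeps propagating while its kinetic energy, gained from the elevation drop and reduced by a friction loss set by the travel angle, stays positive, with the velocity capped at 28 m/s. The path profile and the 11° travel angle below are hypothetical, not values from the study:

```python
import math

G = 9.81
V_MAX = 28.0           # velocity limit used in the study [m/s]

def runout(profile, travel_angle_deg):
    """Friction-limited runout along a path of (dx, dh) segments:
    dx = horizontal step [m], dh = elevation drop over that step [m]."""
    tan_phi = math.tan(math.radians(travel_angle_deg))
    v2, dist = 0.0, 0.0
    for dx, dh in profile:
        v2 += 2.0 * G * (dh - tan_phi * dx)   # energy gained minus friction loss
        if v2 <= 0.0:
            break                              # the flow stops: runout reached
        v2 = min(v2, V_MAX ** 2)               # apply the velocity cap
        dist += dx
    return dist, math.sqrt(max(v2, 0.0))

# Hypothetical path: 300 m of steep channel (40% slope), then a flat fan.
profile = [(10.0, 4.0)] * 30 + [(10.0, 0.0)] * 100
dist, v = runout(profile, travel_angle_deg=11.0)
```

    A smaller travel angle (weaker friction) lengthens the runout, which is why each site class gets its own mean travel angle from the inventory map.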

  2. An effective PSO-based memetic algorithm for flow shop scheduling.

    Science.gov (United States)

    Liu, Bo; Wang, Ling; Jin, Yi-Hui

    2007-02-01

    This paper proposes an effective particle swarm optimization (PSO)-based memetic algorithm (MA) for the permutation flow shop scheduling problem (PFSSP) with the objective to minimize the maximum completion time, which is a typical non-deterministic polynomial-time (NP) hard combinatorial optimization problem. In the proposed PSO-based MA (PSOMA), both PSO-based searching operators and some special local searching operators are designed to balance the exploration and exploitation abilities. In particular, the PSOMA applies the evolutionary searching mechanism of PSO, which is characterized by individual improvement, population cooperation, and competition to effectively perform exploration. On the other hand, the PSOMA utilizes several adaptive local searches to perform exploitation. First, to make PSO suitable for solving PFSSP, a ranked-order value rule based on random key representation is presented to convert the continuous position values of particles to job permutations. Second, to generate an initial swarm with certain quality and diversity, the famous Nawaz-Enscore-Ham (NEH) heuristic is incorporated into the initialization of population. Third, to balance the exploration and exploitation abilities, after the standard PSO-based searching operation, a new local search technique named NEH_1 insertion is probabilistically applied to some good particles selected by using a roulette wheel mechanism with a specified probability. Fourth, to enrich the searching behaviors and to avoid premature convergence, a simulated annealing (SA)-based local search with multiple different neighborhoods is designed and incorporated into the PSOMA. Meanwhile, an effective adaptive meta-Lamarckian learning strategy is employed to decide which neighborhood to be used in SA-based local search. Finally, to further enhance the exploitation ability, a pairwise-based local search is applied after the SA-based search. 
Simulation results based on benchmarks demonstrate the effectiveness
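
    The ranked-order-value (ROV) encoding step described above can be sketched in a few lines: a particle's continuous position vector is mapped to a job permutation by ranking its components, and the permutation is then evaluated with the standard permutation-flow-shop makespan recursion. The processing-time matrix is an invented toy instance:

```python
def rov(position):
    """Ranked-order-value rule: map a continuous position vector to a job
    permutation by ranking components in ascending order."""
    order = sorted(range(len(position)), key=lambda j: position[j])
    perm = [0] * len(position)
    for rank, j in enumerate(order):
        perm[j] = rank + 1          # smallest value receives job 1, and so on
    return perm

def makespan(perm, ptimes):
    """Permutation flow shop maximum completion time (Cmax) for order `perm`.
    ptimes[j][k] = processing time of job j+1 on machine k."""
    m = len(ptimes[0])
    completion = [0.0] * m
    for job in perm:
        completion[0] += ptimes[job - 1][0]
        for k in range(1, m):
            completion[k] = max(completion[k], completion[k - 1]) + ptimes[job - 1][k]
    return completion[-1]

# Continuous particle position -> job permutation, then evaluate Cmax.
perm = rov([1.8, 0.7, 2.5, 0.2])
times = [[3, 2], [1, 4], [2, 2], [4, 1]]   # toy instance: 4 jobs x 2 machines
cmax = makespan(perm, times)
```

    In the PSOMA itself, this decoding runs inside the fitness evaluation so that standard continuous PSO updates can drive a discrete scheduling search.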

  3. A Potential Approach for Low Flow Selection in Water Resource Supply and Management

    Science.gov (United States)

    Ying Ouyang

    2012-01-01

    Low flow selections are essential to water resource management, water supply planning, and watershed ecosystem restoration. In this study, a new approach, namely the frequent-low (FL) approach (or frequent-low index), was developed based on the minimum frequent-low flow or level used in the minimum flows and/or levels program in northeast Florida, USA. This FL approach was...

  4. Learning Metasploit exploitation and development

    CERN Document Server

    Balapure, Aditya

    2013-01-01

    A practical, hands-on tutorial with step-by-step instructions. The book follows a smooth and easy-to-follow tutorial approach, covering the essentials and then showing the readers how to write more sophisticated exploits. This book targets exploit developers, vulnerability analysts and researchers, network administrators, and ethical hackers looking to gain advanced knowledge in exploit development and vulnerability identification. The primary goal is to take readers wishing to get into more advanced exploit discovery to the next level. Prior experience exploiting basic st

  5. A computational fluid dynamics and effectiveness-NTU based co-simulation approach for flow mal-distribution analysis in microchannel heat exchanger headers

    International Nuclear Information System (INIS)

    Huang, Long; Lee, Moon Soo; Saleh, Khaled; Aute, Vikrant; Radermacher, Reinhard

    2014-01-01

    Refrigerant flow mal-distribution is a practical challenge in most microchannel heat exchanger (MCHX) applications. Geometry design, uneven heat transfer and pressure drop in the different microchannel tubes are the three main causes of flow mal-distribution. To efficiently and accurately account for these three effects, a new MCHX co-simulation approach is proposed in this paper. The proposed approach combines a detailed header simulation based on computational fluid dynamics (CFD) with a robust effectiveness-based finite volume tube-side heat transfer and refrigerant flow modeling tool. The co-simulation concept is demonstrated on a ten-tube MCHX case study. The gravity effect and the uneven airflow effect were numerically analyzed using both water and condensing R134a as the working fluids. The approach was validated against experimental data for an automotive R134a condenser. The inlet header was cut open after the experimental data had been collected, and the detailed header geometry was reproduced using the proposed CFD header model. Good prediction accuracy was achieved compared to the experimental data. The presented co-simulation approach is capable of predicting detailed refrigerant flow behavior while accurately predicting the overall heat exchanger performance. - Highlights: •MCHX header flow distribution is analyzed by a co-simulation approach. •The proposed method is capable of simulating both single-phase and two-phase flow. •An actual header geometry is reproduced in the CFD header model. •The modeling work is experimentally validated with good accuracy. •Gravity effect and air side mal-distribution are accounted for
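
    For a tube in which the refrigerant changes phase, the effectiveness-based tube-side model typically reduces to the ε-NTU limit ε = 1 − exp(−NTU), with the air stream carrying the minimum capacity rate. A minimal sketch with illustrative numbers (the UA value, air flow rate and temperatures are assumptions, not data from the paper):

```python
import math

def effectiveness_phase_change(ntu):
    """Effectiveness of a tube element whose refrigerant changes phase
    (capacity-rate ratio C_r -> 0): eps = 1 - exp(-NTU)."""
    return 1.0 - math.exp(-ntu)

def tube_heat_duty(ua, m_dot_air, cp_air, t_air_in, t_ref):
    """Heat rejected by the condensing refrigerant in one tube [W]."""
    c_min = m_dot_air * cp_air          # air side carries C_min here
    eps = effectiveness_phase_change(ua / c_min)
    return eps * c_min * (t_ref - t_air_in)

# Illustrative numbers for a single MCHX tube (assumed, not from the paper):
q = tube_heat_duty(ua=25.0, m_dot_air=0.01, cp_air=1006.0, t_air_in=35.0, t_ref=45.0)
```

    Evaluating this per tube with the CFD-derived refrigerant distribution is what lets the co-simulation capture how mal-distribution degrades overall capacity.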

  6. Correlation between 2D and 3D flow curve modelling of DP steels using a microstructure-based RVE approach

    International Nuclear Information System (INIS)

    Ramazani, A.; Mukherjee, K.; Quade, H.; Prahl, U.; Bleck, W.

    2013-01-01

    A microstructure-based approach by means of representative volume elements (RVEs) is employed to evaluate the flow curve of DP steels using virtual tensile tests. Microstructures with different martensite fractions and morphologies are studied in two- and three-dimensional approaches. Micro sections of DP microstructures with various amounts of martensite have been converted to 2D RVEs, while 3D RVEs were constructed statistically with randomly distributed phases. A dislocation-based model is used to describe the flow curve of each ferrite and martensite phase separately as a function of carbon partitioning and microstructural features. Numerical tensile tests of RVE were carried out using the ABAQUS/Standard code to predict the flow behaviour of DP steels. It is observed that 2D plane strain modelling gives an underpredicted flow curve for DP steels, while the 3D modelling gives a quantitatively reasonable description of the flow curve in comparison to the experimental data. In this work, a von Mises stress correlation factor σ_3D/σ_2D has been identified to compare the predicted flow curves of these two dimensionalities, showing a third-order polynomial relation with respect to martensite fraction and a second-order polynomial relation with respect to equivalent plastic strain, respectively. The quantification of this polynomial correlation factor is performed based on laboratory-annealed DP600 chemistry with varying martensite content, and it is validated for industrially produced DP qualities with various chemistries, strength levels and martensite fractions.

  7. Optimization of information content in a mass spectrometry based flow-chemistry system by investigating different ionization approaches.

    Science.gov (United States)

    Martha, Cornelius T; Hoogendoorn, Jan-Carel; Irth, Hubertus; Niessen, Wilfried M A

    2011-05-15

    Current development in catalyst discovery includes combinatorial synthesis methods for the rapid generation of compound libraries combined with high-throughput performance-screening methods to determine the associated activities. Of these novel methodologies, mass spectrometry (MS) based flow chemistry methods are especially attractive due to the ability to combine sensitive detection of the formed reaction product with identification of the introduced catalyst complexes. Recently, such a mass spectrometry based continuous-flow reaction detection system was utilized to screen silver-adducted ferrocenyl bidentate catalyst complexes for activity in a multicomponent synthesis of a substituted 2-imidazoline. Here, we determine the merits of different ionization approaches by studying the combination of sensitive detection of product formation in the continuous-flow system with the ability to simultaneously characterize the introduced [ferrocenyl bidentate + Ag]+ catalyst complexes. To this end, we study the ionization characteristics of electrospray ionization (ESI), atmospheric-pressure chemical ionization (APCI), no-discharge APCI, dual ESI/APCI, and dual APCI/no-discharge APCI. Finally, we investigated the application potential of the different ionization approaches by examining the responses of the ferrocenyl bidentate catalyst complexes in different solvents. Copyright © 2011 Elsevier B.V. All rights reserved.

  8. Exploitation and disadvantage

    NARCIS (Netherlands)

    Ferguson, B.

    2016-01-01

    According to some accounts of exploitation, most notably Ruth Sample's (2003) degradation-based account and Robert Goodin's (1987) vulnerability-based account, exploitation occurs when an advantaged party fails to constrain their advantage in light of another's disadvantage, regardless of the cause

  9. A hybrid Bayesian network approach for trade-offs between environmental flows and agricultural water using dynamic discretization

    Science.gov (United States)

    Xue, Jie; Gui, Dongwei; Lei, Jiaqiang; Sun, Huaiwei; Zeng, Fanjiang; Feng, Xinlong

    2017-12-01

    Agriculture and the eco-environment are increasingly competing for water. The extension of intensive farmland to ensure food security has resulted in excessive water exploitation by agriculture, which in turn has led to a lack of water supply in natural ecosystems. This paper proposes a trade-off framework to coordinate the water-use conflict between agriculture and the eco-environment, based on economic compensation for irrigation stakeholders. A hybrid Bayesian network (HBN) is developed to implement the framework, including: (a) agricultural water shortage assessments after meeting environmental flows; (b) water-use trade-off analysis between agricultural irrigation and environmental flows using the HBN; and (c) quantification of the agricultural economic compensation for different irrigation stakeholders. The constructed HBN is computed by dynamic discretization, a more robust and accurate propagation algorithm than general static discretization. A case study of the Qira oasis area in Northwest China demonstrates that the water trade-off based on economic compensation depends on the available water supply and on environmental flows at different levels. Agricultural irrigation water for grain crops should be guaranteed preferentially to ensure food security, even though the economic compensation for coordinating water use is higher for other cash crops' irrigation. Upgrading water-saving engineering and adopting drip irrigation technology in agricultural facilities after satisfying environmental flows would greatly relieve agricultural water shortages and reduce the economic compensation owed to different irrigation stakeholders. The approach in this study can easily be applied in water-stressed areas worldwide to deal with water competition.

  10. Identification and Prediction of Large Pedestrian Flow in Urban Areas Based on a Hybrid Detection Approach

    Directory of Open Access Journals (Sweden)

    Kaisheng Zhang

    2016-12-01

    Recently, population density has grown rapidly with accelerating urbanization, and overcrowded situations are more likely to occur in populous urban areas, increasing the risk of accidents. This paper proposes a synthetic approach to recognize and identify large pedestrian flows. In particular, a hybrid pedestrian flow detection model was constructed by analyzing real data from major mobile phone operators in China, including information from smartphones and base stations (BS). Within the hybrid model, the Log Distance Path Loss (LDPL) model was used to estimate pedestrian density from raw network data, and information was retrieved with a Gaussian Process (GP) through supervised learning. Temporal-spatial prediction of the pedestrian data was carried out with Machine Learning (ML) approaches. Finally, a case study of a real Central Business District (CBD) scenario in Shanghai, China, using the records of millions of cell phone users was conducted. The results showed that the new approach significantly increases the utility and capacity of the mobile network. A more reasonable overcrowding detection and alert system can be developed to improve safety in subway lines and other hotspot landmark areas, such as the Bund, People's Square or Disneyland, where a large passenger flow generally exists.
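
    The Log Distance Path Loss model named above has the standard form PL(d) = PL(d0) + 10·n·log10(d/d0). A minimal sketch follows; the reference loss PL(d0) and path-loss exponent n below are illustrative values, not the paper's calibrated parameters.

```python
import math

# Log-distance path loss (LDPL): forward model and its inversion, which is the
# step that lets signal-strength measurements be turned into distance (and
# hence density) estimates. Parameter values are illustrative only.

def path_loss_db(d, d0=1.0, pl_d0=40.0, n=3.0):
    """Path loss in dB at distance d (metres) under the LDPL model."""
    return pl_d0 + 10.0 * n * math.log10(d / d0)

def distance_from_loss(pl, d0=1.0, pl_d0=40.0, n=3.0):
    """Invert the LDPL model to recover distance from a measured loss."""
    return d0 * 10.0 ** ((pl - pl_d0) / (10.0 * n))

loss = path_loss_db(100.0)                       # 40 + 10*3*log10(100) = 100 dB
dist = distance_from_loss(loss)                  # recovers 100 m
```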

  11. Energetic Variational Approach to Multi-Component Fluid Flows

    Science.gov (United States)

    Kirshtein, Arkadz; Liu, Chun; Brannick, James

    2017-11-01

    In this talk I will introduce the systematic energetic variational approach for dissipative systems, applied to multi-component fluid flows. These variational approaches are motivated by the seminal works of Rayleigh and Onsager. The advantage of this approach is that we have to postulate only the energy law and some kinematic relations based on fundamental physical principles; the method then gives a clear, quick and consistent way to derive the PDE system. I will compare different approaches to three-component flows using the diffusive interface method and discuss their advantages and disadvantages. The diffusive interface method is an approach for modeling interactions among complex substances; the main idea behind this method is to introduce phase field labeling functions in order to model the contact line by a smooth change from one type of material to another. The work of Arkadz Kirshtein and Chun Liu is partially supported by NSF Grants DMS-141200 and DMS-1216938.

  12. International Trade Modelling Using Open Flow Networks: A Flow-Distance Based Analysis.

    Science.gov (United States)

    Shen, Bin; Zhang, Jiang; Li, Yixiao; Zheng, Qiuhua; Li, Xingsen

    2015-01-01

    This paper models and analyzes international trade flows using open flow networks (OFNs) with flow-distance approaches, which provide a novel perspective and effective tools for the study of international trade. We discuss the establishment of OFNs of international trade from two coupled viewpoints: that of trading commodity flow and that of money flow. Based on the novel model with flow distance approaches, meaningful insights are gained. First, by introducing the concepts of trade trophic levels and niches, countries' roles and positions in the global supply chains (or value-added chains) can be evaluated quantitatively. We find that the distributions of trading "trophic levels" show a similar clustering pattern for different types of commodities, and we summarize some regularities between the money flow and commodity flow viewpoints. Second, we find that active and competitive countries trade a wide spectrum of products, while inactive and underdeveloped countries trade a limited variety of products. In addition, some atypical countries import many types of goods that the vast majority of countries do not need to import. Third, harmonic node centrality is proposed and we find the phenomenon of centrality stratification. All the results illustrate the usefulness of the model of OFNs with its network approaches for investigating international trade flows.
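
    The trade "trophic level" idea above assigns each node a level equal to one plus the inflow-weighted mean level of its suppliers, with the external environment at level zero. A toy sketch on a three-node chain (the network and flows are invented, not trade data):

```python
# Trophic levels on a small open flow network via fixed-point iteration.
# flows: dict (src, dst) -> flow; the external source is the node "env".

def trophic_levels(flows, n, iters=200):
    level = {i: 1.0 for i in range(n)}
    for _ in range(iters):
        new = {}
        for i in range(n):
            inflow = {j: f for (j, d), f in flows.items() if d == i}
            total = sum(inflow.values())
            if total == 0:
                new[i] = 1.0  # fed only by the environment
            else:
                # environment contributes level 0; other suppliers their level
                new[i] = 1.0 + sum(f * (0.0 if j == "env" else level[j])
                                   for j, f in inflow.items()) / total
        level = new
    return level

# Chain env -> 0 -> 1 -> 2: levels converge to 1, 2, 3
lv = trophic_levels({("env", 0): 5.0, (0, 1): 5.0, (1, 2): 5.0}, 3)
```

    On real trade data the flow matrix is dense and the same relation is usually solved as a linear system rather than iterated, but the definition is identical.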

  13. Hacking the art of exploitation

    CERN Document Server

    Erickson, Jon

    2003-01-01

    A comprehensive introduction to the techniques of exploitation and creative problem-solving methods commonly referred to as "hacking," Hacking: The Art of Exploitation is for both technical and non-technical people who are interested in computer security. It shows how hackers exploit programs and write exploits, instead of just how to run other people's exploits. Unlike many so-called hacking books, this book explains the technical aspects of hacking, including stack-based overflows, heap-based overflows, string exploits, return-into-libc, shellcode, and cryptographic attacks on 802.11b.

  14. Two-Phase Immiscible Flows in Porous Media: The Mesocopic Maxwell–Stefan Approach

    DEFF Research Database (Denmark)

    Shapiro, Alexander

    2015-01-01

    We develop an approach to coupling between viscous flows of the two phases in porous media, based on the Maxwell–Stefan formalism. Two versions of the formalism are presented: the general form, and the form based on the interaction of the flowing phases with the interface between them. The last approach is supported by the description of the flow on the mesoscopic level, as coupled boundary problems for the Brinkmann or Stokes equations. It becomes possible, in some simplifying geometric assumptions, to derive exact expressions for the phenomenological coefficients in the Maxwell–Stefan transport ... of mixing between the flowing phases. Comparison to the available experimental data on the steady-state two-phase relative permeabilities is presented.

  15. Robust and Accurate Image-Based Georeferencing Exploiting Relative Orientation Constraints

    Science.gov (United States)

    Cavegn, S.; Blaser, S.; Nebiker, S.; Haala, N.

    2018-05-01

    Urban environments with extended areas of poor GNSS coverage as well as indoor spaces that often rely on real-time SLAM algorithms for camera pose estimation require sophisticated georeferencing in order to fulfill our high requirements of a few centimeters for absolute 3D point measurement accuracies. Since we focus on image-based mobile mapping, we extended the structure-from-motion pipeline COLMAP with georeferencing capabilities by integrating exterior orientation parameters from direct sensor orientation or SLAM as well as ground control points into bundle adjustment. Furthermore, we exploit constraints for relative orientation parameters among all cameras in bundle adjustment, which leads to a significant robustness and accuracy increase especially by incorporating highly redundant multi-view image sequences. We evaluated our integrated georeferencing approach on two data sets, one captured outdoors by a vehicle-based multi-stereo mobile mapping system and the other captured indoors by a portable panoramic mobile mapping system. We obtained mean RMSE values for check point residuals between image-based georeferencing and tachymetry of 2 cm in an indoor area, and 3 cm in an urban environment where the measurement distances are a multiple of those indoors. Moreover, in comparison to a solely image-based procedure, our integrated georeferencing approach showed a consistent accuracy increase by a factor of 2-3 at our outdoor test site. Due to pre-calibrated relative orientation parameters, images of all camera heads were oriented correctly in our challenging indoor environment. By performing self-calibration of relative orientation parameters among respective cameras of our vehicle-based mobile mapping system, remaining inaccuracies from suboptimal test field calibration were successfully compensated.

  16. ROBUST AND ACCURATE IMAGE-BASED GEOREFERENCING EXPLOITING RELATIVE ORIENTATION CONSTRAINTS

    Directory of Open Access Journals (Sweden)

    S. Cavegn

    2018-05-01

    Urban environments with extended areas of poor GNSS coverage as well as indoor spaces that often rely on real-time SLAM algorithms for camera pose estimation require sophisticated georeferencing in order to fulfill our high requirements of a few centimeters for absolute 3D point measurement accuracies. Since we focus on image-based mobile mapping, we extended the structure-from-motion pipeline COLMAP with georeferencing capabilities by integrating exterior orientation parameters from direct sensor orientation or SLAM as well as ground control points into bundle adjustment. Furthermore, we exploit constraints for relative orientation parameters among all cameras in bundle adjustment, which leads to a significant robustness and accuracy increase especially by incorporating highly redundant multi-view image sequences. We evaluated our integrated georeferencing approach on two data sets, one captured outdoors by a vehicle-based multi-stereo mobile mapping system and the other captured indoors by a portable panoramic mobile mapping system. We obtained mean RMSE values for check point residuals between image-based georeferencing and tachymetry of 2 cm in an indoor area, and 3 cm in an urban environment where the measurement distances are a multiple of those indoors. Moreover, in comparison to a solely image-based procedure, our integrated georeferencing approach showed a consistent accuracy increase by a factor of 2–3 at our outdoor test site. Due to pre-calibrated relative orientation parameters, images of all camera heads were oriented correctly in our challenging indoor environment. By performing self-calibration of relative orientation parameters among respective cameras of our vehicle-based mobile mapping system, remaining inaccuracies from suboptimal test field calibration were successfully compensated.

  17. A hybrid load flow and event driven simulation approach to multi-state system reliability evaluation

    International Nuclear Information System (INIS)

    George-Williams, Hindolo; Patelli, Edoardo

    2016-01-01

    Structural complexity of systems, coupled with their multi-state characteristics, renders their reliability and availability evaluation difficult. Notwithstanding the emergence of various techniques dedicated to complex multi-state system analysis, simulation remains the only approach applicable to realistic systems. However, most simulation algorithms are either system specific or limited to simple systems since they require enumerating all possible system states, defining the cut-sets associated with each state and monitoring their occurrence. In addition to being extremely tedious for large complex systems, state enumeration and cut-set definition require a detailed understanding of the system's failure mechanism. In this paper, a simple and generally applicable simulation approach, enhanced for multi-state systems of any topology is presented. Here, each component is defined as a Semi-Markov stochastic process and via discrete-event simulation, the operation of the system is mimicked. The principles of flow conservation are invoked to determine flow across the system for every performance level change of its components using the interior-point algorithm. This eliminates the need for cut-set definition and overcomes the limitations of existing techniques. The methodology can also be exploited to account for effects of transmission efficiency and loading restrictions of components on system reliability and performance. The principles and algorithms developed are applied to two numerical examples to demonstrate their applicability. - Highlights: • A discrete event simulation model based on load flow principles. • Model does not require system path or cut sets. • Applicable to binary and multi-state systems of any topology. • Supports multiple output systems with competing demand. • Model is intuitive and generally applicable.
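
    The abstract's key step is re-evaluating flow across the system, under flow conservation, every time a component's performance level changes. The paper uses an interior-point algorithm; as a stand-in, the sketch below uses a plain Edmonds-Karp max-flow on a hypothetical four-node topology to show the same idea of recomputing throughput after a component degrades.

```python
from collections import defaultdict, deque

# Max flow (Edmonds-Karp) as a simple stand-in for the paper's interior-point
# flow solver. edges: dict (u, v) -> capacity. Topology below is illustrative.

def max_flow(edges, s, t):
    res = defaultdict(lambda: defaultdict(int))
    for (u, v), c in edges.items():
        res[u][v] += c
        res[v][u] += 0            # create the reverse residual edge
    total = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:          # BFS for an augmenting path
            u = q.popleft()
            for v, c in res[u].items():
                if v not in parent and c > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v)); v = parent[v]
        push = min(res[u][v] for u, v in path)  # bottleneck capacity
        for u, v in path:
            res[u][v] -= push
            res[v][u] += push
        total += push

healthy = max_flow({("src", "a"): 10, ("a", "sink"): 8,
                    ("src", "b"): 5,  ("b", "sink"): 5}, "src", "sink")
# component "a" drops to a degraded performance level
degraded = max_flow({("src", "a"): 10, ("a", "sink"): 2,
                     ("src", "b"): 5,  ("b", "sink"): 5}, "src", "sink")
```

    In the event-driven scheme described above, each semi-Markov component transition would trigger exactly one such flow re-evaluation.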

  18. EU Enlargement: Migration flows from Central and Eastern Europe into the Nordic countries - exploiting a natural experiment

    DEFF Research Database (Denmark)

    Pedersen, Peder J.; Pytlikova, Mariola

    In this paper we look at migration flows from 10 Central and Eastern European Countries (CEEC) to 5 Nordic countries over the years 1985 - 2007. We exploit a natural experiment that arose from the fact that while Sweden opened its labour market from day one of the 2004 EU enlargement, and Finland and Iceland from 2006, the other Nordic countries chose a transition period in relation to the "new" EU members. The results based on a differences-in-differences estimator show that the estimated effect of the opening of the Swedish, Finnish and Icelandic labour markets on migration from the CEECs that entered the EU in 2004 is not significantly different from zero. However, the effect of the opening of the Swedish and Finnish labour markets in 2007 on migration from the 2007 EU entrants, Bulgaria and Romania, is significantly positive. Further, we are interested in the overall effect ...
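
    The differences-in-differences estimator used above compares the pre/post change in a treated group with the change in a control group. A minimal sketch with invented migration rates (the country assignment and all numbers are illustrative, not the paper's data):

```python
# Differences-in-differences (DD): effect = (treated change) - (control change).
# "Treated" = flows into a country that opened its labour market in 2004;
# "control" = flows into a country that kept transition rules. Rates invented.

def did(treated_pre, treated_post, control_pre, control_post):
    return (treated_post - treated_pre) - (control_post - control_pre)

def mean(xs):
    return sum(xs) / len(xs)

sweden_pre,  sweden_post  = [1.0, 1.2], [2.0, 2.2]   # per-1000 rates, invented
denmark_pre, denmark_post = [0.8, 1.0], [1.5, 1.7]

effect = did(mean(sweden_pre), mean(sweden_post),
             mean(denmark_pre), mean(denmark_post))   # 1.0 - 0.7 = 0.3
```

    The design nets out the common post-2004 shock (EU entry itself) that hits both destinations, isolating the labour-market-opening effect.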

  19. A potential approach for low flow selection in water resource supply and management

    Science.gov (United States)

    Ouyang, Ying

    2012-08-01

    Low flow selections are essential to water resource management, water supply planning, and watershed ecosystem restoration. In this study, a new approach, namely the frequent-low (FL) approach (or frequent-low index), was developed based on the minimum frequent-low flow or level used in the minimum flows and/or levels program in northeast Florida, USA. This FL approach was then compared to the conventional 7Q10 approach for low flow selections prior to its applications, using USGS flow data from a freshwater environment (Big Sunflower River, Mississippi) as well as from an estuarine environment (St. Johns River, Florida). Unlike the FL approach, which is associated with biological and ecological impacts, the 7Q10 approach can lead to the selection of extremely low flows (e.g., near-zero flows), which may hinder its use for establishing criteria to prevent significant harm to biological and ecological communities in streams. Additionally, the 7Q10 approach cannot by definition be used when the period of data records is less than 10 years, while this may not be the case for the FL approach. Results from both approaches showed that the low flows of the Big Sunflower River and the St. Johns River decreased as time elapsed, demonstrating that these two rivers have become drier during the last several decades, with a potential for saltwater intrusion into the St. Johns River. Results from the FL approach further revealed that the recurrence probability of low flow increased while the recurrence interval of low flow decreased over time in both rivers, indicating that low flows occurred more frequently as time elapsed. This study suggests that the FL approach is a useful alternative to the 7Q10 approach for low flow selections.
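
    The 7Q10 statistic the abstract compares against is the annual minimum 7-day moving-average flow with a 10-year recurrence interval. In practice a frequency distribution (e.g. log-Pearson III) is fitted to the annual minima; the sketch below uses a plain empirical 10% quantile as a simplification, on synthetic daily flows.

```python
# Simplified 7Q10 pipeline: annual minima of the 7-day moving average,
# then an empirical 10%-quantile in place of a fitted frequency curve.
# Flow data below are synthetic, for illustration only.

def seven_day_minima(daily_flows_by_year):
    """Annual minima of the 7-day moving-average flow."""
    minima = []
    for flows in daily_flows_by_year:
        avgs = [sum(flows[i:i + 7]) / 7 for i in range(len(flows) - 6)]
        minima.append(min(avgs))
    return minima

def empirical_7q10(minima):
    """Approximate the 10-year low flow as the empirical 10% quantile."""
    s = sorted(minima)
    k = max(0, int(round(0.1 * (len(s) - 1))))
    return s[k]

# 20 synthetic years of daily flows with a mild drying trend
years = [[50 + (d % 30) - y for d in range(365)] for y in range(20)]
q7_10 = empirical_7q10(seven_day_minima(years))
```

    The FL approach differs in that its threshold is tied to a biologically meaningful frequent-low level rather than to this purely statistical recurrence quantile.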

  20. An Exploitability Analysis Technique for Binary Vulnerability Based on Automatic Exception Suppression

    Directory of Open Access Journals (Sweden)

    Zhiyuan Jiang

    2018-01-01

    To quickly verify and fix vulnerabilities, it is necessary to judge the exploitability of the massive number of crashes generated by automated vulnerability mining tools. Manual analysis of crashes is inefficient and time-consuming, while existing automated tools can only handle execute exceptions and some write exceptions, but not common read exceptions. To address this problem, we propose a method of determining exploitability based on exception type suppression. This method enables the program to continue to execute until an exploitable exception is triggered. The method performs a symbolic replay of the crash sample, constructing and reusing data gadgets to bypass complex exceptions, thereby improving the efficiency and accuracy of vulnerability exploitability analysis. Testing on typical CGC/RHG binary software shows that this method can automatically convert a crash that cannot be judged by existing analysis tools into a different crash type and judge its exploitability successfully.

  1. A Simple Approach to Characterize Gas-Aqueous Liquid Two-phase Flow Configuration Based on Discrete Solid-Liquid Contact Electrification.

    Science.gov (United States)

    Choi, Dongwhi; Lee, Donghyeon; Kim, Dong Sung

    2015-10-14

    In this study, we first suggest a simple approach to characterizing the configuration of gas-aqueous liquid two-phase flow based on discrete solid-liquid contact electrification, a newly defined concept denoting the sequential process of solid-liquid contact and successive detachment of the contact liquid from the solid surface. This approach exhibits several advantages, such as simple operation, precise measurement, and cost-effectiveness. By using the electric potential that is spontaneously generated by discrete solid-liquid contact electrification, the configurations of the gas-aqueous liquid two-phase flow, such as the size of a gas slug and the flow rate, are precisely characterized. According to the experimental and numerical analyses of the parameters that affect the electric potential, gas slugs have been verified to behave similarly to point electric charges when the measuring point of the electric potential is far enough from the gas slug. In addition, the configuration of a gas-aqueous liquid two-phase microfluidic system with multiple gas slugs is also characterized using the presented approach. As a proof-of-concept demonstration of the proposed approach in a self-triggered sensor, a gas slug detector with a counter system is developed to show its practicality and applicability.

  2. Chitosan-based nanosystems and their exploited antimicrobial activity.

    Science.gov (United States)

    Perinelli, Diego Romano; Fagioli, Laura; Campana, Raffaella; Lam, Jenny K W; Baffone, Wally; Palmieri, Giovanni Filippo; Casettari, Luca; Bonacucina, Giulia

    2018-05-30

    Chitosan is a biodegradable and biocompatible natural polysaccharide that has a wide range of applications in the fields of pharmaceutics, biomedicine, chemicals, cosmetics, and the textile and food industries. One of the most interesting characteristics of chitosan is its antibacterial and antifungal activity, and together with its excellent safety profile in humans, it has attracted considerable attention in various research disciplines. The antimicrobial activity of chitosan depends on a number of factors, including its molecular weight, degree of deacetylation, degree of substitution, physical form, as well as the structural properties of the cell wall of the target microorganisms. While the sole use of chitosan may not be sufficient to produce an adequate antimicrobial effect for different purposes, the incorporation of this biopolymer with other active substances such as drugs, metals and natural compounds in nanosystems is a commonly employed strategy to enhance its antimicrobial potential. In this review, we aim to provide an overview of the different approaches that exploit the antimicrobial activity of chitosan-based nanosystems and their applications, and highlight the latest advances in this field. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Surrogate Model Application to the Identification of Optimal Groundwater Exploitation Scheme Based on Regression Kriging Method—A Case Study of Western Jilin Province

    Directory of Open Access Journals (Sweden)

    Yongkai An

    2015-07-01

    This paper introduces a surrogate model to identify an optimal exploitation scheme, with western Jilin Province selected as the study area. A numerical simulation model of groundwater flow was established first, and four exploitation wells were set in Tongyu County and Qian Gorlos County, respectively, to supply water to Daan County. Second, the Latin Hypercube Sampling (LHS) method was used to collect data in the feasible region for the input variables. A surrogate model of the numerical simulation model of groundwater flow was developed using the regression kriging method. An optimization model was established to search for an optimal groundwater exploitation scheme, using the minimum average drawdown of the groundwater table and the minimum cost of groundwater exploitation as multi-objective functions. Finally, the surrogate model was invoked by the optimization model in the process of solving the optimization problem. Results show that the relative error and root mean square error of the groundwater table drawdown between the simulation model and the surrogate model for 10 validation samples are both lower than 5%, which is a high approximation accuracy. A comparison between the surrogate-based simulation optimization model and the conventional simulation optimization model for solving the same optimization problem shows that the former needs only 5.5 hours while the latter needs 25 days. The above results indicate that the surrogate model developed in this study can not only considerably reduce the computational burden of the simulation optimization process, but also maintain high computational accuracy. This can thus provide an effective method for identifying an optimal groundwater exploitation scheme quickly and accurately.
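
    The Latin Hypercube Sampling step above generates the training points for the surrogate: each input dimension is split into n equal strata and each stratum is sampled exactly once. A minimal sketch; the two "pumping rate" bounds are hypothetical, not the study's values.

```python
import random

# Latin Hypercube Sampling: n samples over len(bounds) dimensions, with every
# one of the n strata of each dimension hit exactly once. Bounds illustrative.

def latin_hypercube(n, bounds, seed=42):
    """bounds = [(lo, hi), ...]; returns n samples as coordinate tuples."""
    rng = random.Random(seed)
    cols = []
    for lo, hi in bounds:
        # one uniform point inside each of the n strata, in shuffled order
        pts = [lo + (hi - lo) * (i + rng.random()) / n for i in range(n)]
        rng.shuffle(pts)
        cols.append(pts)
    return list(zip(*cols))

# e.g. pumping rates (m^3/day) for two hypothetical well groups
samples = latin_hypercube(10, [(0.0, 500.0), (0.0, 800.0)])
```

    Each sampled point would then be run through the groundwater flow model once, and the (input, drawdown) pairs used to fit the regression kriging surrogate.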

  4. Fast incorporation of optical flow into active polygons.

    Science.gov (United States)

    Unal, Gozde; Krim, Hamid; Yezzi, Anthony

    2005-06-01

    In this paper, we first reconsider, in a different light, the addition of a prediction step to active contour-based visual tracking using optical flow, and clarify the local computation of the latter along the boundaries of continuous active contours with appropriate regularizers. We subsequently detail our contribution of computing an optical flow-based prediction step directly from the parameters of an active polygon, and of exploiting it in object tracking. This is in contrast to an explicitly separate computation of the optical flow and its ad hoc application, and it also provides an inherent regularization effect resulting from integrating measurements along polygon edges. As a result, we completely avoid the need to add ad hoc regularizing terms to the optical flow computations, and the inevitably arbitrary associated weighting parameters. This direct integration of optical flow into the active polygon framework distinguishes this technique from most previous contour-based approaches, where regularization terms are theoretically, as well as practically, essential. The greater robustness and speed due to the reduced number of parameters of this technique are additional appealing features.
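
    For background, the optical-flow constraint underlying such prediction steps is brightness constancy, Ix·u + Iy·v + It = 0, classically solved in the least-squares sense over a window (Lucas-Kanade). The paper's contribution is to integrate this constraint along polygon edges instead; the sketch below shows only the classical per-window solution, on synthetic gradients.

```python
# Lucas-Kanade least-squares solution of the brightness-constancy constraint
# over a small window. Gradient samples are synthetic, not from real frames.

def lucas_kanade(grads):
    """grads: list of (Ix, Iy, It) samples in a window -> flow (u, v)."""
    a11 = sum(ix * ix for ix, _, _ in grads)
    a12 = sum(ix * iy for ix, iy, _ in grads)
    a22 = sum(iy * iy for _, iy, _ in grads)
    b1 = -sum(ix * it for ix, _, it in grads)
    b2 = -sum(iy * it for _, iy, it in grads)
    det = a11 * a22 - a12 * a12        # normal-equations determinant
    if abs(det) < 1e-12:
        raise ValueError("aperture problem: gradient matrix is singular")
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# A pattern translating by (1, 2) px/frame satisfies It = -(Ix*1 + Iy*2)
window = [(1.0, 0.0, -1.0), (0.0, 1.0, -2.0), (1.0, 1.0, -3.0)]
u, v = lucas_kanade(window)            # recovers (1.0, 2.0)
```

    Integrating the same measurements along polygon edges, as the paper does, plays the role of the window here and supplies the regularization without extra penalty terms.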

  5. Anthropology of sexual exploitation

    Directory of Open Access Journals (Sweden)

    Lalić Velibor

    2009-01-01

    In this paper, the authors observe sexual exploitation from an anthropological perspective. They analyze the rational, ethical, emotional and mythological dimensions of human sexuality. After setting the phenomenon in its social and historical context, sexual exploitation is closely observed in the contemporary age. Drawing on the thoughts of relevant thinkers, they conclude that the elimination of sexual exploitation is not merely a legal issue but also a political and economic one: legal norms alone are not sufficient to overcome sexual exploitation; political and economic relationships based on genuinely equal opportunities must be established in contemporary societies.

  6. An Airway Network Flow Assignment Approach Based on an Efficient Multiobjective Optimization Framework

    Directory of Open Access Journals (Sweden)

    Xiangmin Guan

    2015-01-01

    To reduce airspace congestion and flight delay simultaneously, this paper formulates the airway network flow assignment (ANFA) problem as a multiobjective optimization model and presents a new multiobjective optimization framework to solve it. Firstly, an effective multi-island parallel evolution algorithm with multiple evolution populations is employed to improve the optimization capability. Secondly, the nondominated sorting genetic algorithm II is applied to each population. In addition, a cooperative coevolution algorithm is adapted to divide the ANFA problem into several low-dimensional biobjective optimization problems which are easier to deal with. Finally, in order to maintain the diversity of solutions and to avoid prematurity, a dynamic adjustment operator based on solution congestion degree is specifically designed for the ANFA problem. Simulation results using real traffic data from the China air route network and daily flight plans demonstrate that the proposed approach can improve the solution quality effectively, showing superiority to existing approaches such as the multiobjective genetic algorithm, the well-known multiobjective evolutionary algorithm based on decomposition, and a cooperative coevolution multiobjective algorithm, as well as other parallel evolution algorithms with different migration topologies.

  7. A service and value based approach to estimating environmental flows

    DEFF Research Database (Denmark)

    Korsgaard, Louise; Jensen, R.A.; Jønch-Clausen, Torkil

    2008-01-01

    An important challenge of Integrated Water Resources Management (IWRM) is to balance water allocation between different users and uses. While economically and/or politically powerful users have relatively well developed methods for quantifying and justifying their water needs, this is not the case ... methodologies. The SPI approach is a pragmatic and transparent tool for incorporating ecosystems and environmental flows into the evaluation of water allocation scenarios, negotiations of trade-offs and decision-making in IWRM.

  8. EU Enlargement: Migration flows from Central and Eastern Europe into the Nordic countries - exploiting a natural experiment

    DEFF Research Database (Denmark)

    Pytlikova, Mariola; Pedersen, Peder J.

    We look at migration flows from 8 Central and Eastern European Countries (CEECs) to 5 Nordic countries over the years 1985 - 2005, and we can exploit a natural experiment that arose from the fact that while Sweden opened its labour market from day one of the 2004 EU enlargement, the other Nordic countries chose a transition period in relation to the "new" EU members. We employ a differences-in-differences estimator in our analysis. The results show that the estimated effect of the opening of the Swedish labour market in 2004 on migration is insignificantly different from zero. Further, we are interested in the overall effect of the "EU entry" on migration. Therefore we look at migration flows from CEECs during the first round EU enlargement towards CEECs in 2004 and compare them with migration flows from Bulgaria and Romania. We again used a DD estimator in our analysis. The estimated effect ...

  9. Partial Averaged Navier-Stokes approach for cavitating flow

    International Nuclear Information System (INIS)

    Zhang, L; Zhang, Y N

    2015-01-01

    Partial Averaged Navier-Stokes (PANS) is a numerical approach developed for studying practical engineering problems (e.g. cavitating flow inside hydroturbines) with a reasonable cost and accuracy. One of the advantages of PANS is that it is suitable for any filter width, providing a bridging method from traditional Reynolds Averaged Navier-Stokes (RANS) to direct numerical simulation by choosing appropriate parameters. Compared with RANS, the PANS model inherits much of the physical nature of its parent RANS model but resolves more scales of motion in greater detail, making PANS superior to RANS. As an important step in the PANS approach, one needs to identify appropriate physical filter-width control parameters, e.g. the ratios of unresolved-to-total kinetic energy and dissipation. In the present paper, recent studies of cavitating flow based on the PANS approach are introduced, with a focus on the influence of the filter-width control parameters on the simulation results.

  10. HBC-Evo: predicting human breast cancer by exploiting amino acid sequence-based feature spaces and evolutionary ensemble system.

    Science.gov (United States)

    Majid, Abdul; Ali, Safdar

    2015-01-01

    We developed a genetic programming (GP)-based evolutionary ensemble system for the early diagnosis, prognosis and prediction of human breast cancer. This system effectively exploits the diversity in feature and decision spaces. First, individual learners are trained in different feature spaces using physicochemical properties of protein amino acids. Their predictions are then stacked to develop the best solution during the GP evolution process. Finally, results for the HBC-Evo system are obtained with an optimal threshold, which is computed using particle swarm optimization. Our novel approach has demonstrated promising results compared to state-of-the-art approaches.

  11. Flow Formulations for Curriculum-based Course Timetabling

    DEFF Research Database (Denmark)

    Bagger, Niels-Christian Fink; Kristiansen, Simon; Sørensen, Matias

    2017-01-01

    In this paper we present two mixed-integer programming formulations for the Curriculum-based Course Timetabling Problem (CTT). We show that the formulations contain underlying network structures by dividing the CTT into two separate models and then connecting the two models using flow formulation techniques. The first mixed-integer programming formulation is based on an underlying minimum cost flow problem, which decreases the number of integer variables significantly and improves the performance compared to an intuitive mixed-integer programming formulation. The second formulation is based...... lower bound on one data instance in the benchmark data set from the second international timetabling competition. Regarding upper bounds, the formulation based on the minimum cost flow problem performs better on average than other mixed integer programming approaches for the CTT.

  12. Evaluating Maximum Wind Energy Exploitation in Active Distribution Networks

    DEFF Research Database (Denmark)

    Siano, Pierluigi; Chen, Peiyuan; Chen, Zhe

    2010-01-01

    The increased spreading of distributed and renewable generation requires moving towards active management of distribution networks. In this paper, in order to evaluate maximum wind energy exploitation in active distribution networks, a method based on a multi-period optimal power flow (OPF) analysis is proposed. Active network management schemes such as coordinated voltage control, energy curtailment and power factor control are integrated in the method in order to investigate their impacts on the maximization of wind energy exploitation. Some case studies, using real data from a Danish distribution system, confirmed the effectiveness of the proposed method in evaluating the optimal applications of active management schemes to increase wind energy harvesting without costly network reinforcement for the connection of wind generation.

  13. A Simplified Micromechanical Modeling Approach to Predict the Tensile Flow Curve Behavior of Dual-Phase Steels

    Science.gov (United States)

    Nanda, Tarun; Kumar, B. Ravi; Singh, Vishal

    2017-11-01

    Micromechanical modeling is used to predict a material's tensile flow curve behavior from microstructural characteristics. This research develops a simplified micromechanical modeling approach for predicting the flow curve behavior of dual-phase steels. The existing literature reports two broad approaches for determining the tensile flow curve of these steels. The modeling approach developed in this work attempts to overcome specific limitations of the two existing approaches by combining a dislocation-based strain-hardening method with the rule of mixtures. In the first step of modeling, the dislocation-based strain-hardening method was employed to predict the tensile behavior of the individual ferrite and martensite phases. In the second step, the individual flow curves were combined using the rule of mixtures to obtain the composite dual-phase flow behavior. To check the accuracy of the proposed model, four distinct dual-phase microstructures comprising different ferrite grain sizes, martensite fractions, and carbon contents in martensite were processed by annealing experiments. The true stress-strain curves for the various microstructures were predicted with the newly developed micromechanical model. The results of the micromechanical model matched closely with those of actual tensile tests. Thus, this micromechanical modeling approach can be used to predict and optimize the tensile flow behavior of dual-phase steels.
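The two-step scheme described above (per-phase hardening law, then rule of mixtures) can be sketched as follows. The Swift-type law and all parameter values are illustrative stand-ins for the paper's dislocation-based description, not fitted values:

```python
import numpy as np

def phase_flow_stress(strain, sigma0, K, n):
    """Hypothetical Swift-type hardening law standing in for the
    dislocation-based strain-hardening description of one phase
    (sigma0, K, n are illustrative, not fitted values)."""
    return sigma0 + K * strain ** n

def dual_phase_flow_curve(strain, f_martensite, ferrite, martensite):
    """Rule of mixtures: phase-fraction-weighted sum of the ferrite and
    martensite flow stresses at equal strain."""
    return ((1.0 - f_martensite) * phase_flow_stress(strain, *ferrite)
            + f_martensite * phase_flow_stress(strain, *martensite))

eps = np.linspace(0.0, 0.10, 11)
curve = dual_phase_flow_curve(eps, f_martensite=0.3,
                              ferrite=(300.0, 600.0, 0.30),     # MPa
                              martensite=(1200.0, 800.0, 0.10)) # MPa
# curve[0] = 0.7*300 + 0.3*1200 = 570 MPa, rising monotonically with strain.
```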

  14. A Brief Introduction of Task-based Approach

    Institute of Scientific and Technical Information of China (English)

    王丹

    2012-01-01

    The task-based language teaching approach is one of the syllabus models proposed in the last twenty years or so. The task-based syllabus represents a particular realization of communicative language teaching. Task-based teaching and learning helps develop students' communicative competence, enabling them to communicate effectively and engage in interaction in real-world settings. The most active element in the process of task-based teaching is the learner's creativity. By exploiting this kind of creativity, learning can be made significantly more efficient and more interesting. It is well known that task-based teaching and learning has richer potential for promoting successful second language learning than traditional teaching and learning. The task-based approach is applied not only in China but also in other countries and regions, such as America, Canada, Singapore, and Hong Kong.

  15. Modeling of annular two-phase flow using a unified CFD approach

    Energy Technology Data Exchange (ETDEWEB)

    Li, Haipeng, E-mail: haipengl@kth.se; Anglart, Henryk, E-mail: henryk@kth.se

    2016-07-15

    Highlights: • Annular two-phase flow has been modeled using a unified CFD approach. • The liquid film was modeled based on a two-dimensional thin-film assumption. • Both Eulerian and Lagrangian methods were employed for the gas core flow modeling. - Abstract: A mechanistic model of annular flow with an evaporating liquid film has been developed using computational fluid dynamics (CFD). The model employs a separate solver with two-dimensional conservation equations to predict the propagation of a thin boiling liquid film on solid walls. The liquid film model is coupled to a solver of three-dimensional conservation equations describing the gas core, which is assumed to contain a saturated mixture of vapor and liquid droplets. Both the Eulerian–Eulerian and the Eulerian–Lagrangian approaches are used to describe the droplet and vapor motion in the gas core. All the major interaction phenomena between the liquid film and the gas core flow have been accounted for, including liquid film evaporation as well as droplet deposition and entrainment. The resulting unified framework for annular flow has been applied to steam-water flow at conditions typical of a Boiling Water Reactor (BWR). The simulation results for the liquid film flow rate show good agreement with the experimental data, with the potential to predict dryout occurrence based on criteria of critical film thickness or critical film flow rate.

  16. Modeling of annular two-phase flow using a unified CFD approach

    International Nuclear Information System (INIS)

    Li, Haipeng; Anglart, Henryk

    2016-01-01

    Highlights: • Annular two-phase flow has been modeled using a unified CFD approach. • The liquid film was modeled based on a two-dimensional thin-film assumption. • Both Eulerian and Lagrangian methods were employed for the gas core flow modeling. - Abstract: A mechanistic model of annular flow with an evaporating liquid film has been developed using computational fluid dynamics (CFD). The model employs a separate solver with two-dimensional conservation equations to predict the propagation of a thin boiling liquid film on solid walls. The liquid film model is coupled to a solver of three-dimensional conservation equations describing the gas core, which is assumed to contain a saturated mixture of vapor and liquid droplets. Both the Eulerian–Eulerian and the Eulerian–Lagrangian approaches are used to describe the droplet and vapor motion in the gas core. All the major interaction phenomena between the liquid film and the gas core flow have been accounted for, including liquid film evaporation as well as droplet deposition and entrainment. The resulting unified framework for annular flow has been applied to steam-water flow at conditions typical of a Boiling Water Reactor (BWR). The simulation results for the liquid film flow rate show good agreement with the experimental data, with the potential to predict dryout occurrence based on criteria of critical film thickness or critical film flow rate.

  17. Six scenarios of exploiting an ontology based, mobilized learning environment

    NARCIS (Netherlands)

    Kismihók, G.; Szabó, I.; Vas, R.

    2012-01-01

    In this article, six different exploitation possibilities of an educational ontology based, mobilized learning management system are presented. The focal point of this system is the educational ontology model. The first version of this educational ontology model serves as a foundation for curriculum

  18. All-Fullerene-Based Cells for Nonaqueous Redox Flow Batteries.

    Science.gov (United States)

    Friedl, Jochen; Lebedeva, Maria A; Porfyrakis, Kyriakos; Stimming, Ulrich; Chamberlain, Thomas W

    2018-01-10

    Redox flow batteries have the potential to revolutionize our use of intermittent sustainable energy sources such as solar and wind power by storing the energy in liquid electrolytes. Our concept study utilizes a novel electrolyte system, exploiting derivatized fullerenes as both anolyte and catholyte species in a series of battery cells, including a symmetric, single species system which alleviates the common problem of membrane crossover. The prototype multielectron system, utilizing molecular based charge carriers, made from inexpensive, abundant, and sustainable materials, principally, C and Fe, demonstrates remarkable current and energy densities and promising long-term cycling stability.

  19. Microfluidic volumetric flow determination using optical coherence tomography speckle: An autocorrelation approach

    Energy Technology Data Exchange (ETDEWEB)

    De Pretto, Lucas R., E-mail: lucas.de.pretto@usp.br; Nogueira, Gesse E. C.; Freitas, Anderson Z. [Instituto de Pesquisas Energéticas e Nucleares, IPEN–CNEN/SP, Avenida Lineu Prestes, 2242, 05508-000 São Paulo (Brazil)

    2016-04-28

    Functional modalities of Optical Coherence Tomography (OCT) based on speckle analysis are emerging in the literature. We propose a simple approach to the autocorrelation of OCT signal to enable volumetric flow rate differentiation, based on decorrelation time. Our results show that this technique could distinguish flows separated by 3 μl/min, limited by the acquisition speed of the system. We further perform a B-scan of gradient flow inside a microchannel, enabling the visualization of the drag effect on the walls.
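The decorrelation-time idea behind this record can be sketched with synthetic data: estimate the signal's autocorrelation and take the first lag where it falls below 1/e. The AR(1) surrogate below merely mimics speckle that decorrelates faster at higher flow; it is an assumption for illustration, not the authors' processing chain.

```python
import numpy as np

def decorrelation_time(signal, dt):
    """First lag at which the normalized autocorrelation of the
    mean-removed signal drops below 1/e -- a simple proxy for the
    speckle decorrelation time."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    acf = acf / acf[0]
    below = np.nonzero(acf < 1.0 / np.e)[0]
    return below[0] * dt if below.size else np.inf

# AR(1) surrogates for speckle intensity: faster flow is mimicked by a
# smaller correlation coefficient, i.e. quicker decorrelation.
rng = np.random.default_rng(0)

def surrogate(n, alpha):
    x = np.empty(n)
    x[0] = rng.standard_normal()
    for i in range(1, n):
        x[i] = alpha * x[i - 1] + np.sqrt(1 - alpha ** 2) * rng.standard_normal()
    return x

slow_flow = decorrelation_time(surrogate(8000, 0.99), dt=1.0)
fast_flow = decorrelation_time(surrogate(8000, 0.90), dt=1.0)
# fast_flow decorrelates sooner than slow_flow, which is the contrast
# the autocorrelation approach exploits to distinguish flow rates.
```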

  20. An acoustic-convective splitting-based approach for the Kapila two-phase flow model

    Energy Technology Data Exchange (ETDEWEB)

    Eikelder, M.F.P. ten, E-mail: m.f.p.teneikelder@tudelft.nl [EDF R& D, AMA, 7 boulevard Gaspard Monge, 91120 Palaiseau (France); Eindhoven University of Technology, Department of Mathematics and Computer Science, P.O. Box 513, 5600 MB Eindhoven (Netherlands); Daude, F. [EDF R& D, AMA, 7 boulevard Gaspard Monge, 91120 Palaiseau (France); IMSIA, UMR EDF-CNRS-CEA-ENSTA 9219, Université Paris Saclay, 828 Boulevard des Maréchaux, 91762 Palaiseau (France); Koren, B.; Tijsseling, A.S. [Eindhoven University of Technology, Department of Mathematics and Computer Science, P.O. Box 513, 5600 MB Eindhoven (Netherlands)

    2017-02-15

    In this paper we propose a new acoustic-convective splitting-based numerical scheme for the Kapila five-equation two-phase flow model. The splitting operator decouples the acoustic waves and convective waves. The resulting two submodels are alternately numerically solved to approximate the solution of the entire model. The Lagrangian form of the acoustic submodel is numerically solved using an HLLC-type Riemann solver whereas the convective part is approximated with an upwind scheme. The result is a simple method which allows for a general equation of state. Numerical computations are performed for standard two-phase shock tube problems. A comparison is made with a non-splitting approach. The results are in good agreement with reference results and exact solutions.
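The alternate solution of two submodels described above is an instance of (Lie) operator splitting. A toy sketch on a linear system du/dt = (A + B)u, with stand-in "acoustic" and "convective" operators chosen so each substep can be solved exactly, shows the characteristic first-order accuracy of sequential splitting:

```python
import numpy as np

# Stand-in "acoustic" and "convective" operators for the linear toy
# problem du/dt = (A + B) u; both are nilpotent, so each substep is
# solved exactly by expm(A h) = I + A h and expm(B h) = I + B h.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])

def exact(t):
    # For this pair, expm((A+B)t) = [[cosh t, sinh t], [sinh t, cosh t]].
    return np.array([[np.cosh(t), np.sinh(t)], [np.sinh(t), np.cosh(t)]])

def lie_split(t, n):
    """Advance du/dt = (A+B)u by n sequential (Lie) splitting steps,
    alternately applying the exactly solved A- and B-substeps."""
    h = t / n
    step = (np.eye(2) + B * h) @ (np.eye(2) + A * h)
    u = np.eye(2)
    for _ in range(n):
        u = step @ u
    return u

err_coarse = np.abs(lie_split(1.0, 100) - exact(1.0)).max()
err_fine = np.abs(lie_split(1.0, 200) - exact(1.0)).max()
# Sequential splitting is first-order: halving h roughly halves the error.
```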

  1. Fourier-based approach to interpolation in single-slice helical computed tomography

    International Nuclear Information System (INIS)

    La Riviere, Patrick J.; Pan Xiaochuan

    2001-01-01

    It has recently been shown that longitudinal aliasing can be a significant and detrimental presence in reconstructed single-slice helical computed tomography (CT) volumes. This aliasing arises because the directly measured data in helical CT are generally undersampled by a factor of at least 2 in the longitudinal direction and because the exploitation of the redundancy of fanbeam data acquired over 360° to generate additional longitudinal samples does not automatically eliminate the aliasing. In this paper we demonstrate that for pitches near 1 or lower, the redundant fanbeam data, when used properly, can provide sufficient information to satisfy a generalized sampling theorem and thus to eliminate aliasing. We develop and evaluate a Fourier-based algorithm, called 180FT, that accomplishes this. As background we present a second Fourier-based approach, called 360FT, that makes use only of the directly measured data. Both Fourier-based approaches exploit the fast Fourier transform and the Fourier shift theorem to generate from the helical projection data a set of fanbeam sinograms corresponding to equispaced transverse slices. Slice-by-slice reconstruction is then performed by use of two-dimensional fanbeam algorithms. The proposed approaches are compared to their counterparts based on the use of linear interpolation - the 360LI and 180LI approaches. The aliasing suppression property of the 180FT approach is a clear advantage of the approach and represents a step toward the desirable goal of achieving uniform longitudinal resolution properties in reconstructed helical CT volumes.
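The core tool named in this record, the Fourier shift theorem, lets one resample a band-limited signal at fractionally shifted positions by multiplying its spectrum with a phase ramp. A minimal one-dimensional sketch (not the 180FT/360FT algorithms themselves):

```python
import numpy as np

def fourier_shift(samples, delta):
    """Resample a band-limited periodic signal at x - delta by applying
    the Fourier shift theorem: multiply the FFT by exp(-2*pi*i*f*delta)."""
    freqs = np.fft.fftfreq(samples.size)
    spectrum = np.fft.fft(samples)
    return np.real(np.fft.ifft(spectrum * np.exp(-2j * np.pi * freqs * delta)))

n = 64
x = np.arange(n)
signal = np.sin(2 * np.pi * 3 * x / n)      # band-limited test signal

shifted = fourier_shift(signal, 0.5)        # half-sample longitudinal shift
expected = np.sin(2 * np.pi * 3 * (x - 0.5) / n)
# 'shifted' matches the analytically shifted sinusoid to machine precision.
```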

  2. Up-Rating - An Alternative Approach to Meeting Future Power Demands - Exploitation of Design Margins

    Energy Technology Data Exchange (ETDEWEB)

    Bruce, Barnaby; Schwarz, Thomas [AREVA NP GmbH, Freyeslebenstr. 1, 91058 Erlangen (Germany)

    2008-07-01

    Up-rating is a world-wide implemented approach that takes advantage of the increased calculation and analytic abilities developed since commissioning and applies them to old plants. In doing so, what would possibly be considered today as over-engineered design margins are exploited and plant performance is improved, without necessarily involving extensive modifications or replacement of hardware. It is therefore a short-term alternative to new plants, with little change in environmental ramifications for the power production capacity gained. Up-rating is also more accepted by the wider community and licensing authorities, thus complementing the building of new plants. The 10% thermal up-rating of the nuclear power plant at Almaraz, Spain, requires a comprehensive reanalysis of all power components. This paper focuses on those measures required to ensure the performance of the steam generators at increased load, as an example of design margin exploitation in such crucial components. (authors)

  3. Agent-Based Collaborative Traffic Flow Management, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose agent-based game-theoretic approaches for simulation of strategies involved in multi-objective collaborative traffic flow management (CTFM). Intelligent...

  4. A combined triggering-propagation modeling approach for the assessment of rainfall induced debris flow susceptibility

    Science.gov (United States)

    Stancanelli, Laura Maria; Peres, David Johnny; Cancelliere, Antonino; Foti, Enrico

    2017-07-01

    Rainfall-induced shallow slides can evolve into debris flows that move rapidly downstream with devastating consequences. Mapping the susceptibility to debris flow is an important aid for risk mitigation. We propose a novel practical approach to derive debris flow inundation maps useful for susceptibility assessment, based on the integrated use of DEM-based spatially distributed hydrological and slope stability models with debris flow propagation models. More specifically, the TRIGRS infiltration and infinite-slope stability model and the FLO-2D model for the simulation of the related debris flow propagation and deposition are combined. An empirical instability-to-debris-flow triggering threshold, calibrated on the basis of observed events, is applied to link the two models and to accomplish the task of determining the amount of unstable mass that develops as a debris flow. Calibration of the proposed methodology is carried out based on real data of the debris flow event that occurred on 1 October 2009 in the Peloritani mountains area (Italy). Model performance, assessed by receiver operating characteristic (ROC) indexes, evidences fairly good reproduction of the observed event. Comparison with the performance of the traditional debris flow modeling procedure, in which sediment and water hydrographs are input as lumped quantities at selected points at the top of the streams, is also performed, in order to assess quantitatively the limitations of such a commonly applied approach. Results show that the proposed method, besides being more process-consistent than the traditional hydrograph-based approach, can potentially provide a more accurate simulation of debris-flow phenomena, in terms of spatial patterns of erosion and deposition as well as in the quantification of mobilized volumes and depths, avoiding overestimation of the debris flow triggering volume and, thus, of maximum inundation flow depths.

  5. Exploration of Exploitation Approaches of European Projects on ICT and Foreign Language Learning: the CEFcult project case

    NARCIS (Netherlands)

    Rusman, Ellen; Rajagopal, Kamakshi; Stoyanov, Slavi; Van Maele, Jan

    2011-01-01

    Rusman, E., Rajagopal, K., Stoyanov, S., & Van Maele, J. (2011, 20-21 October). Exploration of Exploitation Approaches of European Projects on ICT and Foreign Language Learning: the CEFcult project case. Paper presented at the 4th International Conference ICT for Language Learning, Florence, Italy.

  6. Exploiting Non-Markovianity for Quantum Control.

    Science.gov (United States)

    Reich, Daniel M; Katz, Nadav; Koch, Christiane P

    2015-07-22

    Quantum technology, exploiting entanglement and the wave nature of matter, relies on the ability to accurately control quantum systems. Quantum control is often compromised by the interaction of the system with its environment since this causes loss of amplitude and phase. However, when the dynamics of the open quantum system is non-Markovian, amplitude and phase flow not only from the system into the environment but also back. Interaction with the environment is then not necessarily detrimental. We show that the back-flow of amplitude and phase can be exploited to carry out quantum control tasks that could not be realized if the system was isolated. The control is facilitated by a few strongly coupled, sufficiently isolated environmental modes. Our paradigmatic example considers a weakly anharmonic ladder with resonant amplitude control only, restricting realizable operations to SO(N). The coupling to the environment, when harnessed with optimization techniques, allows for full SU(N) controllability.

  7. Spatial dynamics of ecosystem service flows: a comprehensive approach to quantifying actual services

    Science.gov (United States)

    Bagstad, Kenneth J.; Johnson, Gary W.; Voigt, Brian; Villa, Ferdinando

    2013-01-01

    Recent ecosystem services research has highlighted the importance of spatial connectivity between ecosystems and their beneficiaries. Despite this need, a systematic approach to ecosystem service flow quantification has not yet emerged. In this article, we present such an approach, which we formalize as a class of agent-based models termed “Service Path Attribution Networks” (SPANs). These models, developed as part of the Artificial Intelligence for Ecosystem Services (ARIES) project, expand on ecosystem services classification terminology introduced by other authors. Conceptual elements needed to support flow modeling include a service's rivalness, its flow routing type (e.g., through hydrologic or transportation networks, lines of sight, or other approaches), and whether the benefit is supplied by an ecosystem's provision of a beneficial flow to people or by absorption of a detrimental flow before it reaches them. We describe our implementation of the SPAN framework for five ecosystem services and discuss how to generalize the approach to additional services. SPAN model outputs include maps of ecosystem service provision, use, depletion, and flows under theoretical, possible, actual, inaccessible, and blocked conditions. We highlight how these different ecosystem service flow maps could be used to support various types of decision making for conservation and resource management planning.

  8. Dynamic leaching and fractionation of trace elements from environmental solids exploiting a novel circulating-flow platform.

    Science.gov (United States)

    Mori, Masanobu; Nakano, Koji; Sasaki, Masaya; Shinozaki, Haruka; Suzuki, Shiho; Okawara, Chitose; Miró, Manuel; Itabashi, Hideyuki

    2016-02-01

    A dynamic flow-through microcolumn extraction system based on extractant re-circulation is herein proposed as a novel analytical approach for the simplification of bioaccessibility tests of trace elements in sediments. On-line metal leaching is undertaken in the format of all-injection (AI) analysis, a sequel to flow injection analysis but involving extraction under steady-state conditions. The minimum circulation times and flow rates required to determine the maximum bioaccessible pools of the target metals (viz., Cu, Zn, Cd, and Pb) from lake and river sediment samples were estimated using Tessier's sequential extraction scheme and an acid single-extraction test. The on-line AI method was successfully validated by mass balance studies of CRM and real sediment samples. Tessier's test in the on-line AI format was shown to be completed in one third of the extraction time (6 h against more than 17 h by the conventional method), with better analytical precision (15% by the conventional method) and a significant decrease in blank readouts compared with the manual batch counterpart.

  9. Neural network modeling for near wall turbulent flow

    International Nuclear Information System (INIS)

    Milano, Michele; Koumoutsakos, Petros

    2002-01-01

    A neural network methodology is developed in order to reconstruct the near-wall field in a turbulent flow by exploiting flow fields provided by direct numerical simulations. The results obtained from the neural network methodology are compared with the results obtained from prediction and reconstruction using proper orthogonal decomposition (POD). Using the property that the POD is equivalent to a specific linear neural network, a nonlinear neural network extension is presented. It is shown that for a relatively small additional computational cost, nonlinear neural networks provide improved reconstruction and prediction capabilities for the near-wall velocity fields. Based on these results, the advantages and drawbacks of both approaches are discussed, with an outlook toward the development of near-wall models for turbulence modeling and control.
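The equivalence noted in this record, POD as a specific linear neural network, can be made concrete: the leading left singular vectors of a snapshot matrix act as a linear autoencoder with tied encode/decode weights. A sketch on synthetic low-rank data (illustrative, not DNS fields):

```python
import numpy as np

rng = np.random.default_rng(1)
n_points, n_snapshots, rank = 200, 50, 3

# Synthetic snapshot matrix: 3 spatial modes with random amplitudes,
# plus small noise (illustrative stand-in for DNS velocity fields).
modes = rng.standard_normal((n_points, rank))
amplitudes = rng.standard_normal((rank, n_snapshots))
snapshots = modes @ amplitudes + 0.01 * rng.standard_normal((n_points, n_snapshots))

# POD via SVD: the leading left singular vectors are the POD modes.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
pod_basis = U[:, :rank]

# Encode/decode with tied weights -- exactly a linear autoencoder.
reconstruction = pod_basis @ (pod_basis.T @ snapshots)
rel_error = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
```

A nonlinear extension replaces the two matrix products with trainable nonlinear encode/decode maps, which is the generalization the record evaluates.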

  10. A planning of exploitation to electric systems approach considering environmental criteria Description of a multicriteria optimization paradigm

    International Nuclear Information System (INIS)

    Schweickardt, Gustavo Alejandro; Gimenez Alvarez, Juan Manuel

    2012-01-01

    This work presents a context and a model for approaching the problem of planning the exploitation of electric systems in the medium term, considering environmental criteria. A decision-making process based on a multicriteria paradigm is introduced. In the past, environmental criteria were barely considered or simply ignored. Due to the growing consciousness of the environmental impacts of productive processes, a new orientation to the problem is required: a greater integral quality of the planning process, instead of the search for an optimal solution based on minimum investment cost. The application model considers the total cost of energy production and the environmental impact produced by emissions of CO2, SO2 and NOx from thermal units, and is based on fuzzy-set decision-making to represent the uncertainties in the system decision variables and the satisfaction degree of solutions. The results obtained from the traditional and multicriteria models are finally presented.

  11. The flow equation approach to many-particle systems

    CERN Document Server

    Kehrein, Stefan; Fujimori, A; Varma, C; Steiner, F

    2006-01-01

    This self-contained monograph addresses the flow equation approach to many-particle systems. The flow equation approach consists of a sequence of infinitesimal unitary transformations and is conceptually similar to renormalization and scaling methods. Flow equations provide a framework for analyzing Hamiltonian systems where these conventional many-body techniques fail. The text first discusses the general ideas and concepts of the flow equation method. In a second part these concepts are illustrated with various applications in condensed matter theory, including strong-coupling problems and non-equilibrium systems. The monograph is accessible to readers familiar with graduate-level solid-state theory.

  12. Approaching multiphase flows from the perspective of computational fluid dynamics

    International Nuclear Information System (INIS)

    Banas, A.O.

    1992-01-01

    Thermalhydraulic simulation methodologies based on subchannel and porous-medium concepts are briefly reviewed and contrasted with the general approach of Computational Fluid Dynamics (CFD). An outline of the advanced CFD methods for single-phase turbulent flows is followed by a short discussion of the unified formulation of averaged equations for turbulent and multiphase flows. Some of the recent applications of CFD at Chalk River Laboratories are discussed, and the complementary role of CFD with regard to the established thermalhydraulic methods of analysis is indicated. (author). 8 refs

  13. A Hybrid Genetic Algorithm Approach for Optimal Power Flow

    Directory of Open Access Journals (Sweden)

    Sydulu Maheswarapu

    2011-08-01

    This paper puts forward a reformed hybrid genetic algorithm (GA)-based approach to the optimal power flow. In the approach followed here, continuous variables are designed using a real-coded GA and discrete variables are processed as binary strings. The outcomes are compared with many other methods such as the simple genetic algorithm (GA), adaptive genetic algorithm (AGA), differential evolution (DE), particle swarm optimization (PSO) and music-based harmony search (MBHS) on an IEEE 30-bus test bed with a total load of 283.4 MW. The proposed algorithm is found to offer the lowest fuel cost, and to be computationally faster, robust, superior and promising in its convergence characteristics.
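A bare-bones real-coded GA of the kind this record builds on, with tournament selection, arithmetic crossover on continuous variables, Gaussian mutation and elitism, can be sketched as follows. The toy quadratic objective is a stand-in for the fuel-cost function; nothing here reproduces the paper's hybrid scheme.

```python
import numpy as np

rng = np.random.default_rng(42)

def real_coded_ga(cost, lo, hi, pop_size=40, generations=150, sigma=0.1):
    """Minimal real-coded GA: binary tournament selection, arithmetic
    (blend) crossover, Gaussian mutation clipped to bounds, elitism."""
    pop = rng.uniform(lo, hi, size=(pop_size, lo.size))
    best, best_cost = pop[0].copy(), np.inf
    for _ in range(generations):
        costs = np.array([cost(ind) for ind in pop])
        k = int(np.argmin(costs))
        if costs[k] < best_cost:
            best, best_cost = pop[k].copy(), float(costs[k])
        # Binary tournament selection.
        i, j = rng.integers(pop_size, size=(2, pop_size))
        parents = np.where((costs[i] < costs[j])[:, None], pop[i], pop[j])
        # Arithmetic (blend) crossover between consecutive parents.
        alpha = rng.uniform(size=(pop_size, 1))
        children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
        # Gaussian mutation, clipped to the variable bounds.
        children += sigma * rng.standard_normal(children.shape)
        pop = np.clip(children, lo, hi)
        pop[0] = best  # elitism: the incumbent always survives
    return best, best_cost

# Toy stand-in for a fuel-cost objective: quadratic with optimum at 0.5.
toy_cost = lambda x: float(np.sum((x - 0.5) ** 2))
best, best_cost = real_coded_ga(toy_cost, lo=np.zeros(3), hi=np.ones(3))
```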

  14. Towards a dynamic assessment of raw materials criticality: Linking agent-based demand — With material flow supply modelling approaches

    International Nuclear Information System (INIS)

    Knoeri, Christof; Wäger, Patrick A.; Stamp, Anna; Althaus, Hans-Joerg; Weil, Marcel

    2013-01-01

    Emerging technologies such as information and communication, photovoltaic or battery technologies are expected to increase significantly the demand for scarce metals in the near future. The recently developed methods to evaluate the criticality of mineral raw materials typically provide a ‘snapshot’ of the criticality of a certain material at one point in time by using static indicators both for supply risk and for the impacts of supply restrictions. While allowing for insights into the mechanisms behind the criticality of raw materials, these methods cannot account for dynamic changes in products and/or activities over time. In this paper we propose a conceptual framework intended to overcome these limitations by including the dynamic interactions between different possible demand and supply configurations. The framework integrates an agent-based behaviour model, where demand emerges from individual agent decisions and interaction, into a dynamic material flow model representing the materials' stocks and flows. Within the framework, the environmental implications of substitution decisions are evaluated by applying life-cycle assessment methodology. The approach makes a first step towards a dynamic criticality assessment and will enhance the understanding of industrial substitution decisions and environmental implications related to critical metals. We discuss the potential and limitations of such an approach in contrast to state-of-the-art methods and how it might lead to criticality assessments tailored to the specific circumstances of single industrial sectors or individual companies. - Highlights: ► Current criticality assessment methods provide a ‘snapshot’ at one point in time. ► They do not account for dynamic interactions between demand and supply. ► We propose a conceptual framework to overcome these limitations. ► The framework integrates an agent-based behaviour model with a dynamic material flow model. ► The approach proposed makes

  15. Pilot plant for exploitation of geothermal waters

    Directory of Open Access Journals (Sweden)

    Stojiljković Dragan T.

    2006-01-01

    In Sijarinska spa, there are some 15 mineral and thermomineral springs that are already being used for therapeutic purposes. For the exploitation of heat energy, boring B-4 is very interesting. It is a boring of the closed type, with a water temperature of about 78°C and a flow rate of about 33 l/s. Waters with a flow rate of about 6 l/s are currently used for heating the Gejzer hotel, and waters with a flow rate of about 0.121 l/s for the pilot drying plant. The paper presents this pilot plant.

  16. Detailed debris flow hazard assessment in Andorra: A multidisciplinary approach

    Science.gov (United States)

    Hürlimann, Marcel; Copons, Ramon; Altimir, Joan

    2006-08-01

    In many mountainous areas, the rapid development of urbanisation and the limited space in the valley floors have created a need to construct buildings in zones potentially exposed to debris flow hazard. In these zones, a detailed and coherent hazard assessment is necessary to provide adequate urban planning. This article presents a multidisciplinary procedure to evaluate the debris flow hazard at a local scale. Our four-step approach was successfully applied to five torrent catchments in the Principality of Andorra, located in the Pyrenees. The first step consisted of a comprehensive geomorphologic and geologic analysis providing an inventory map of the past debris flows, a magnitude-frequency relationship, and a geomorphologic-geologic map. These data were necessary to determine the potential initiation zones and volumes of future debris flows for each catchment. A susceptibility map and different scenarios were the principal outcome of the first step, as well as essential input data for the second step, the runout analysis. A one-dimensional numerical code was applied to analyse the scenarios previously defined. First, the critical channel sections in the fan area were evaluated, then the maximum runout of the debris flows on the fan was studied, and finally simplified intensity maps for each defined scenario were established. The third step of our hazard assessment was the hazard zonation and the compilation of all the results from the two previous steps in a final hazard map. The base of this hazard map was the hazard matrix, which combined the intensity of the debris flow with its probability of occurrence and determined a certain hazard degree. The fourth step referred to hazard mitigation and included some recommendations for hazard reduction. In Andorra, this four-step approach is currently being applied to assess the debris flow hazard. The final hazard maps, at 1:2000 scale, provide an obligatory tool for local land use planning. Experience

  17. Tactile-Sight: A Sensory Substitution Device Based on Distance-Related Vibrotactile Flow

    Directory of Open Access Journals (Sweden)

    Leandro Cancar

    2013-06-01

Full Text Available Sensory substitution is a research field of increasing interest with regard to technical, applied and theoretical issues. Among the latter, it is of central interest to understand the form in which humans perceive the environment. Ecological psychology, among other approaches, proposes that we can detect higher-order informational variables (in the sense that they are defined over substantial spatial and temporal intervals) that specify our interaction with the environment. When using a vibrotactile sensory substitution device, it is reasonable to ask whether stimulation on the skin can be exploited to detect higher-order variables. Motivated by this question, a portable vibrotactile sensory substitution device was built, using distance-based information as a source and driving a large number of vibrotactile actuators (72 in the reported version, 120 maximum). The portable device was designed to explore real environments, allowing natural unrestricted movement for the user while providing contingent real-time vibrotactile information. Two preliminary experiments were performed. In the first, participants were asked to detect the time to contact of an approaching ball in a simulated (desktop) environment. Reasonable performance was observed in all experimental conditions, including the one with only tactile stimulation. In the second experiment, a portable version of the device was used in a real environment, where participants were asked to hit an approaching ball. Participants were able to coordinate their arm movements with the vibrotactile stimulation with appropriate timing. We conclude that vibrotactile flow can be generated by distance-based activation of the actuators and that this stimulation on the skin allows users to perceive time-to-contact-related environmental properties.
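The time-to-contact quantity and the distance-to-vibration mapping at the heart of such a device can be illustrated with a first-order estimate. This is a hedged sketch, not the authors' firmware; the function names and the linear intensity mapping are assumptions:

```python
# Illustrative sketch: first-order time-to-contact estimate from two
# successive range readings, plus a simple distance -> actuator-drive map
# (closer target -> stronger vibration). All names are hypothetical.

def time_to_contact(d_prev: float, d_curr: float, dt: float) -> float:
    """Estimate tau = distance / closing speed from two range samples
    (metres, seconds). Returns inf for a static or receding target."""
    closing_speed = (d_prev - d_curr) / dt
    if closing_speed <= 0.0:
        return float("inf")
    return d_curr / closing_speed

def vibration_level(distance: float, d_max: float = 3.0) -> float:
    """Map distance to a 0..1 actuator drive level."""
    return max(0.0, min(1.0, 1.0 - distance / d_max))

# A ball closing from 2.0 m to 1.8 m in 0.1 s approaches at 2 m/s,
# giving tau = 1.8 / 2.0 = 0.9 s.
print(time_to_contact(2.0, 1.8, 0.1))
```

The point of the ecological-psychology framing is that tau-like variables are defined over time intervals of the stimulation, not over a single snapshot.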

  18. The current status of theoretically based approaches to the prediction of the critical heat flux in flow boiling

    International Nuclear Information System (INIS)

    Weisman, J.

    1991-01-01

This paper reports on the phenomena governing the critical heat flux in flow boiling in ducts, which vary with the flow pattern. Separate models are needed for dryout in annular flow, wall overheating in plug or slug flow, and formation of a vapor blanket in dispersed flow. The major theories and their current status are described for the annular and dispersed regions. The need for development of the theoretical approach in the plug and slug flow region is indicated.

  19. Lattice Boltzmann approach for complex nonequilibrium flows.

    Science.gov (United States)

    Montessori, A; Prestininzi, P; La Rocca, M; Succi, S

    2015-10-01

    We present a lattice Boltzmann realization of Grad's extended hydrodynamic approach to nonequilibrium flows. This is achieved by using higher-order isotropic lattices coupled with a higher-order regularization procedure. The method is assessed for flow across parallel plates and three-dimensional flows in porous media, showing excellent agreement of the mass flow with analytical and numerical solutions of the Boltzmann equation across the full range of Knudsen numbers, from the hydrodynamic regime to ballistic motion.
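The baseline (Navier-Stokes-level) lattice Boltzmann scheme that the higher-order regularized method generalizes can be sketched compactly. The following is a minimal D2Q9 BGK model of body-force-driven flow between parallel plates with bounce-back walls; it is an illustrative sketch, not the higher-order isotropic lattices or the regularization procedure of the paper, and all parameters are invented:

```python
import numpy as np

# Minimal D2Q9 BGK lattice Boltzmann sketch: body-force-driven flow between
# two parallel plates (full-way bounce-back walls, periodic in x).

nx, ny, tau, Fx = 32, 33, 0.8, 1e-5          # kinematic viscosity nu = (tau - 0.5)/3
ex = np.array([0, 1, 0, -1, 0, 1, -1, -1, 1])
ey = np.array([0, 0, 1, 0, -1, 1, 1, -1, -1])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
opp = np.array([0, 3, 4, 1, 2, 7, 8, 5, 6])  # opposite directions

solid = np.zeros((nx, ny), dtype=bool)
solid[:, 0] = solid[:, -1] = True            # walls at y = 0 and y = ny - 1
fluid = ~solid

f = np.ones((9, nx, ny)) * w[:, None, None]  # start at rest with rho = 1

def equilibrium(rho, ux, uy):
    usq = ux**2 + uy**2
    eu = ex[:, None, None]*ux + ey[:, None, None]*uy
    return w[:, None, None] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*usq)

for step in range(2000):
    rho = f.sum(axis=0)
    ux = (ex[:, None, None]*f).sum(axis=0) / rho
    uy = (ey[:, None, None]*f).sum(axis=0) / rho
    feq = equilibrium(rho, ux, uy)
    f[:, fluid] -= (f[:, fluid] - feq[:, fluid]) / tau     # BGK collision
    f[:, fluid] += (3*w*ex)[:, None] * Fx                  # simple first-order forcing
    for i in range(9):                                     # streaming
        f[i] = np.roll(np.roll(f[i], ex[i], axis=0), ey[i], axis=1)
    f[:, solid] = f[opp][:, solid]                         # full-way bounce-back

rho = f.sum(axis=0)
profile = ((ex[:, None, None]*f).sum(axis=0) / rho)[0]     # u_x(y) at x = 0
print(profile[ny // 2] > profile[2] > 0.0)                 # parabolic-ish profile: True
```

The regularized, higher-order variant assessed in the paper replaces this D2Q9 stencil and plain BGK collision to remain accurate at finite Knudsen numbers.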

  20. A robust and accurate approach to computing compressible multiphase flow: Stratified flow model and AUSM+-up scheme

    International Nuclear Information System (INIS)

    Chang, Chih-Hao; Liou, Meng-Sing

    2007-01-01

In this paper, we propose a new approach to compute compressible multifluid equations. Firstly, a single-pressure compressible multifluid model based on the stratified flow model is proposed. The stratified flow model, which defines different fluids in separated regions, is shown to be amenable to the finite volume method. We can apply the conservation law to each subregion and obtain a set of balance equations. Secondly, the AUSM+ scheme, which was originally designed for compressible gas flows, is extended to solve compressible liquid flows. By introducing additional dissipation terms into the numerical flux, the new scheme, called AUSM+-up, can be applied to both liquid and gas flows. Thirdly, the contribution to the numerical flux due to interactions between different phases is taken into account and solved by the exact Riemann solver. We show that the proposed approach yields an accurate and robust method for computing compressible multiphase flows involving discontinuities, such as shock waves and fluid interfaces. Several one-dimensional test problems are used to demonstrate the capability of our method, including Ransom's water faucet problem and the air-water shock tube problem. Finally, several two-dimensional problems show the method's capability to capture fine details and complicated wave patterns in flows having large disparities in fluid density and velocities, such as interactions between a water shock wave and an air bubble, between an air shock wave and water column(s), and an underwater explosion

  1. Exploration of Exploitation Approaches of European Projects on ICT and Foreign Language Learning: the CEFcult project case

    OpenAIRE

    Rusman, Ellen; Rajagopal, Kamakshi; Stoyanov, Slavi; Van Maele, Jan

    2011-01-01

    Rusman, E., Rajagopal, K., Stoyanov, S., & Van Maele, J. (2011, 20-21 October). Exploration of Exploitation Approaches of European Projects on ICT and Foreign Language Learning: the CEFcult project case. Paper presented at the 4th International Conference ICT for Language Learning, Florence, Italy. Available at: http://www.pixel-online.net/ICT4LL2011/conferenceproceedings.php

  2. Numerical Evaluation and Optimization of Multiple Hydraulically Fractured Parameters Using a Flow-Stress-Damage Coupled Approach

    Directory of Open Access Journals (Sweden)

    Yu Wang

    2016-04-01

Full Text Available Multiple-factor analysis and optimization play a critical role in the ability to maximize the stimulated reservoir volume (SRV) and the success of economic shale gas production. In this paper, taking the typical continental naturally fractured silty laminae shale in China as an example, response surface methodology (RSM) was employed to optimize multiple hydraulic fracturing parameters to maximize the stimulated area, in combination with numerical modeling based on the coupled flow-stress-damage (FSD) approach. This paper demonstrates hydraulic fracturing effectiveness by defining two indices, namely the stimulated reservoir area (SRA) and the stimulated silty laminae area (SLA). Seven uncertain parameters, such as laminae thickness, spacing, dip angle, cohesion, internal friction angle (IFA), in situ stress difference (SD), and an operational parameter, injection rate (IR), with reasonable ranges based on the silty laminae shale of the Southeastern Ordos Basin, are used to fit a response of SRA and SLA as the objective function, and finally to identify the optimum design under the parameters that simultaneously maximize SRA and SLA. In addition, a sensitivity analysis of the influential factors is conducted for SRA and SLA. The aim of the study is to improve the ability to artificially control the fracturing network by means of multi-parameter optimization. This work promises to provide insights into the effective exploitation of unconventional shale gas reservoirs via optimization of the fracturing design for continental shale, Southeastern Ordos Basin, China.
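The core RSM step (fit a second-order response surface to sampled simulation outputs, then locate its stationary point) can be sketched with ordinary least squares. This is a hedged toy with two coded factors and a synthetic response, not the paper's seven-factor FSD data:

```python
import numpy as np

# Toy response-surface fit: two coded factors in [-1, 1] (imagine, e.g.,
# injection rate and stress difference), a synthetic "SRA-like" response
# with a known interior maximum at (0.3, -0.2), and a quadratic RSM model.

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))
x1, x2 = X[:, 0], X[:, 1]
y = 100 - 40*(x1 - 0.3)**2 - 25*(x2 + 0.2)**2 + rng.normal(0, 0.5, 30)

# design matrix for the full second-order model
A = np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
b0, b1, b2, b12, b11, b22 = coef

# stationary point of the fitted surface: solve grad(y_hat) = 0
H = np.array([[2*b11, b12], [b12, 2*b22]])
opt = np.linalg.solve(H, -np.array([b1, b2]))
print(opt)  # should land close to (0.3, -0.2)
```

In the paper's setting, the responses (SRA, SLA) come from the FSD-coupled fracturing simulations, and the two fitted surfaces are maximized simultaneously rather than one at a time.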

  3. Conceptual Model and Numerical Approaches for Unsaturated Zone Flow and Transport

    International Nuclear Information System (INIS)

    H.H. Liu

    2004-01-01

    The purpose of this model report is to document the conceptual and numerical models used for modeling unsaturated zone (UZ) fluid (water and air) flow and solute transport processes. This work was planned in ''Technical Work Plan for: Unsaturated Zone Flow Model and Analysis Report Integration'' (BSC 2004 [DIRS 169654], Sections 1.2.5, 2.1.1, 2.1.2 and 2.2.1). The conceptual and numerical modeling approaches described in this report are mainly used for models of UZ flow and transport in fractured, unsaturated rock under ambient conditions. Developments of these models are documented in the following model reports: (1) UZ Flow Model and Submodels; (2) Radionuclide Transport Models under Ambient Conditions. Conceptual models for flow and transport in unsaturated, fractured media are discussed in terms of their applicability to the UZ at Yucca Mountain. The rationale for selecting the conceptual models used for modeling of UZ flow and transport is documented. Numerical approaches for incorporating these conceptual models are evaluated in terms of their representation of the selected conceptual models and computational efficiency; and the rationales for selecting the numerical approaches used for modeling of UZ flow and transport are discussed. This report also documents activities to validate the active fracture model (AFM) based on experimental observations and theoretical developments. The AFM is a conceptual model that describes the fracture-matrix interaction in the UZ of Yucca Mountain. These validation activities are documented in Section 7 of this report regarding use of an independent line of evidence to provide additional confidence in the use of the AFM in the UZ models. The AFM has been used in UZ flow and transport models under both ambient and thermally disturbed conditions. Developments of these models are documented

  4. A non-statistical regularization approach and a tensor product decomposition method applied to complex flow data

    Science.gov (United States)

    von Larcher, Thomas; Blome, Therese; Klein, Rupert; Schneider, Reinhold; Wolf, Sebastian; Huber, Benjamin

    2016-04-01

Handling high-dimensional data sets, such as those occurring e.g. in turbulent flows or in multiscale behaviour of certain types in the Geosciences, is one of the big challenges in numerical analysis and scientific computing. A suitable solution is to represent those large data sets in an appropriate compact form. In this context, tensor product decomposition methods currently emerge as an important tool. One reason is that these methods often enable one to attack high-dimensional problems successfully; another is that they allow for very compact representations of large data sets. We follow the novel Tensor-Train (TT) decomposition method to support the development of improved understanding of the multiscale behavior and the development of compact storage schemes for solutions of such problems. One long-term goal of the project is the construction of a self-consistent closure for Large Eddy Simulations (LES) of turbulent flows that explicitly exploits the tensor product approach's capability of capturing self-similar structures. Secondly, we focus on a mixed deterministic-stochastic subgrid scale modelling strategy currently under development for application in Finite Volume Large Eddy Simulation (LES) codes. Advanced methods of time series analysis for the data-based construction of stochastic models with inherently non-stationary statistical properties, and concepts of information theory based on a modified Akaike information criterion and on the Bayesian information criterion for model discrimination, are used to construct surrogate models for the non-resolved flux fluctuations. Vector-valued auto-regressive models with external influences form the basis for the modelling approach [1], [2], [4]. Here, we present the reconstruction capabilities of the two modelling approaches tested against 3D turbulent channel flow data computed by direct numerical simulation (DNS) for an incompressible, isothermal fluid at Reynolds number Reτ = 590 (computed by [3]). References [1] I
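The TT decomposition mentioned above can be sketched with the classic TT-SVD construction: successive truncated SVDs of tensor unfoldings yield a train of three-way cores. This is a minimal illustrative implementation, not the project's actual pipeline or truncation strategy:

```python
import numpy as np

# Minimal TT-SVD sketch: decompose a tensor into tensor-train cores
# G_k of shape (r_{k-1}, n_k, r_k) via successive truncated SVDs,
# then contract the cores back to verify the reconstruction.

def tt_svd(tensor, eps=1e-10):
    shape = tensor.shape
    cores, r_prev = [], 1
    C = tensor.reshape(shape[0], -1)             # first unfolding (r_0 = 1)
    for k in range(len(shape) - 1):
        U, S, Vt = np.linalg.svd(C, full_matrices=False)
        r = max(1, int((S > eps * S[0]).sum()))  # drop negligible singular values
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        C = (S[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(C.reshape(r_prev, shape[-1], 1))
    return cores

def tt_to_full(cores):
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=(-1, 0))
    return full.reshape([c.shape[1] for c in cores])

# rank-1 test tensor (outer product), so every TT rank collapses to 1
T = np.einsum('i,j,k->ijk', np.arange(2.0, 5.0),
              np.arange(1.0, 4.0), np.arange(1.0, 5.0))
cores = tt_svd(T)
print([c.shape for c in cores], np.allclose(tt_to_full(cores), T))
```

For genuinely self-similar data the singular-value truncation is where the compression comes from: the storage drops from the product of the mode sizes to a sum of small core sizes.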

  5. Exploiting Flexibility in Coupled Electricity and Natural Gas Markets: A Price-Based Approach

    DEFF Research Database (Denmark)

    Ordoudis, Christos; Delikaraoglou, Stefanos; Pinson, Pierre

    2017-01-01

Natural gas-fired power plants (NGFPPs) are considered a highly flexible component of the energy system and can facilitate the large-scale integration of intermittent renewable generation. Therefore, it is necessary to improve the coordination between electric power and natural gas systems. Considering a market-based coupling of these systems, we introduce a decision support tool that increases market efficiency in the current setup, where day-ahead and balancing markets are cleared sequentially. The proposed approach relies on the optimal adjustment of natural gas prices to modify the scheduling…

  6. Risk assessment by dynamic representation of vulnerability, exploitation, and impact

    Science.gov (United States)

    Cam, Hasan

    2015-05-01

Assessing and quantifying cyber risk accurately in real time is essential to providing security and mission assurance in any system and network. This paper presents a modeling and dynamic analysis approach to assessing the cyber risk of a network in real time by dynamically representing its vulnerabilities, exploitations, and impact using integrated Bayesian network and Markov models. Given the set of vulnerabilities detected by a vulnerability scanner in a network, this paper addresses how its risk can be assessed by estimating in real time the exploit likelihood and the impact of vulnerability exploitation on the network, based on real-time observations and measurements over the network. The dynamic representation of the network in terms of its vulnerabilities, sensor measurements, and observations is constructed dynamically using the integrated Bayesian network and Markov models. The transition rates of outgoing and incoming links of states in hidden Markov models are used in determining the exploit likelihood and impact of attacks, whereas emission rates help quantify the attack states of vulnerabilities. Simulation results show the quantification and evolving risk scores over time for individual and aggregated vulnerabilities of a network.
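The evolving-risk-score idea can be illustrated with a toy discrete-time Markov chain over attack states; this is a hedged sketch, not the paper's integrated Bayesian network / hidden Markov formulation, and the transition probabilities are invented:

```python
import numpy as np

# Toy Markov-chain risk tracker: propagate a belief over vulnerability
# states and record the running probability of the "exploited" state.
# Transition probabilities are illustrative only.

states = ["safe", "probed", "exploited"]
P = np.array([
    [0.90, 0.10, 0.00],   # safe     -> mostly stays safe
    [0.30, 0.50, 0.20],   # probed   -> may be remediated or exploited
    [0.00, 0.00, 1.00],   # exploited is absorbing in this toy model
])

belief = np.array([1.0, 0.0, 0.0])     # start fully in "safe"
risk_over_time = []
for t in range(10):
    belief = belief @ P                # one observation/update step
    risk_over_time.append(belief[2])   # running exploit likelihood

print(round(risk_over_time[-1], 3))
```

In the paper's setting the transition and emission rates are instead estimated from real-time sensor measurements, so the risk trajectory responds to what is actually observed on the network.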

  7. Synthetic biology approaches: Towards sustainable exploitation of marine bioactive molecules.

    Science.gov (United States)

    Seghal Kiran, G; Ramasamy, Pasiyappazham; Sekar, Sivasankari; Ramu, Meenatchi; Hassan, Saqib; Ninawe, A S; Selvin, Joseph

    2018-06-01

The discovery of genes responsible for the production of bioactive metabolites via metabolic pathways, combined with advances in synthetic biology tools, has allowed the establishment of numerous microbial cell factories, for instance yeast cell factories, for the manufacture of highly useful metabolites from renewable biomass. Genome mining and metagenomics are two platforms that provide baseline data for the reconstruction of genomes and metabolomes, which underpins the development of synthetic/semi-synthetic genomes for marine natural products discovery. Engineered biofilms are being innovated on the synthetic biology platform using genetic circuits and cell signalling systems, such as repressilators controlling biofilm formation. Recombineering is a process of homologous-recombination-mediated genetic engineering that includes the insertion, deletion, or modification of any sequence with high specificity. Although this discipline is considered new to the scientific domain, the field has now developed into a promising endeavour for the sustainable exploitation of marine natural products. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Control volume based modelling of compressible flow in reciprocating machines

    DEFF Research Database (Denmark)

    Andersen, Stig Kildegård; Thomsen, Per Grove; Carlsen, Henrik

    2004-01-01

An approach to modelling unsteady compressible flow that is primarily one dimensional is presented. The approach was developed for creating distributed models of machines with reciprocating pistons, but it is not limited to this application. The approach is based on the integral form of the unsteady conservation laws for mass, energy, and momentum applied to a staggered mesh consisting of two overlapping strings of control volumes. Loss mechanisms can be included directly in the governing equations of models by including them as terms in the conservation laws. Heat transfer, flow friction, and multidimensional effects must be calculated using empirical correlations; correlations for steady state flow can be used as an approximation. A transformation that assumes ideal gas is presented for transforming equations for masses and energies in control volumes into the corresponding pressures and temperatures.

  9. Legal Approaches to Combating the Exploitation of Third-Country National Seasonal Workers

    NARCIS (Netherlands)

    Rijken, Conny

    2015-01-01

The Directive on Seasonal Workers is aimed at combating exploitative practices vis-à-vis seasonal workers from outside the EU. After a thorough analysis of the conditions under which practices can be qualified as exploitative, this article assesses the extent to which the directive is equipped to combat such practices.

  10. New modeling and experimental approaches for characterization of two-phase flow interfacial structure

    International Nuclear Information System (INIS)

    Ishii, Mamoru; Sun, Xiaodong

    2004-01-01

This paper presents new experimental and modeling approaches to characterizing interfacial structures in gas-liquid two-phase flow. For the experiments, two objective approaches are developed to identify flow regimes and to obtain local interfacial structure data. First, a global measurement technique using a non-intrusive ring-type impedance void-meter and a self-organizing neural network is presented to identify the "one-dimensional" flow regimes. In the application of this measurement technique, two methods are discussed, namely, one based on the probability density function of the impedance probe measurement (PDF input method) and the other based on the sorted impedance signals, which is essentially the cumulative probability distribution function of the impedance signals (instantaneous direct signal input method). In the latter method, the identification can be made nearly instantaneously, since the required signals can be acquired over a very short time period. In addition, a double-sensor conductivity probe can also be used to obtain "local" flow regimes by using the instantaneous direct signal input method with the bubble chord length information. Furthermore, a newly designed conductivity probe with multiple double-sensor heads is proposed to obtain "two-dimensional" flow regimes across the flow channel. Secondly, a state-of-the-art four-sensor conductivity probe technique has been developed to obtain detailed local interfacial structure information. The four-sensor conductivity probe accommodates the double-sensor probe capability and can be applied in a wide range of flow regimes spanning from bubbly to churn-turbulent flows. The signal processing scheme is developed such that it categorizes the acquired parameters into two groups based on bubble chord length information. Furthermore, for the modeling of the interfacial structure characterization, the interfacial area transport equation proposed earlier has been studied to provide a dynamic and

  11. Volumetric velocimetry for fluid flows

    Science.gov (United States)

    Discetti, Stefano; Coletti, Filippo

    2018-04-01

    In recent years, several techniques have been introduced that are capable of extracting 3D three-component velocity fields in fluid flows. Fast-paced developments in both hardware and processing algorithms have generated a diverse set of methods, with a growing range of applications in flow diagnostics. This has been further enriched by the increasingly marked trend of hybridization, in which the differences between techniques are fading. In this review, we carry out a survey of the prominent methods, including optical techniques and approaches based on medical imaging. An overview of each is given with an example of an application from the literature, while focusing on their respective strengths and challenges. A framework for the evaluation of velocimetry performance in terms of dynamic spatial range is discussed, along with technological trends and emerging strategies to exploit 3D data. While critical challenges still exist, these observations highlight how volumetric techniques are transforming experimental fluid mechanics, and that the possibilities they offer have just begun to be explored.

  12. Parametric Approach to Assessing Performance of High-Lift Device Active Flow Control Architectures

    Directory of Open Access Journals (Sweden)

    Yu Cai

    2017-02-01

    Full Text Available Active Flow Control is at present an area of considerable research, with multiple potential aircraft applications. While the majority of research has focused on the performance of the actuators themselves, a system-level perspective is necessary to assess the viability of proposed solutions. This paper demonstrates such an approach, in which major system components are sized based on system flow and redundancy considerations, with the impacts linked directly to the mission performance of the aircraft. Considering the case of a large twin-aisle aircraft, four distinct active flow control architectures that facilitate the simplification of the high-lift mechanism are investigated using the demonstrated approach. The analysis indicates a very strong influence of system total mass flow requirement on architecture performance, both for a typical mission and also over the entire payload-range envelope of the aircraft.

  13. An Event-driven, Value-based, Pull Systems Engineering Scheduling Approach

    Science.gov (United States)

    2012-03-01

combining a services approach to systems engineering with a kanban-based scheduling system. It provides the basis for validating the approach with agent-based simulations. Keywords: systems engineering; systems engineering process; lean; kanban; process simulation. I. INTRODUCTION AND BACKGROUND … approaches [8], [9], we are investigating the use of flow-based pull scheduling techniques (kanban systems) in a rapid response development
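The flow-based pull-scheduling idea behind a kanban system can be illustrated with a tiny simulation: work is pulled into a WIP-limited stage only when a slot frees up, rather than pushed on a fixed schedule. This is an invented toy, not the report's agent-based simulation:

```python
import random
from collections import deque

# Toy kanban pull simulation: 20 tasks flow through one WIP-limited stage.
# A backlog task is pulled only when an in-progress slot becomes free.
# Parameters (WIP limit, task effort) are illustrative.

random.seed(1)
WIP_LIMIT = 3
backlog = deque(range(20))          # queued engineering tasks
in_progress = {}                    # task id -> remaining effort (days)
done = []

day = 0
while backlog or in_progress:
    # pull: fill free kanban slots from the backlog
    while backlog and len(in_progress) < WIP_LIMIT:
        in_progress[backlog.popleft()] = random.randint(1, 5)
    # work one day on every in-progress task
    for task in list(in_progress):
        in_progress[task] -= 1
        if in_progress[task] == 0:
            del in_progress[task]
            done.append((task, day))
    day += 1

print(f"all {len(done)} tasks finished in {day} days")
```

The WIP limit is the "kanban" mechanism: it caps concurrent work so that completion rate, not a push schedule, governs when new tasks start.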

  14. Cryogenic parallel, single phase flows: an analytical approach

    Science.gov (United States)

    Eichhorn, R.

    2017-02-01

Managing the cryogenic flows inside a state-of-the-art accelerator cryomodule has become a demanding endeavour: in order to build highly efficient modules, all heat transfers are usually intercepted at various temperatures. For a multi-cavity module operated at 1.8 K, this requires intercepts at 4 K and at 80 K at different locations, with sometimes strongly varying heat loads, which for simplicity are operated in parallel. This contribution will describe an analytical approach based on optimization theories.

  15. CFD model of diabatic annular two-phase flow using the Eulerian–Lagrangian approach

    International Nuclear Information System (INIS)

    Li, Haipeng; Anglart, Henryk

    2015-01-01

Highlights: • A CFD model of annular two-phase flow with an evaporating liquid film has been developed. • A two-dimensional liquid film model is developed assuming that the liquid film is sufficiently thin. • The liquid film model is coupled to the gas core flow, which is represented using the Eulerian–Lagrangian approach. - Abstract: A computational fluid dynamics (CFD) model of annular two-phase flow with an evaporating liquid film has been developed based on the Eulerian–Lagrangian approach, with the objective of predicting the dryout occurrence. Because the liquid film is sufficiently thin in diabatic annular flow and at pre-dryout conditions, it is assumed that the flow in the wall-normal direction can be neglected, and that the spatial gradients of the dependent variables tangential to the wall are negligible compared to those in the wall-normal direction. Subsequently, the transport equations of mass, momentum and energy for the liquid film are integrated in the wall-normal direction to obtain two-dimensional equations, with all the liquid film properties depth-averaged. The liquid film model is coupled to the gas core flow, which is currently represented using the Eulerian–Lagrangian technique. The mass, momentum and energy transfers between the liquid film, gas, and entrained droplets have been taken into account. The resultant unified model for annular flow has been applied to steam–water flow at conditions typical of a Boiling Water Reactor (BWR). The simulation results for the liquid film flow rate show favorable agreement with the experimental data, with the potential to predict the dryout occurrence based on criteria of critical film thickness or critical film flow rate.

  16. Final Report, “Exploiting Global View for Resilience”

    Energy Technology Data Exchange (ETDEWEB)

    Chien, Andrew [Univ. of Chicago, IL (United States)

    2017-03-29

Final technical report for the "Exploiting Global View for Resilience" project. The GVR project aims to create a new approach to portable, resilient applications. The GVR approach builds on a global view data model, adding versioning (multi-version), user control of timing and rate (multi-stream), and flexible cross-layer error signalling and recovery. With a versioned array as a portable abstraction, GVR enables application programmers to exploit deep scientific and application code insights to manage resilience (and its overhead) in a flexible, portable fashion.

  17. Cloud Based Earth Observation Data Exploitation Platforms

    Science.gov (United States)

    Romeo, A.; Pinto, S.; Loekken, S.; Marin, A.

    2017-12-01

In the last few years data produced daily by several private and public Earth Observation (EO) satellites reached the order of tens of Terabytes, representing for scientists and commercial application developers both a big opportunity for their exploitation and a challenge for their management. New IT technologies, such as Big Data and cloud computing, enable the creation of web-accessible data exploitation platforms, which offer to scientists and application developers the means to access and use EO data in a quick and cost effective way. RHEA Group is particularly active in this sector, supporting the European Space Agency (ESA) in the Exploitation Platforms (EP) initiative, developing technology to build multi-cloud platforms for the processing and analysis of Earth Observation data, and collaborating with larger European initiatives such as the European Plate Observing System (EPOS) and the European Open Science Cloud (EOSC). An EP is a virtual workspace, providing a user community with access to (i) large volumes of data, (ii) an algorithm development and integration environment, (iii) processing software and services (e.g. toolboxes, visualization routines), (iv) computing resources, (v) collaboration tools (e.g. forums, wiki, etc.). When an EP is dedicated to a specific theme, it becomes a Thematic Exploitation Platform (TEP). Currently, ESA has seven TEPs in a pre-operational phase dedicated to geo-hazards monitoring and prevention, coastal zones, forestry areas, hydrology, polar regions, urban areas and food security. On the technology development side, solutions like the multi-cloud EO data processing platform provide the technology to integrate ICT resources and EO data from different vendors in a single platform. In particular it offers (i) multi-cloud data discovery, (ii) multi-cloud data management and access and (iii) multi-cloud application deployment. 
This platform has been demonstrated with the EGI Federated Cloud, Innovation Platform Testbed Poland

  18. Quantifying ozone uptake at the canopy level of spruce, pine and larch trees at the alpine timberline: an approach based on sap flow measurement

    International Nuclear Information System (INIS)

    Wieser, G.; Matyssek, R.; Koestner, B.; Oberhuber, W.

    2003-01-01

Sap-flow-based measurements can be used to estimate ozone uptake at whole-tree and stand levels. - Micro-climatic and ambient ozone data were combined with measurements of sap flow through tree trunks in order to estimate whole-tree ozone uptake of adult Norway spruce (Picea abies), cembran pine (Pinus cembra), and European larch (Larix decidua) trees. Sap flow was monitored by means of the heat balance approach in two trees of each species during the growing season of 1998. In trees making up the stand canopy, the ozone uptake by evergreen foliages was significantly higher than by deciduous ones, when scaled to the ground area. However, if expressed per unit of whole-tree foliage area, the ozone flux through the stomata into the needle mesophyll was 1.09, 1.18 and 1.40 nmol m^-2 s^-1 in Picea abies, Pinus cembra and Larix decidua, respectively. These fluxes are consistent with findings from measurements of needle gas exchange published for the same species at the study site. It is concluded that the sap flow-based approach offers an inexpensive, spatially and temporally integrating way of estimating ozone uptake at the whole-tree and stand level, intrinsically covering the effect of boundary layers on ozone flux.
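The chain of reasoning behind the sap-flow approach (transpiration gives a canopy conductance to water vapour, which is rescaled by the O3/H2O diffusivity ratio and multiplied by ambient ozone) can be sketched as a back-of-envelope calculation. The input numbers and function names below are illustrative assumptions, not the paper's data:

```python
# Back-of-envelope sketch of the sap-flow ozone-uptake estimate.
# Transpiration E (from sap flow) -> canopy conductance g_c -> stomatal
# O3 flux, using the O3/H2O molecular diffusivity ratio (~0.61).

DIFF_RATIO_O3_H2O = 0.61   # ratio of molecular diffusivities, O3 vs H2O

def canopy_conductance(E, vpd, pressure=87.0):
    """g_c (mol m^-2 s^-1) from transpiration E (mmol m^-2 s^-1) and
    vapour pressure deficit vpd (kPa) at air pressure (kPa; ~87 kPa is
    an assumed timberline value)."""
    return (E / 1000.0) * pressure / vpd

def ozone_uptake(E, vpd, o3_ppb, pressure=87.0):
    """Stomatal O3 flux (nmol m^-2 s^-1), assuming stomata-dominated uptake
    and treating ppb as nmol mol^-1."""
    g_c = canopy_conductance(E, vpd, pressure)
    return DIFF_RATIO_O3_H2O * g_c * o3_ppb

# illustrative inputs: E = 0.5 mmol m^-2 s^-1, VPD = 1.0 kPa, 50 ppb ozone
print(ozone_uptake(0.5, 1.0, 50.0))
```

With these invented inputs the estimate comes out near 1.3 nmol m^-2 s^-1, the same order as the per-foliage-area fluxes reported above.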

  19. Exploiting Quantum Resonance to Solve Combinatorial Problems

    Science.gov (United States)

    Zak, Michail; Fijany, Amir

    2006-01-01

    Quantum resonance would be exploited in a proposed quantum-computing approach to the solution of combinatorial optimization problems. In quantum computing in general, one takes advantage of the fact that an algorithm cannot be decoupled from the physical effects available to implement it. Prior approaches to quantum computing have involved exploitation of only a subset of known quantum physical effects, notably including parallelism and entanglement, but not including resonance. In the proposed approach, one would utilize the combinatorial properties of tensor-product decomposability of unitary evolution of many-particle quantum systems for physically simulating solutions to NP-complete problems (a class of problems that are intractable with respect to classical methods of computation). In this approach, reinforcement and selection of a desired solution would be executed by means of quantum resonance. Classes of NP-complete problems that are important in practice and could be solved by the proposed approach include planning, scheduling, search, and optimal design.

  20. Variations in the Flow Approach to CFCLP-TC for Multiobjective Supply Chain Design

    Directory of Open Access Journals (Sweden)

    Minor P. Hertwin

    2014-01-01

Full Text Available We review a problem for the design of supply chains called the Capacitated Fixed Cost Facility Location Problem with Transportation Choices (CFCLP-TC). The problem is based on a two-echelon production network with multiple plants, a set of potential distribution centers, and customers. The problem is formulated as an optimization model with two objective functions based on time and cost. This paper proposes three changes to the original model in order to compare the sets of efficient solutions and the computational time required to obtain them. The main contribution of this paper is to extend the existing literature by incorporating approaches for the supply of product to customers through multiple sources, the direct flow between plants and customers without necessarily removing the distribution centers, and the product flow between distribution centers. From these approaches, we generate mathematical programming models and propose to solve them through the epsilon-constraint approach for generating Pareto fronts, and thus compare each of these approaches with the original model. The models are implemented in GAMS and solved with CPLEX.
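The epsilon-constraint idea used to generate the Pareto fronts can be illustrated on a toy bi-objective problem: minimize one objective (cost) while capping the other (time) at epsilon, then sweep epsilon. The paper solves its models in GAMS/CPLEX; the two-route shipment below is an invented example scanned on a grid rather than solved as a formal program:

```python
import numpy as np

# Epsilon-constraint sketch: ship 10 units over two routes; route 1 is
# cheap but slow, route 2 fast but expensive. Minimize cost subject to
# total time <= epsilon, for a sequence of epsilon values.

cost = np.array([1.0, 3.0])     # cost per unit on route 1, route 2
time = np.array([4.0, 1.0])     # time per unit on route 1, route 2
demand = 10.0

x1 = np.linspace(0.0, demand, 1001)   # route-1 quantity; route 2 ships the rest
x2 = demand - x1
total_cost = cost[0]*x1 + cost[1]*x2
total_time = time[0]*x1 + time[1]*x2

front = []
for eps in [40.0, 30.0, 20.0, 10.0]:  # progressively tighter time cap
    feasible = total_time <= eps
    best = np.argmin(np.where(feasible, total_cost, np.inf))
    front.append((eps, total_cost[best], total_time[best]))

for eps, c_val, t_val in front:
    print(f"eps={eps:5.1f} -> cost {c_val:6.2f}, time {t_val:5.2f}")
```

Each epsilon yields one efficient point, and the collected (cost, time) pairs trace the Pareto front; tightening the time cap forces cost up, exactly the trade-off the paper's fronts visualize.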

  1. Statistical characteristics of falling-film flows: A synergistic approach at the crossroads of direct numerical simulations and experiments

    Science.gov (United States)

    Charogiannis, Alexandros; Denner, Fabian; van Wachem, Berend G. M.; Kalliadasis, Serafim; Markides, Christos N.

    2017-12-01

    We scrutinize the statistical characteristics of liquid films flowing over an inclined planar surface based on film height and velocity measurements that are recovered simultaneously by application of planar laser-induced fluorescence (PLIF) and particle tracking velocimetry (PTV), respectively. Our experiments are complemented by direct numerical simulations (DNSs) of liquid films simulated for different conditions so as to expand the parameter space of our investigation. Our statistical analysis builds upon a Reynolds-like decomposition of the time-varying flow rate that was presented in our previous research effort on falling films in [Charogiannis et al., Phys. Rev. Fluids 2, 014002 (2017), 10.1103/PhysRevFluids.2.014002], and which reveals that the dimensionless ratio of the unsteady term to the mean flow rate increases linearly with the product of the coefficients of variation of the film height and bulk velocity, as well as with the ratio of the Nusselt height to the mean film height, both at the same upstream PLIF/PTV measurement location. Based on relations that are derived to describe these results, a methodology for predicting the mass-transfer capability (through the mean and standard deviation of the bulk flow speed) of these flows is developed in terms of the mean and standard deviation of the film thickness and the mean flow rate, which are considerably easier to obtain experimentally than velocity profiles. The errors associated with these predictions are estimated at ≈1.5% and 8%, respectively, in the experiments and at <1% and <2%, respectively, in the DNSs. Beyond the generation of these relations for the prediction of important film flow characteristics based on simple flow information, the data provided can be used to design improved heat- and mass-transfer equipment, reactors, or other process operation units which exploit film flows, but also to develop and validate multiphase flow models in other physical and technological settings.

  2. The Theory of Exploitation as the Unequal Exchange of Labour

    OpenAIRE

    Veneziani, Roberto; Yoshihara, Naoki

    2016-01-01

    This paper analyses the normative and positive foundations of the theory of exploitation as the unequal exchange of labour (UEL). The key intuitions behind all of the main approaches to UEL exploitation are explicitly analysed as a series of formal claims in a general economic environment. It is then argued that these intuitions can be captured by one fundamental axiom - called Labour Exploitation - which defines the basic domain of all UEL exploitation forms and identifies the formal and the...

  3. The theory of exploitation as the unequal exchange of labour

    OpenAIRE

    Veneziani, Roberto; Yoshihara, Naoki

    2017-01-01

    This paper analyses the normative and positive foundations of the theory of exploitation as the unequal exchange of labour (UEL). The key intuitions behind all of the main approaches to UEL exploitation are explicitly analysed as a series of formal claims in a general economic environment. It is then argued that these intuitions can be captured by one fundamental axiom - called Labour Exploitation - which defines the basic domain of all UEL exploitation forms and identifies the formal and the...

  4. Phishing Detection: Analysis of Visual Similarity Based Approaches

    Directory of Open Access Journals (Sweden)

    Ankit Kumar Jain

    2017-01-01

    Full Text Available Phishing is one of the major problems faced by the cyber-world and leads to financial losses for both industries and individuals. Detection of a phishing attack with high accuracy has always been a challenging issue. At present, visual similarity based techniques are very useful for detecting phishing websites efficiently. A phishing website looks very similar in appearance to its corresponding legitimate website in order to deceive users into believing that they are browsing the correct website. Visual similarity based phishing detection techniques utilise a feature set such as text content, text format, HTML tags, Cascading Style Sheets (CSS), images, and so forth, to make the decision. These approaches compare the suspicious website with the corresponding legitimate website by using various features, and if the similarity is greater than the predefined threshold value then it is declared phishing. This paper presents a comprehensive analysis of phishing attacks, their exploitation, some of the recent visual similarity based approaches for phishing detection, and a comparative study of them. Our survey provides a better understanding of the problem, the current solution space, and the scope of future research to deal with phishing attacks efficiently using visual similarity based approaches.
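
    The threshold decision described in this record can be sketched in a few lines. This is a minimal illustration, not any of the surveyed systems: the feature set is reduced to HTML tag names compared with Jaccard similarity, and `tag_set`, `is_phishing`, and the 0.8 threshold are assumptions made for the example.

```python
import re

def tag_set(html):
    """Crude tag-name extraction; a real detector would use an HTML parser."""
    return set(re.findall(r"<\s*([a-zA-Z][a-zA-Z0-9]*)", html))

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 1.0

def is_phishing(suspicious_html, legitimate_html, threshold=0.8):
    """Flag the suspicious page when it looks 'too similar' to the target
    (the check that the page is not actually the legitimate site is omitted)."""
    return jaccard(tag_set(suspicious_html), tag_set(legitimate_html)) >= threshold

legit = "<html><body><form><input><input><button></body></html>"
clone = "<html><body><form><input><input><button><img></body></html>"
print(is_phishing(clone, legit))   # high structural overlap -> True
```

    Production systems combine many such feature similarities (text, CSS, screenshots) before comparing against the threshold.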

  5. Synthesis of Riboflavines, Quinoxalinones and Benzodiazepines through Chemoselective Flow Based Hydrogenations

    Directory of Open Access Journals (Sweden)

    Marcus Baumann

    2014-07-01

    Full Text Available Robust chemical routes towards valuable bioactive entities such as riboflavines, quinoxalinones and benzodiazepines are described. These make use of modern flow hydrogenation protocols enabling the chemoselective reduction of nitro group containing building blocks in order to rapidly generate the desired amine intermediates in situ. In order to exploit the benefits of continuous processing the individual steps were transformed into a telescoped flow process delivering selected benzodiazepine products on scales of 50 mmol and 120 mmol respectively.

  6. Transition to turbulence in pulsatile flow through heart valves--a modified stability approach.

    Science.gov (United States)

    Bluestein, D; Einav, S

    1994-11-01

    The presence of turbulence in the cardiovascular system is generally an indication of some type of abnormality. Most cardiologists agree that turbulence near a valve indicates either valvular stenosis or regurgitation, depending on the phase of its occurrence during the cardiac cycle. As no satisfying analytical solutions of the stability of turbulent pulsatile flow exist, accurate, unbiased flow stability criteria are needed for the identification of turbulence initiation. The traditional approach uses a stability diagram based upon the stability of a plane Stokes layer where alpha (the Womersley parameter) is defined by the fundamental heart rate. We suggest a modified approach that involves the decomposition of alpha into its frequency components, where alpha is derived from the preferred modes induced on the flow by interaction between flow pulsation and the valve. Transition to turbulence in pulsatile flow through heart valves was investigated in a pulse duplicator system using three polymer aortic valve models representing a normal aortic valve, a 65 percent stenosed valve and a 90 percent severely stenosed valve, and two mitral valve models representing a normal mitral valve and a 65 percent stenosed valve. Valve characteristics were closely simulated so as to mimic the conditions that alter flow stability and initiate turbulent flow conditions. Valvular velocity waveforms were measured by laser Doppler anemometry (LDA). Spectral analysis was performed on velocity signals at selected spatial and temporal points to produce the power density spectra, in which the preferred frequency modes were identified. The spectra obtained during the rapid closure stage of the valves were found to be governed by the stenosis geometry. A shift toward higher dominant frequencies was correlated with the severity of the stenosis.
According to the modified approach, stability of the flow is represented by a cluster of points, each corresponding to a specific dominant mode apparent
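
    The spectral step described above (power density spectra of LDA velocity signals, with the preferred frequency modes identified as the dominant peaks) can be sketched as follows; the synthetic 60 Hz component is purely illustrative, standing in for a measured dominant mode.

```python
import numpy as np

fs = 1000.0                          # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
# Hypothetical velocity trace: mean flow + a 60 Hz preferred mode + a weaker mode
u = 1.0 + 0.5 * np.sin(2 * np.pi * 60 * t) + 0.1 * np.sin(2 * np.pi * 200 * t)

u = u - u.mean()                     # remove the mean flow before estimating the PSD
psd = np.abs(np.fft.rfft(u)) ** 2    # (unnormalized) power density spectrum
freqs = np.fft.rfftfreq(len(u), d=1.0 / fs)

dominant = freqs[np.argmax(psd)]     # the preferred frequency mode
print(dominant)                      # -> 60.0
```

    In the modified stability approach, each such dominant mode yields its own Womersley parameter, giving the cluster of points on the stability diagram.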

  7. Exploiting Software Tool Towards Easier Use And Higher Efficiency

    Science.gov (United States)

    Lin, G. H.; Su, J. T.; Deng, Y. Y.

    2006-08-01

    In developing countries, it is very important to make maximum use of data from instruments built in-house. This relates not only to maximizing the science return on earlier investment, with its deep accumulation of experience in every aspect, but also to science output itself. Based on this idea, we are developing a software tool (called THDP: Tool of Huairou Data Processing). It handles a series of issues that necessarily arise in data processing. This paper discusses its design purpose, functions, methods and special features. The primary vehicle for general data interpretation is a set of data visualization and interactive techniques. In the software we employed an object-oriented approach, which suits this vehicle; it is imperative that the approach provide not only the required functions, but do so in as convenient a fashion as possible. As a result, the software makes data processing easier to learn for beginners and more convenient to extend for experienced users, and it greatly increases efficiency in every phase, including analysis, parameter adjustment and result display. Within the virtual observatory framework, developing countries should study more of the new related technologies that can advance the ability and efficiency of scientific research, like the software we are developing.

  8. Geometric saliency to characterize radar exploitation performance

    Science.gov (United States)

    Nolan, Adam; Keserich, Brad; Lingg, Andrew; Goley, Steve

    2014-06-01

    Based on the fundamental scattering mechanisms of facetized computer-aided design (CAD) models, we are able to define expected contributions (EC) to the radar signature. The net result of this analysis is the prediction of the salient aspects and contributing vehicle morphology based on the aspect. Although this approach does not provide the fidelity of an asymptotic electromagnetic (EM) simulation, it does provide very fast estimates of the unique scattering that can be consumed by a signature exploitation algorithm. The speed of this approach is particularly relevant when considering the high dimensionality of target configuration variability due to articulating parts which are computationally burdensome to predict. The key scattering phenomena considered in this work are the specular response from a single bounce interaction with surfaces and dihedral response formed between the ground plane and vehicle. Results of this analysis are demonstrated for a set of civilian target models.

  9. Automated UAV-based video exploitation using service oriented architecture framework

    Science.gov (United States)

    Se, Stephen; Nadeau, Christian; Wood, Scott

    2011-05-01

    Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for troop protection, situational awareness, mission planning, damage assessment, and others. Unmanned Aerial Vehicles (UAVs) gather huge amounts of video data but it is extremely labour-intensive for operators to analyze hours and hours of received data. At MDA, we have developed a suite of tools that can process the UAV video data automatically, including mosaicking, change detection and 3D reconstruction, which have been integrated within a standard GIS framework. In addition, the mosaicking and 3D reconstruction tools have also been integrated in a Service Oriented Architecture (SOA) framework. The Visualization and Exploitation Workstation (VIEW) integrates 2D and 3D visualization, processing, and analysis capabilities developed for UAV video exploitation. Visualization capabilities are supported through a thick-client Graphical User Interface (GUI), which allows visualization of 2D imagery, video, and 3D models. The GUI interacts with the VIEW server, which provides video mosaicking and 3D reconstruction exploitation services through the SOA framework. The SOA framework allows multiple users to perform video exploitation by running a GUI client on the operator's computer and invoking the video exploitation functionalities residing on the server. This allows the exploitation services to be upgraded easily and allows the intensive video processing to run on powerful workstations. MDA provides UAV services to the Canadian and Australian forces in Afghanistan with the Heron, a Medium Altitude Long Endurance (MALE) UAV system. On-going flight operations service provides important intelligence, surveillance, and reconnaissance information to commanders and front-line soldiers.

  10. Detecting Activation in fMRI Data: An Approach Based on Sparse Representation of BOLD Signal

    Directory of Open Access Journals (Sweden)

    Blanca Guillen

    2018-01-01

    Full Text Available This paper proposes a simple yet effective approach for detecting activated voxels in fMRI data by exploiting the inherent sparsity property of the BOLD signal in the temporal and spatial domains. In the time domain, the approach combines the General Linear Model (GLM) with a Least Absolute Deviation (LAD)-based regression method regularized by the ℓ0 pseudonorm to promote sparsity in the parameter vector of the model. In the spatial domain, detection of activated regions is based on thresholding the spatial map of estimated parameters associated with a particular stimulus. The threshold is calculated by exploiting the sparseness of the BOLD signal in the spatial domain assuming a Laplacian distribution model. The proposed approach is validated using synthetic and real fMRI data. For synthetic data, results show that the proposed approach is able to detect most activated voxels without any false activation. For real data, the method is evaluated through comparison with the SPM software. Results indicate that this approach can effectively find activated regions that are similar to those found by SPM, but using a much simpler approach. This study may lead to the development of robust spatial approaches that further simplify the complexity of classical schemes.
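
    The spatial thresholding step can be sketched under the stated Laplacian assumption. This is an illustrative reading, not the paper's exact estimator: the scale is fit by the Laplacian maximum-likelihood estimate (mean absolute deviation about the median), the threshold is set from the Laplacian tail probability, and the synthetic "voxel" values are made up.

```python
import math
import random

random.seed(0)

def laplace_threshold(values, alpha=0.01):
    """Fit a Laplacian to the map and return (location, threshold) such that
    the null tail probability P(|X - median| > tau) equals alpha."""
    values = sorted(values)
    n = len(values)
    median = values[n // 2]
    b = sum(abs(v - median) for v in values) / n     # MLE of the Laplacian scale
    tau = -b * math.log(alpha)                       # P(|X - m| > tau) = exp(-tau / b)
    return median, tau

# Hypothetical parameter map: mostly null "voxels" plus a few activated ones.
null_voxels = [random.uniform(-1.0, 1.0) for _ in range(1000)]
activated = [8.0, 9.0, 7.5]
spatial_map = null_voxels + activated

median, tau = laplace_threshold(spatial_map)
detected = sorted(v for v in spatial_map if abs(v - median) > tau)
print(detected)   # -> [7.5, 8.0, 9.0]
```

    Choosing alpha fixes the expected false-activation rate under the Laplacian null, which is the role the distributional assumption plays in the paper.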

  11. For your first born child: an ethical defense of the exploitation argument against commercial surrogacy.

    Science.gov (United States)

    Osberg, Brendan

    2006-01-01

    In this essay I explore two arguments against commercial surrogacy, based on commodification and exploitation respectively. I adopt a consequentialist framework and argue that commodification arguments must be grounded in a resultant harm to either child or surrogate, and that a priori arguments which condemn the practice for puritanical reasons cannot form a basis for public law. Furthermore there is no overwhelming evidence of harm caused to either party involved in commercial surrogacy, and hence Canadian law (which forbids the practice) must (and can) be justified on exploitative grounds. Objections raised by Wilkinson based on an 'isolated case' approach are addressed when one takes into account the political implications of public policy. I argue that is precisely these implications that justify laws forbidding commercial surrogacy on the grounds of preventing systematic exploitation.

  12. CONFAC Decomposition Approach to Blind Identification of Underdetermined Mixtures Based on Generating Function Derivatives

    NARCIS (Netherlands)

    de Almeida, Andre L. F.; Luciani, Xavier; Stegeman, Alwin; Comon, Pierre

    This work proposes a new tensor-based approach to solve the problem of blind identification of underdetermined mixtures of complex-valued sources exploiting the cumulant generating function (CGF) of the observations. We show that a collection of second-order derivatives of the CGF of the

  13. Managing the Innovators for Exploration and Exploitation

    Directory of Open Access Journals (Sweden)

    C. Annique UN

    2007-09-01

    Full Text Available I analyze how to manage employees to achieve a balance between exploration and exploitation in large established firms. Previous studies suggest that, although firms need to undertake both exploration and exploitation simultaneously, this is difficult either because of the scarcity of resources or because of the incompatibility of these two processes. Proposed solutions have been ambidexterity, punctuated equilibrium or specialization. I suggest another method: managing employees. Specifically, I argue that using the so-called “innovative” system of human resource management practices, consisting of team-based incentive system, team-based job design, and job rotation, enables the firm to undertake exploration and exploitation simultaneously because it provides the psychological safety for people to explore new knowledge to make novel products and develops employees to have the perspective-taking capability that enables the integration of knowledge cross-functionally for efficiency. Using the so-called “traditional” system of human resource management practices, consisting of individual-based incentive system, individual-based job design, and no job rotation, has limited impact on either exploration or exploitation because it does not create the psychological safety for people to explore new knowledge and does not develop the perspective-taking capability needed for exploitation. Moreover, mixing practices from both systems is better than only using the traditional system in achieving exploration or exploitation, but less effective than only using the innovative system as the mix of practices can create inconsistent expectations on employees.

  14. MEMS Flow Sensors Based on Self-Heated aGe-Thermistors in a Wheatstone Bridge

    Directory of Open Access Journals (Sweden)

    Almir Talic

    2015-04-01

    Full Text Available A thermal flow transduction method combining the advantages of calorimetric and hot-film transduction principles is developed and analyzed by Finite Element Method (FEM simulations and confirmed experimentally. The analyses include electrothermal feedback effects of current driven NTC thermistors. Four thin-film germanium thermistors acting simultaneously as heat sources and as temperature sensors are embedded in a micromachined silicon-nitride membrane. These devices form a self-heated Wheatstone bridge that is unbalanced by convective cooling. The voltage across the bridge and the total dissipated power are exploited as output quantities. The used thin-film thermistors feature an extremely high temperature sensitivity. Combined with properly designed resistance values, a power demand in sub-1mW range enables efficient gas-flow transduction, as confirmed by measurements. Two sensor configurations with different arrangements of the membrane thermistors were examined experimentally. Moreover, we investigated the influence of different layouts on the rise time, the sensitivity, and the usable flow range by means of two-dimensional finite element simulations. The simulation results are in reasonable agreement with corresponding measurement data confirming the basic assumptions and modeling approach.

  15. Energy Based Clutter Filtering for Vector Flow Imaging

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Jensen, Jonas; Ewertsen, Caroline

    2017-01-01

    for obtaining vector flow measurements, since the spectra overlap at high beam-to-flow angles. In this work a distinct approach is proposed, where the energy of the velocity spectrum is used to differentiate between the two signals. The energy based method is applied by limiting the amplitude of the velocity...... spectrum function to a predetermined threshold. The effect of the clutter filtering is evaluated on a plane wave (PW) scan sequence in combination with transverse oscillation (TO) and directional beamforming (DB) for velocity estimation. The performance of the filter is assessed by comparison

  16. A network-flow based valve-switching aware binding algorithm for flow-based microfluidic biochips

    DEFF Research Database (Denmark)

    Tseng, Kai-Han; You, Sheng-Chi; Minhass, Wajid Hassan

    2013-01-01

    Designs of flow-based microfluidic biochips are receiving much attention recently because they replace the conventional biological automation paradigm and are able to integrate different biochemical analysis functions on a chip. However, as the design complexity increases, a flow-based microfluidic biochip needs more chip-integrated micro-valves, i.e., the basic unit of fluid-handling functionality, to manipulate the fluid flow for biochemical applications. Moreover, frequent switching of micro-valves results in decreased reliability. To minimize the valve-switching activities, we develop a network-flow based resource binding algorithm based on breadth-first search (BFS) and minimum cost maximum flow (MCMF) in architectural-level synthesis. The experimental results show that our methodology not only makes a significant reduction of valve-switching activities but also diminishes the application completion...
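
    The binding step in this record relies on minimum cost maximum flow (MCMF). Below is a generic successive-shortest-path MCMF sketch on a toy graph, not the paper's biochip-specific formulation; the node layout and edge costs (standing in for valve-switching penalties) are illustrative only.

```python
class MCMF:
    def __init__(self, n):
        self.n = n
        self.adj = [[] for _ in range(n)]   # edge: [to, cap, cost, rev_index]

    def add_edge(self, u, v, cap, cost):
        self.adj[u].append([v, cap, cost, len(self.adj[v])])
        self.adj[v].append([u, 0, -cost, len(self.adj[u]) - 1])

    def min_cost_max_flow(self, s, t):
        flow = cost = 0
        INF = float("inf")
        while True:
            dist, prev = [INF] * self.n, [None] * self.n
            dist[s] = 0
            for _ in range(self.n - 1):     # Bellman-Ford: residual costs can be negative
                updated = False
                for u in range(self.n):
                    if dist[u] == INF:
                        continue
                    for e in self.adj[u]:
                        if e[1] > 0 and dist[u] + e[2] < dist[e[0]]:
                            dist[e[0]] = dist[u] + e[2]
                            prev[e[0]] = (u, e)
                            updated = True
                if not updated:
                    break
            if dist[t] == INF:
                return flow, cost
            push, v = INF, t                # bottleneck capacity along the path
            while v != s:
                u, e = prev[v]
                push = min(push, e[1])
                v = u
            v = t                           # augment along the path
            while v != s:
                u, e = prev[v]
                e[1] -= push
                self.adj[e[0]][e[3]][1] += push
                v = u
            flow += push
            cost += push * dist[t]

g = MCMF(4)               # 0 = source, 3 = sink, 1-2 = intermediate resources
g.add_edge(0, 1, 2, 1)
g.add_edge(0, 2, 1, 2)
g.add_edge(1, 3, 1, 3)
g.add_edge(1, 2, 1, 1)
g.add_edge(2, 3, 2, 1)
result = g.min_cost_max_flow(0, 3)
print(result)             # -> (3, 10)
```

    In a binding formulation, operations on one side and devices on the other are connected through such a network, and the min-cost assignment at maximum flow gives the binding with the lowest total switching penalty.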

  17. Main principles of developing exploitation models of semiconductor devices

    Science.gov (United States)

    Gradoboev, A. V.; Simonova, A. V.

    2018-05-01

    The paper presents the primary tasks whose solution allows developing exploitation models of semiconductor devices that take into account the complex and combined influence of ionizing irradiation and operating factors. The structure of the exploitation model of a semiconductor device is presented; it is based on radiation and reliability models. Furthermore, it is shown that the exploitation model should take into account the complex and combined influence of various types of ionizing irradiation and operating factors. An algorithm for developing exploitation models of semiconductor devices is proposed. The possibility of creating radiation models of a Schottky barrier diode, a Schottky field-effect transistor and a Gunn diode is shown based on the available experimental data. A basic exploitation model of IR LEDs based on double AlGaAs heterostructures is presented. Practical application of the exploitation models will make it possible to release electronic products with guaranteed operational properties.

  18. Cluster-based control of a separating flow over a smoothly contoured ramp

    Science.gov (United States)

    Kaiser, Eurika; Noack, Bernd R.; Spohn, Andreas; Cattafesta, Louis N.; Morzyński, Marek

    2017-12-01

    The ability to manipulate and control fluid flows is of great importance in many scientific and engineering applications. The proposed closed-loop control framework addresses a key issue of model-based control: The actuation effect often results from slow dynamics of strongly nonlinear interactions which the flow reveals at timescales much longer than the prediction horizon of any model. Hence, we employ a probabilistic approach based on a cluster-based discretization of the Liouville equation for the evolution of the probability distribution. The proposed methodology frames high-dimensional, nonlinear dynamics into low-dimensional, probabilistic, linear dynamics which considerably simplifies the optimal control problem while preserving nonlinear actuation mechanisms. The data-driven approach builds upon a state space discretization using a clustering algorithm which groups kinematically similar flow states into a low number of clusters. The temporal evolution of the probability distribution on this set of clusters is then described by a control-dependent Markov model. This Markov model can be used as predictor for the ergodic probability distribution for a particular control law. This probability distribution approximates the long-term behavior of the original system on which basis the optimal control law is determined. We examine how the approach can be used to improve the open-loop actuation in a separating flow dominated by Kelvin-Helmholtz shedding. For this purpose, the feature space, in which the model is learned, and the admissible control inputs are tailored to strongly oscillatory flows.
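
    The pipeline in this record (cluster the flow snapshots, fit a Markov model on cluster labels, use its ergodic distribution) can be sketched minimally. The label sequence below is hypothetical, standing in for cluster assignments from, e.g., k-means; the sketch fits an (uncontrolled) transition matrix and extracts the ergodic probability distribution by power iteration.

```python
# Hypothetical sequence of cluster labels for successive flow snapshots.
labels = [0, 1, 2, 0, 1, 2, 0, 1, 1, 2, 0, 1, 2, 0]
k = 3

counts = [[0.0] * k for _ in range(k)]      # transition counts between clusters
for a, b in zip(labels, labels[1:]):
    counts[a][b] += 1

P = [[c / sum(row) for c in row] for row in counts]   # row-stochastic Markov matrix

pi = [1.0 / k] * k                          # uniform initial distribution
for _ in range(200):                        # power iteration: pi <- pi P
    pi = [sum(pi[i] * P[i][j] for i in range(k)) for j in range(k)]

print([round(p, 4) for p in pi])            # ergodic (stationary) distribution
```

    In the closed-loop framework, P additionally depends on the control input, and the control law is chosen to shape this ergodic distribution toward the desired flow states.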

  19. From medium heterogeneity to flow and transport: A time-domain random walk approach

    Science.gov (United States)

    Hakoun, V.; Comolli, A.; Dentz, M.

    2017-12-01

    The prediction of flow and transport processes in heterogeneous porous media is based on the qualitative and quantitative understanding of the interplay between 1) spatial variability of hydraulic conductivity, 2) groundwater flow and 3) solute transport. Using a stochastic modeling approach, we study this interplay through direct numerical simulations of Darcy flow and advective transport in heterogeneous media. First, we study flow in correlated hydraulic permeability fields and shed light on the relationship between the statistics of log-hydraulic conductivity, a medium attribute, and the flow statistics. Second, we determine relationships between Eulerian and Lagrangian velocity statistics, this means, between flow and transport attributes. We show how Lagrangian statistics and thus transport behaviors such as late particle arrival times are influenced by the medium heterogeneity on one hand and the initial particle velocities on the other. We find that equidistantly sampled Lagrangian velocities can be described by a Markov process that evolves on the characteristic heterogeneity length scale. We employ a stochastic relaxation model for the equidistantly sampled particle velocities, which is parametrized by the velocity correlation length. This description results in a time-domain random walk model for the particle motion, whose spatial transitions are characterized by the velocity correlation length and temporal transitions by the particle velocities. This approach relates the statistical medium and flow properties to large scale transport, and allows for conditioning on the initial particle velocities and thus to the medium properties in the injection region. The approach is tested against direct numerical simulations.
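
    A minimal, uncalibrated sketch of the time-domain random walk described above: spatial transitions of fixed length set by the velocity correlation scale, temporal transitions given by the particle velocity, and a Markovian relaxation of log-velocity (an AR(1)/Ornstein-Uhlenbeck-type choice made here for illustration; all parameter values are assumptions, not the paper's calibrated model).

```python
import math
import random

random.seed(1)

def tdrw_arrival_times(n_particles=2000, n_steps=50, ds=1.0,
                       rho=0.8, mu=0.0, sigma=1.0):
    """Arrival times at x = n_steps * ds; one fixed-length step per transition."""
    times = []
    for _ in range(n_particles):
        w = random.gauss(mu, sigma)        # stationary initial log-velocity
        t = 0.0
        for _ in range(n_steps):
            v = math.exp(w)                # lognormal velocity, always > 0
            t += ds / v                    # temporal transition: dt = ds / v
            # Markovian relaxation of log-velocity (AR(1), keeps w stationary)
            w = mu + rho * (w - mu) + math.sqrt(1.0 - rho**2) * random.gauss(0.0, sigma)
        times.append(t)
    return times

times = tdrw_arrival_times()
mean_t = sum(times) / len(times)
print(len(times), round(mean_t, 1))
```

    Particles that start (or get stuck) at low velocities accumulate large step times, producing the late-arrival tails discussed in the record; conditioning on the initial velocity amounts to conditioning the draw of the first `w`.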

  20. Metagenomic approaches to exploit the biotechnological potential of the microbial consortia of marine sponges.

    Science.gov (United States)

    Kennedy, Jonathan; Marchesi, Julian R; Dobson, Alan D W

    2007-05-01

    Natural products isolated from sponges are an important source of new biologically active compounds. However, the development of these compounds into drugs has been held back by the difficulties in achieving a sustainable supply of these often-complex molecules for pre-clinical and clinical development. Increasing evidence implicates microbial symbionts as the source of many of these biologically active compounds, but the vast majority of the sponge microbial community remain uncultured. Metagenomics offers a biotechnological solution to this supply problem. Metagenomes of sponge microbial communities have been shown to contain genes and gene clusters typical for the biosynthesis of biologically active natural products. Heterologous expression approaches have also led to the isolation of secondary metabolism gene clusters from uncultured microbial symbionts of marine invertebrates and from soil metagenomic libraries. Combining a metagenomic approach with heterologous expression holds much promise for the sustainable exploitation of the chemical diversity present in the sponge microbial community.

  1. A simplified approach for the computation of steady two-phase flow in inverted siphons.

    Science.gov (United States)

    Diogo, A Freire; Oliveira, Maria C

    2016-01-15

    Hydraulic, sanitary, and sulfide control conditions of inverted siphons, particularly in large wastewater systems, can be substantially improved by continuous air injection in the base of the inclined rising branch. This paper presents a simplified approach that was developed for the two-phase flow of the rising branch using the energy equation for a steady pipe flow, based on the average fluid fraction, observed slippage between phases, and isothermal assumption. As in a conventional siphon design, open channel steady uniform flow is assumed in inlet and outlet chambers, corresponding to the wastewater hydraulic characteristics in the upstream and downstream sewers, and the descending branch operates in steady uniform single-phase pipe flow. The proposed approach is tested and compared with data obtained in an experimental siphon setup with two plastic barrels of different diameters operating separately as in a single-barrel siphon. Although the formulations developed are very simple, the results show a good adjustment for the set of the parameters used and conditions tested and are promising mainly for sanitary siphons with relatively moderate heights of the ascending branch. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Reversible logic gates based on enzyme-biocatalyzed reactions and realized in flow cells: a modular approach.

    Science.gov (United States)

    Fratto, Brian E; Katz, Evgeny

    2015-05-18

    Reversible logic gates, such as the double Feynman gate, Toffoli gate and Peres gate, with 3-input/3-output channels are realized using reactions biocatalyzed with enzymes and performed in flow systems. The flow devices are constructed using a modular approach, where each flow cell is modified with one enzyme that biocatalyzes one chemical reaction. The multi-step processes mimicking the reversible logic gates are organized by combining the biocatalytic cells in different networks. This work emphasizes logical but not physical reversibility of the constructed systems. Their advantages and disadvantages are discussed and potential use in biosensing systems, rather than in computing devices, is suggested. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
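
    The three gates named in this record are bijections on {0,1}^3, which is what logical reversibility means: the inputs can always be recovered from the outputs. A quick truth-table check using the standard gate definitions (independent of the enzymatic flow-cell realization):

```python
from itertools import product

def double_feynman(a, b, c):
    return a, a ^ b, a ^ c          # P = A, Q = A xor B, R = A xor C

def toffoli(a, b, c):
    return a, b, c ^ (a & b)        # controlled-controlled-NOT

def peres(a, b, c):
    return a, a ^ b, c ^ (a & b)    # Toffoli followed by a Feynman (CNOT) gate

for gate in (double_feynman, toffoli, peres):
    outputs = {gate(*bits) for bits in product((0, 1), repeat=3)}
    print(gate.__name__, len(outputs) == 8)   # 8 distinct outputs -> bijective
```

    Each line prints `True`: every 3-bit input maps to a distinct 3-bit output, so no information is lost, in contrast to irreversible gates such as AND.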

  3. An optical flow algorithm based on gradient constancy assumption for PIV image processing

    International Nuclear Information System (INIS)

    Zhong, Qianglong; Yang, Hua; Yin, Zhouping

    2017-01-01

    Particle image velocimetry (PIV) has matured as a flow measurement technique. It enables the description of the instantaneous velocity field of the flow by analyzing the particle motion obtained from digitally recorded images. The correlation-based PIV evaluation technique is widely used because of its good accuracy and robustness. Although very successful, the correlation PIV technique has some weaknesses that can be avoided by optical-flow-based PIV algorithms. At present, most of the optical flow methods applied to PIV are based on the brightness constancy assumption. However, some factors of flow imaging technology and the nature of the fluids make the brightness constancy assumption less appropriate in real PIV cases. In this paper, an implementation of a 2D optical flow algorithm (GCOF) based on the gradient constancy assumption is introduced. The proposed GCOF assumes that the edges of the illuminated PIV particles are constant during motion. It comprises two terms: a combined local-global gradient data term and a first-order divergence and vorticity smoothness term. The approach can provide accurate dense motion fields. It is tested on synthetic images and on two experimental flows. The comparison of GCOF with other optical flow algorithms indicates that the proposed method is more accurate, especially under illumination variations. The comparison of GCOF with the correlation PIV technique shows that GCOF has advantages in preserving small divergence and vorticity structures of the motion field and produces fewer outliers. As a consequence, GCOF acquires a more accurate and better topological description of the turbulent flow. (paper)
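
    A toy 1D illustration of the motivation for gradient constancy (not the GCOF algorithm itself, which uses a combined local-global variational formulation): under a global illumination offset the brightness-constancy residual never vanishes, even at the correct displacement, while the gradient-constancy residual does. The signals and the brute-force shift search below are made up for the example.

```python
def grad(sig):
    """Discrete first difference, standing in for the image gradient."""
    return [sig[i + 1] - sig[i] for i in range(len(sig) - 1)]

def best_shift(f1, f2, max_shift, data):
    """Brute-force integer shift minimizing the mean squared residual of `data`."""
    best, best_err = None, float("inf")
    a, b = data(f1), data(f2)
    for d in range(-max_shift, max_shift + 1):
        err, n = 0.0, 0
        for i in range(len(a)):
            j = i + d
            if 0 <= j < len(b):
                err += (b[j] - a[i]) ** 2
                n += 1
        err /= n
        if err < best_err:
            best, best_err = d, err
    return best, best_err

f1 = [0, 0, 0, 5, 9, 5, 0, 0, 0, 0]        # a "particle" intensity blob
f2 = [3, 3, 3, 3, 3, 8, 12, 8, 3, 3]       # same blob, shifted by +2, 3 counts brighter

bc = best_shift(f1, f2, 3, data=lambda s: list(s))   # brightness constancy
gc = best_shift(f1, f2, 3, data=grad)                # gradient constancy
print(bc, gc)   # -> (2, 9.0) (2, 0.0)
```

    Both residuals locate the true shift of +2 here, but only the gradient-constancy residual vanishes at it (0.0 versus 9.0), which is why the gradient term stays reliable when the illumination varies between frames.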

  4. Learning Based Approach for Optimal Clustering of Distributed Program's Call Flow Graph

    Science.gov (United States)

    Abofathi, Yousef; Zarei, Bager; Parsa, Saeed

    Optimal clustering of a call flow graph to reach maximum concurrency in the execution of distributable components is an NP-complete problem. Learning automata (LAs) are search tools used for solving many NP-complete problems. In this paper a learning-based algorithm is proposed for optimal clustering of the call flow graph and appropriate distribution of programs at the network level. The algorithm uses the learning feature of LAs to search the state space. It is shown that the speed of reaching a solution increases remarkably when an LA is used in the search process, and that it also prevents the algorithm from being trapped in local minima. Experimental results show the superiority of the proposed algorithm over others.

  5. Application of a novel metabolomic approach based on atmospheric pressure photoionization mass spectrometry using flow injection analysis for the study of Alzheimer's disease.

    Science.gov (United States)

    González-Domínguez, Raúl; García-Barrera, Tamara; Gómez-Ariza, José Luis

    2015-01-01

    The use of atmospheric pressure photoionization is not widespread in metabolomics, despite its considerable potential for the simultaneous analysis of compounds with diverse polarities. This work describes the development of a novel analytical approach based on flow injection analysis and atmospheric pressure photoionization mass spectrometry for the rapid metabolic screening of serum samples. Several experimental parameters were optimized, such as the type of dopant, the flow injection solvent, and their flow rates, given that a careful selection of these variables is mandatory for a comprehensive analysis of metabolites. Toluene and methanol were the most suitable dopant and flow injection solvent, respectively. Moreover, analysis in negative mode required higher solvent and dopant flow rates (100 µl min⁻¹ and 40 µl min⁻¹, respectively) than positive mode (50 µl min⁻¹ and 20 µl min⁻¹). The optimized approach was then used to elucidate metabolic alterations associated with Alzheimer's disease. The results confirm increases in diacylglycerols, ceramides, ceramide-1-phosphate and free fatty acids, indicating membrane destabilization processes, and reductions in fatty acid amides and several neurotransmitters, related to impairments in neuronal transmission, among others. It can therefore be concluded that this metabolomic tool has great potential for the analysis of biological samples, given its high-throughput screening capability, fast analysis and comprehensive metabolite coverage. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Consensual exploitation : the moral wrong in exploitation and legal restrictions on consensual exploitative transactions

    OpenAIRE

    van der Neut, Wendy

    2014-01-01

    This thesis is about so-called consensual exploitative transactions: transactions to which all parties agree voluntarily, and which are beneficial for all parties, but which are still widely considered exploitative, and for that reason legally restricted in many countries. The thesis asks two main questions: 1. What is wrong with consensual exploitation? 2. What implications does the answer to this question have for the legal restriction of consensual transactions ...

  7. Mode decomposition methods for flows in high-contrast porous media. A global approach

    KAUST Repository

    Ghommem, Mehdi; Calo, Victor M.; Efendiev, Yalchin R.

    2014-01-01

    We apply dynamic mode decomposition (DMD) and proper orthogonal decomposition (POD) methods to flows in highly heterogeneous porous media to extract the dominant coherent structures and derive reduced-order models via Galerkin projection. Permeability fields with high contrast are considered to investigate the capability of these techniques to capture the main flow features and forecast the flow evolution within a certain accuracy. A DMD-based approach shows a better predictive capability due to its ability to accurately extract the information relevant to long-time dynamics, in particular the slowly decaying eigenmodes corresponding to the largest eigenvalues. Our study enables a better understanding of the strengths and weaknesses of these techniques as applied to flows in high-contrast porous media. Furthermore, we discuss the robustness of DMD- and POD-based reduced-order models with respect to variations in initial conditions, permeability fields, and forcing terms. © 2013 Elsevier Inc.
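
    The standard exact-DMD algorithm that this kind of analysis builds on can be sketched in a few lines of NumPy. The snapshot data here are synthetic traveling waves (one persistent, one decaying), not a porous-media flow; the eigenvalue magnitudes then distinguish persistent modes (|λ| ≈ 1) from decaying ones.

```python
import numpy as np

def dmd(X, r):
    # Exact dynamic mode decomposition of snapshot matrix X (space x time).
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]          # rank-r truncation
    Atilde = U.conj().T @ X2 @ Vh.conj().T / s     # projected one-step operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = (X2 @ Vh.conj().T / s) @ W             # exact DMD modes
    return eigvals, modes

# Synthetic snapshots: a persistent traveling wave plus a decaying one.
x = np.linspace(0.0, 1.0, 50)[:, None]
t = np.arange(40)
X = np.sin(2*np.pi*x + 0.3*t) + 0.5 * 0.97**t * np.sin(4*np.pi*x + 0.7*t)
eigvals, _ = dmd(X, r=4)
print(np.sort(np.abs(eigvals)))  # two modes near 0.97 (decaying), two near 1.0
```

    The slowly decaying eigenmodes mentioned in the abstract correspond exactly to the eigenvalues of largest magnitude returned here.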

  8. Robust motion control of oscillatory-base manipulators h∞-control and sliding-mode-control-based approaches

    CERN Document Server

    Toda, Masayoshi

    2016-01-01

    This book provides readers with alternative robust approaches to control design for an important class of systems characteristically associated with ocean-going vessels and structures. These systems, which include crane vessels, on-board cranes, radar gimbals, and a conductivity-temperature-depth winch, are modelled as manipulators with oscillating bases. One design approach is based on the H-infinity control framework, exploiting an effective combination of PD control, an extended matrix polytope and a robust stability analysis method with a state-dependent coefficient form. The other is based on sliding-mode control using some novel nonlinear sliding surfaces. The models demonstrate how successful motion control can be achieved by suppressing base oscillations, even in the presence of uncertainties. This is important not only for the ocean engineering systems in which the problems addressed here originate, but more generally as a benchmark platform for robust motion control with disturbance rejection. Researche...

  9. A decoupled power flow algorithm using particle swarm optimization technique

    International Nuclear Information System (INIS)

    Acharjee, P.; Goswami, S.K.

    2009-01-01

    A robust, nondivergent power flow method has been developed using the particle swarm optimization (PSO) technique. The decoupling properties between power system quantities have been exploited in developing the power flow algorithm. The speed of the algorithm has been improved using a simple perturbation technique. The basic power flow algorithm and the improvement scheme have been designed to retain the simplicity of the evolutionary approach. The power flow method is rugged, can determine critical loading conditions, and can also handle flexible alternating current transmission system (FACTS) devices efficiently. Test results on standard test systems show that the proposed method finds the solution in cases where standard power flow methods fail.
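
    The evolutionary core of such a method can be illustrated with a bare-bones PSO solving a hypothetical two-bus power flow: particles search over the load-bus voltage magnitude and angle so as to drive the complex power mismatch to zero. The network data are invented, and the paper's decoupling and perturbation enhancements are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
X_line = 0.1                      # p.u. line reactance (toy 2-bus system)
S_load = 0.5 + 0.2j               # p.u. complex power drawn at bus 2

def mismatch(v, th):
    # Squared power-balance residual at the load bus for candidate (v, theta).
    V2 = v * np.exp(1j * th)
    I = (1.0 - V2) / (1j * X_line)            # current from slack bus (V1 = 1)
    return abs(V2 * np.conj(I) - S_load) ** 2

# Minimal particle swarm over (v, theta): 30 particles, 200 iterations.
n, w, c1, c2 = 30, 0.7, 1.5, 1.5
pos = rng.uniform([0.8, -0.5], [1.1, 0.5], size=(n, 2))
vel = np.zeros((n, 2))
pbest = pos.copy()
pbest_f = np.array([mismatch(*p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(200):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    f = np.array([mismatch(*p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

print(gbest, mismatch(*gbest))  # residual near zero at the solved point
```

    Because the swarm only evaluates the mismatch function, the same loop tolerates non-smooth additions such as FACTS device limits, which is one motivation for evolutionary power flow methods.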

  10. Ice Flows: A Game-based Learning approach to Science Communication

    Science.gov (United States)

    Le Brocq, Anne

    2017-04-01

    Game-based learning allows people to become immersed in an environment and learn how the system functions and responds to change through playing a game. Science and gaming share a similar characteristic: both involve learning and understanding the rules of the environment you are in, in order to achieve your objective. I will share experiences of developing and using the educational game "Ice Flows" for science communication. The game tasks the player with getting a penguin to its destination by controlling the size of the ice sheet via ocean temperature and snowfall. The game thus aims to educate the user about the environmental controls on the behaviour of the ice sheet whilst they enjoy playing a game with penguins. The game was funded by a NERC Large Grant entitled "Ice shelves in a warming world: Filchner Ice Shelf system, Antarctica", and therefore uses data from the Weddell Sea sector of the West Antarctic Ice Sheet to generate unique levels. The game will be easily expandable to other regions of Antarctica and beyond, with the ultimate aim of giving the user a full understanding of the different ice flow regimes across the planet.

  11. A Variational Approach to Video Registration with Subspace Constraints.

    Science.gov (United States)

    Garg, Ravi; Roussos, Anastasios; Agapito, Lourdes

    2013-01-01

    This paper addresses the problem of non-rigid video registration, or the computation of optical flow from a reference frame to each of the subsequent images in a sequence, when the camera views deformable objects. We exploit the high correlation between 2D trajectories of different points on the same non-rigid surface by assuming that the displacement of any point throughout the sequence can be expressed in a compact way as a linear combination of a low-rank motion basis. This subspace constraint effectively acts as a trajectory regularization term, leading to temporally consistent optical flow. We formulate it as a robust soft constraint within a variational framework by penalizing flow fields that lie outside the low-rank manifold. The resulting energy functional can be decoupled into the optimization of the brightness constancy and spatial regularization terms, leading to an efficient optimization scheme. Additionally, we propose a novel optimization scheme for the case of vector-valued images, based on the dualization of the data term. This allows us to extend our approach to colour images, which significantly improves the registration results. Finally, we provide a new benchmark dataset, based on motion capture data of a flag waving in the wind, with dense ground-truth optical flow for the evaluation of multi-frame optical flow algorithms for non-rigid surfaces. Our experiments show that our proposed approach outperforms state-of-the-art optical flow and dense non-rigid registration algorithms.
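
    The subspace constraint at the heart of the method can be illustrated with plain linear algebra: if 2D point trajectories are stacked into a matrix, a low-rank motion model means the matrix is approximately low-rank, and projecting measured tracks onto the dominant SVD subspace acts as a trajectory regularizer. All sizes and the rank below are illustrative, and the variational soft-constraint formulation of the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy non-rigid scene: P points tracked over F frames. True trajectories
# lie in a rank-r basis (a low-rank motion model) plus tracking noise.
F, P, r = 30, 40, 3
basis = rng.standard_normal((2 * F, r))          # stacked x/y trajectory basis
coeff = rng.standard_normal((r, P))
W = basis @ coeff + 0.05 * rng.standard_normal((2 * F, P))  # noisy tracks

# Subspace constraint as a hard projection: keep only the dominant
# r-dimensional subspace of the measurement matrix.
U, s, Vh = np.linalg.svd(W, full_matrices=False)
W_reg = U[:, :r] @ np.diag(s[:r]) @ Vh[:r, :]

residual = np.linalg.norm(W - W_reg) / np.linalg.norm(W)
print(residual)  # small: mostly the tracking noise is removed
```

    The paper's soft constraint penalizes distance from this subspace instead of projecting onto it exactly, which keeps the data term in play.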

  12. Low-Level Exploitation Mitigation by Diverse Microservices

    OpenAIRE

    Otterstad , Christian; Yarygina , Tetiana

    2017-01-01

    Part 2: Microservices and Containers; International audience; This paper discusses a combination of isolatable microservices and software diversity as a mitigation technique against low-level exploitation; the effectiveness and benefits of such an architecture are substantiated. We argue that the core security benefit of microservices with diversity is increased control flow isolation. Additionally, a new microservices mitigation technique leveraging a security monitor service is introduced t...

  13. Novel approach to the exploitation of the tidal energy. Volume 1: Summary and discussion

    Science.gov (United States)

    Gorlov, A. M.

    1981-12-01

    The hydropneumatic concept for harnessing low-head tidal hydropower is discussed. The energy of water flow is converted into the energy of an air jet by a specialized air chamber placed on the ocean floor across a flowing watercourse. Water passes through the chamber, where it acts as a natural piston compressing air in the upper part of the closure. The compressed air is used as the working medium to drive air turbines. The kinetic energy of the air jet provided by the air chamber is sufficient for stable operation of industrial air turbines. It is possible to use light plastic barriers instead of conventional rigid dams (the water sail concept). It is confirmed that the concept can result in a less expensive and more effective tidal power plant than the conventional hydroturbine approach.

  14. Approaches to the simulation of unconfined flow and perched groundwater flow in MODFLOW

    Science.gov (United States)

    Bedekar, Vivek; Niswonger, Richard G.; Kipp, Kenneth; Panday, Sorab; Tonkin, Matthew

    2012-01-01

    Various approaches have been proposed to manage the nonlinearities associated with the unconfined flow equation and to simulate perched groundwater conditions using the MODFLOW family of codes. The approaches comprise a variety of numerical techniques to prevent dry cells from becoming inactive and to achieve a stable solution focused on formulations of the unconfined, partially-saturated, groundwater flow equation. Keeping dry cells active avoids a discontinuous head solution which in turn improves the effectiveness of parameter estimation software that relies on continuous derivatives. Most approaches implement an upstream weighting of intercell conductance and Newton-Raphson linearization to obtain robust convergence. In this study, several published approaches were implemented in a stepwise manner into MODFLOW for comparative analysis. First, a comparative analysis of the methods is presented using synthetic examples that create convergence issues or difficulty in handling perched conditions with the more common dry-cell simulation capabilities of MODFLOW. Next, a field-scale three-dimensional simulation is presented to examine the stability and performance of the discussed approaches in larger, practical, simulation settings.

  15. A flow-based methodology for the calculation of TSO to TSO compensations for cross-border flows

    International Nuclear Information System (INIS)

    Glavitsch, H.; Andersson, G.; Lekane, Th.; Marien, A.; Mees, E.; Naef, U.

    2004-01-01

    In the context of the development of the European internal electricity market, several methods for the tarification of cross-border flows have been proposed. This paper presents a flow-based method for the calculation of TSO-to-TSO compensations for cross-border flows. The basic principle of this approach is the allocation of the costs of cross-border flows to the TSOs who are responsible for these flows. The method is cost-reflective, non-transaction-based and compatible with domestic tariffs, and it can be applied when only limited data are available. Each internal transmission network is then modelled as an aggregated node, called a 'supernode', and the European network is synthesized by a graph of supernodes and arcs, each arc representing all cross-border lines between two adjacent countries. When detailed data are available, the proposed methodology is also applicable to all the nodes and lines of the transmission network. Costs associated with flows transiting through supernodes or network elements are forwarded through the network in a way that reflects how the flows make use of the network. The costs can be charged either towards loads and exports or towards generation and imports; a combination of the two charging directions can also be considered. (author)
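
    As a toy illustration of the supernode idea, the snippet below charges each cross-border flow a pro-rata share of the cost of every supernode it traverses. All flows, paths and costs are hypothetical, and the actual methodology allocates costs according to detailed flow patterns rather than simple path shares.

```python
# Hypothetical cross-border transit flows (MW) and the supernodes each
# flow traverses on the supernode graph.
transit_mw = {"A->B": 300.0, "A->C": 100.0, "D->B": 200.0}
paths = {"A->B": ["A", "E", "B"], "A->C": ["A", "C"], "D->B": ["D", "E", "B"]}
cost_per_node = {"A": 1000.0, "B": 800.0, "C": 400.0, "D": 600.0, "E": 500.0}

charges = {flow: 0.0 for flow in transit_mw}
for node, cost in cost_per_node.items():
    users = [f for f in paths if node in paths[f]]
    total = sum(transit_mw[f] for f in users)
    for f in users:
        charges[f] += cost * transit_mw[f] / total  # pro-rata by MW
print(charges)  # total charges equal total supernode cost
```

    The allocation is conservative by construction: every supernode's cost is fully recovered from the flows that use it, mirroring the cost-reflective principle of the paper.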

  16. Feasibility study of a latchup-based particle detector exploiting commercial CMOS technologies

    International Nuclear Information System (INIS)

    Gabrielli, A.; Matteucci, G.; Civera, P.; Demarchi, D.; Villani, G.; Weber, M.

    2009-01-01

    The stimulated ignition of latchup effects by external radiation has so far proved to be a hidden hazard. Here this effect is described as a novel approach to detecting particles by means of a solid-state device susceptible to latchup. In addition, the device can also be used as a circuit for reading sensor devices, leaving the sensing capability to external sensors. The paper first describes the state of the art of the project and its development over recent years; present and future studies are then proposed. An elementary cell composed of two transistors connected in a thyristor structure is shown. The study begins with traditional bipolar transistors, since the latchup effect originates in a parasitic circuit composed of such devices. Then an equivalent circuit built of MOS transistors is exploited, resulting in an even more promising and challenging configuration than that obtained with bipolar transistors. As MOS transistors are widely used in present-day microelectronic devices and sensors, the latchup-based cell is proposed as a novel structure for future applications in particle detection, amplification of sensor signals and radiation monitoring.

  17. Passenger flow analysis of Beijing urban rail transit network using fractal approach

    Science.gov (United States)

    Li, Xiaohong; Chen, Peiwen; Chen, Feng; Wang, Zijia

    2018-04-01

    To quantify the spatiotemporal distribution of passenger flow and the characteristics of an urban rail transit network, we introduce four radius fractal dimensions and two branch fractal dimensions by combining a fractal approach with a passenger flow assignment model. These fractal dimensions numerically describe the complexity of passenger flow in the urban rail transit network and its change characteristics. On this basis, we establish a fractal quantification method to measure the fractal characteristics of passenger flow in the rail transit network. Finally, we validate the reasonability of our proposed method using actual data from the Beijing subway network. It is shown that our proposed method can effectively measure the scale-free range of the urban rail transit network, network development and the fractal characteristics of time-varying passenger flow, which further provides a reference for network planning and analysis of passenger flow.
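
    A radius-type fractal dimension of the kind used here can be estimated by counting stations within growing distances of the city centre and fitting the slope of log N(r) versus log r. The coordinates below are synthetic stand-ins, not the Beijing network data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "station" coordinates scattered uniformly in a unit disc
# around the city centre (stand-in for real network data).
n = 2000
theta = rng.uniform(0, 2 * np.pi, n)
rad = rng.uniform(0, 1, n) ** 0.5        # sqrt gives uniform areal density
pts = np.c_[rad * np.cos(theta), rad * np.sin(theta)]

# Radius fractal dimension: slope of log N(r) vs log r, where N(r) is the
# number of stations within distance r of the centre.
radii = np.logspace(-1, 0, 10)
dist = np.linalg.norm(pts, axis=1)
counts = np.array([(dist <= r).sum() for r in radii])
D = np.polyfit(np.log(radii), np.log(counts), 1)[0]
print(round(D, 2))  # close to 2 for a uniformly filled plane
```

    A real network yields D between 1 and 2 over its scale-free range; weighting the counts by assigned passenger volume gives the flow-based variants of the dimension.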

  18. Entropy Viscosity and L1-based Approximations of PDEs: Exploiting Sparsity

    Science.gov (United States)

    2015-10-23

    AFRL-AFOSR-VA-TR-2015-0337. Entropy Viscosity and L1-based Approximations of PDEs: Exploiting Sparsity. Jean-Luc Guermond, Texas A&M University. Final report covering 01-07-2012 to 30-06-2015. Nonlinear conservation equations can be stabilized by using the so-called entropy viscosity method, and the project proposed to investigate this new technique.

  19. New approach for simulating groundwater flow in discrete fracture network

    Science.gov (United States)

    Fang, H.; Zhu, J.

    2017-12-01

    In this study, we develop a new approach to calculate the groundwater flowrate and hydraulic head distribution in a two-dimensional discrete fracture network (DFN) where laminar and turbulent flows co-exist in individual fractures. The cubic law is used to calculate the hydraulic head distribution and flow behaviour in fractures where flow is laminar, while Forchheimer's law is used to quantify turbulent flow behaviour. The Reynolds number is used to distinguish the flow characteristics in individual fractures. The combination of linear and non-linear equations is solved iteratively to determine the flowrates in all fractures and the hydraulic heads at all intersections. We examine the potential errors in both flowrate and hydraulic head that arise from assuming a single flow regime throughout the network. Applying the cubic law in all fractures regardless of actual flow conditions overestimates the flowrate when turbulent flow may exist, while applying Forchheimer's law indiscriminately underestimates the flowrate when laminar flow exists in the network. The contrast between the apertures of large and small fractures in the DFN has a significant impact on the potential errors of using only the cubic law or only Forchheimer's law. Both laws simulate similar hydraulic head distributions, as the main difference between the two approaches lies in the predicted flowrates. Fracture irregularity does not significantly affect the potential errors from using only the cubic law or only Forchheimer's law if the network configuration remains similar. The relative density of fractures does not significantly affect the relative performance of the cubic law and Forchheimer's law.
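
    The per-fracture switching logic described above can be sketched as follows: compute the cubic-law flowrate, check the Reynolds number, and fall back to solving Forchheimer's quadratic law when the flow is turbulent. The parameter values, the critical Reynolds number and the Forchheimer coefficient B are illustrative assumptions, not values from the study.

```python
import numpy as np

mu, rho, g = 1.0e-3, 1000.0, 9.81   # water viscosity (Pa*s), density, gravity

def fracture_flowrate(aperture, width, grad_h, B=1.0e5, re_crit=100.0):
    # Cubic-law resistance: grad_h = A * q for laminar flow.
    A = 12 * mu / (rho * g * width * aperture**3)
    q_cubic = grad_h / A
    re = rho * q_cubic / (mu * width)     # Reynolds number of the candidate flow
    if re <= re_crit:
        return q_cubic                    # laminar: cubic law applies
    # Turbulent: solve grad_h = A*q + B*q**2 (Forchheimer) for q > 0.
    return (-A + np.sqrt(A**2 + 4 * B * grad_h)) / (2 * B)

q_low = fracture_flowrate(2e-4, 1.0, 0.01)   # small aperture -> laminar
q_high = fracture_flowrate(5e-3, 1.0, 0.01)  # large aperture -> turbulent
print(q_low, q_high)  # m^3/s; Forchheimer gives less than the cubic-law value
```

    Applied to the large-aperture case, the cubic law alone would predict roughly four times the Forchheimer flowrate, which is exactly the overestimation the study quantifies.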

  20. Evaluation of hydrochemical changes due to intensive aquifer exploitation: case studies from Mexico.

    Science.gov (United States)

    Esteller, M V; Rodríguez, R; Cardona, A; Padilla-Sánchez, L

    2012-09-01

    The impact of intensive aquifer exploitation has been observed in numerous places around the world. Mexico is a representative example of this problem. In 2010, 101 out of the 653 aquifers recognized in the country, showed negative social, economic, and environmental effects related to intensive exploitation. The environmental effects include, among others, groundwater level decline, subsidence, attenuation, and drying up of springs, decreased river flow, and deterioration of water quality. This study aimed at determining the hydrochemical changes produced by intensive aquifer exploitation and highlighting water quality modifications, taking as example the Valle de Toluca, Salamanca, and San Luis Potosi aquifers in Mexico's highlands. There, elements such as fluoride, arsenic, iron, and manganese have been detected, resulting from the introduction of older groundwater with longer residence times and distinctive chemical composition (regional flows). High concentrations of other elements such as chloride, sulfate, nitrate, and vanadium, as well as pathogens, all related to anthropogenic pollution sources (wastewater infiltration, irrigation return flow, and atmospheric pollutants, among others) were also observed. Some of these elements (nitrate, fluoride, arsenic, iron, and manganese) have shown concentrations above Mexican and World Health Organization drinking water standards.

  1. Exploiting Maximum Entropy method and ASTER data for assessing debris flow and debris slide susceptibility for the Giampilieri catchment (north-eastern Sicily, Italy).

    KAUST Repository

    Lombardo, Luigi; Bachofer, F.; Cama, M.; Märker, M.; Rotigliano, E.

    2016-01-01

    This study aims at evaluating the performance of the Maximum Entropy method in assessing landslide susceptibility, exploiting topographic and multispectral remote sensing predictors. We selected the catchment of the Giampilieri stream, located in the north-eastern sector of Sicily (southern Italy), as the test site. On 1 October 2009, a rainstorm triggered hundreds of debris flow/avalanche phenomena in this area, causing extensive economic damage and loss of life. Within this area, a presence-only-based statistical method was applied to obtain susceptibility models capable of distinguishing future activation sites of debris flows and debris slides, which were the main source failure mechanisms for flow- or avalanche-type propagation. The set of predictors used in this experiment comprised primary and secondary topographic attributes, derived by processing a high-resolution digital elevation model, CORINE land cover data, and a set of vegetation and mineral indices obtained by processing multispectral ASTER images. All the selected data sources predate the disaster. A spatially random partition technique was adopted for validation, generating fifty replicates for each of the two considered movement typologies in order to assess the accuracy, precision and reliability of the models. The debris slide and debris flow susceptibility models both performed well, with the former being the better fitted. The evaluation of the probability estimates around the mean value for each mapped pixel shows an inverted relation, with the most robust models corresponding to the debris flows. With respect to the role of each predictor within the modelling phase, debris flows appeared to be primarily controlled by topographic attributes, whilst the debris slides were better explained by remotely sensed indices, particularly the occurrence of previous wildfires across the slope.
The overall excellent performances of the two models suggest promising perspectives for

  3. Processes, mechanisms, parameters, and modeling approaches for partially saturated flow in soil and rock media

    International Nuclear Information System (INIS)

    Wang, J.S.Y.; Narasimhan, T.N.

    1993-06-01

    This report discusses conceptual models and mathematical equations, analyzes distributions and correlations among hydrological parameters of soils and tuff, introduces new path integration approaches, and outlines scaling procedures to model potential-driven fluid flow in heterogeneous media. To properly model the transition from fracture-dominated flow under saturated conditions to matrix-dominated flow under partially saturated conditions, characteristic curves and permeability functions for fractures and matrix need to be improved and validated. Couplings from two-phase flow, heat transfer, solute transport, and rock deformation to liquid flow are also important. For stochastic modeling of alternating units of welded and nonwelded tuff or formations bounded by fault zones, correlations and constraints on average values of saturated permeability and air entry scaling factor between different units need to be imposed to avoid unlikely combinations of parameters and predictions. Large-scale simulations require efficient and verifiable numerical algorithms. New path integration approaches based on postulates of minimum work and mass conservation to solve flow geometry and potential distribution simultaneously are introduced. This verifiable integral approach, together with fractal scaling procedures to generate statistical realizations with parameter distribution, correlation, and scaling taken into account, can be used to quantify uncertainties and generate the cumulative distribution function for groundwater travel times

  4. The slice balance approach (SBA): a characteristic-based, multiple balance SN approach on unstructured polyhedral meshes

    International Nuclear Information System (INIS)

    Grove, R.E.

    2005-01-01

    The Slice Balance Approach (SBA) is an approach for solving geometrically complex, neutral-particle transport problems within a multi-group discrete ordinates (S_N) framework. The salient feature is an angle-dependent spatial decomposition. We approximate general surfaces with arbitrary polygonal faces and mesh the geometry with arbitrarily shaped polyhedral cells. A cell-local spatial decomposition divides cells into angle-dependent slices for each S_N direction. This subdivision follows from a characteristic-based view of the transport problem. Most balance-based characteristic methods use it implicitly; we use it explicitly and exploit its properties. Our mathematical approach is a multiple balance approach using exact spatial moments balance equations on cells and slices along with auxiliary relations on slices. We call this the slice balance approach; it is a characteristic-based multiple balance approach. The SBA is intentionally general and can extend differencing schemes to arbitrary 2-D and 3-D meshes. This work contributes to the development of a general-geometry deterministic transport capability to complement Monte Carlo capability for large, geometrically complex transport problems. The purpose of this paper is to describe the SBA. We describe the spatial decomposition and mathematical framework and highlight a few interesting properties. We sketch the derivation of two solution schemes, a step characteristic scheme and a diamond-difference-like scheme, to illustrate the approach, and we present interesting results for a 2-D problem. (author)

  5. Exploitation and exploration dynamics in recessionary times

    OpenAIRE

    Walrave, B.

    2012-01-01

    Firm performance largely depends on the ability to adapt to, and exploit, changes in the business environment. That is, firms should maintain ecological fitness by reconfiguring their resource base to cope with emerging threats and explore new opportunities, while at the same time exploiting existing resources. As such, firms possessing the ability to simultaneously perform exploitative and explorative initiatives are more resilient. In this respect, the performance implications of balancing ...

  6. Exploiting LSPIV to assess debris-flow velocities in the field

    Science.gov (United States)

    Theule, Joshua I.; Crema, Stefano; Marchi, Lorenzo; Cavalli, Marco; Comiti, Francesco

    2018-01-01

    The assessment of flow velocity has a central role in the quantitative analysis of debris flows, both for characterizing the phenomenology of these processes and for assessing the related hazards. Large-scale particle image velocimetry (LSPIV) can contribute to the assessment of the surface velocity of debris flows, provided that the specific features of these processes (e.g. fast stage variations and particles up to boulder size on the flow surface) are taken into account. Three debris-flow events, each consisting of several surges featuring different sediment concentrations, flow stages, and velocities, have been analysed at the inlet of a sediment trap on a stream in the eastern Italian Alps (Gadria Creek). Free software has been employed both for preliminary treatment (orthorectification and format conversion) of the video-recorded images and for the LSPIV application. Results show that LSPIV velocities are consistent with manual measurements on the orthorectified imagery and with the front velocity measured from hydrographs recorded in a channel approximately 70 m upstream of the sediment trap. Horizontal turbulence, computed as the standard deviation of the flow directions at a given cross-section for a given surge, proved to be correlated with surface velocity and with visually estimated sediment concentration. The study demonstrates the effectiveness of LSPIV in the assessment of the surface velocity of debris flows and permits the most crucial aspects to be identified in order to improve the accuracy of debris-flow velocity measurements.
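
    At its core, LSPIV estimates surface displacement by cross-correlating an interrogation window between consecutive frames; divided by the frame interval and scaled by the ground resolution, the correlation peak gives a surface velocity. The frames below are synthetic textures, and real debris-flow imagery would first be orthorectified as in the study.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two synthetic "video frames" of a flow surface: a random texture
# advected 4 px downstream between frames.
frame1 = rng.random((80, 80))
frame2 = np.roll(frame1, 4, axis=0)

def piv_displacement(f1, f2, y, x, half=8, search=6):
    # Cross-correlate an interrogation window of f1 against shifted
    # windows of f2; the correlation peak gives the displacement (dy, dx).
    w1 = f1[y-half:y+half, x-half:x+half]
    w1 = w1 - w1.mean()
    best, best_dydx = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            w2 = f2[y+dy-half:y+dy+half, x+dx-half:x+dx+half]
            score = np.sum(w1 * (w2 - w2.mean()))
            if score > best:
                best, best_dydx = score, (dy, dx)
    return best_dydx

print(piv_displacement(frame1, frame2, 40, 40))  # recovers (4, 0)
```

    Production LSPIV tools add sub-pixel peak fitting and reject low-correlation vectors, which matters for debris flows with fast stage changes and boulders on the surface.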

  7. Assessment of unsteady-RANS approach against steady-RANS approach for predicting twin impinging jets in a cross-flow

    Directory of Open Access Journals (Sweden)

    Zhiyin Yang

    2014-12-01

    A complex flow field is created when a vertical/short take-off and landing aircraft operates near the ground. One major concern for this kind of aircraft in ground effect is the possibility of ingestion of hot gases from the jet engine exhausts back into the engine, known as hot gas ingestion. This can increase the intake air temperature and reduce the oxygen content of the intake air, potentially leading to compressor stall and low combustion efficiency and causing a dramatic loss of lift. The flow field can be represented by the configuration of twin impinging jets in a cross-flow. Accurate prediction of this complicated flow field under the Reynolds-averaged Navier–Stokes (RANS) approach (current practice in industry) is a great challenge, as previous studies suggest that some important flow features cannot be captured by the steady-RANS (SRANS) approach even with a second-order Reynolds stress model (RSM). This paper presents a numerical study of this flow using the unsteady-RANS (URANS) approach with an RSM, and the results clearly indicate that the URANS approach is superior to the SRANS approach, although the predictions of the Reynolds stresses are still not sufficiently accurate.

  8. A two-stage flow-based intrusion detection model for next-generation networks.

    Science.gov (United States)

    Umer, Muhammad Fahad; Sher, Muhammad; Bi, Yaxin

    2018-01-01

    The next-generation network provides state-of-the-art access-independent services over converged mobile and fixed networks. Security in the converged network environment is a major challenge. Traditional packet and protocol-based intrusion detection techniques cannot be used in next-generation networks due to slow throughput, low accuracy and their inability to inspect encrypted payload. An alternative solution for protection of next-generation networks is to use network flow records for detection of malicious activity in the network traffic. The network flow records are independent of access networks and user applications. In this paper, we propose a two-stage flow-based intrusion detection system for next-generation networks. The first stage uses an enhanced unsupervised one-class support vector machine which separates malicious flows from normal network traffic. The second stage uses a self-organizing map which automatically groups malicious flows into different alert clusters. We validated the proposed approach on two flow-based datasets and obtained promising results.
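
    The two-stage flag-then-cluster idea can be sketched compactly. In the toy sketch below, all flow features and thresholds are invented for illustration; a simple distance-based one-class detector stands in for the paper's enhanced one-class SVM, and a minimal one-dimensional self-organizing map groups the flagged flows into alert clusters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic flow features: [packets, bytes, duration] (illustrative only).
normal = rng.normal([50, 500, 5], [5, 50, 1], size=(200, 3))
attacks = np.vstack([
    rng.normal([500, 50000, 1], [20, 1000, 0.2], size=(20, 3)),  # flood-like flows
    rng.normal([2, 40, 0.1], [0.5, 5, 0.02], size=(20, 3)),      # scan-like flows
])

# Stage 1: one-class detection. A distance-to-centroid threshold in
# normalized feature space stands in for the enhanced one-class SVM.
mu, sigma = normal.mean(0), normal.std(0)

def is_malicious(x, k=4.0):
    z = (x - mu) / sigma
    return np.linalg.norm(z, axis=-1) > k

all_flows = np.vstack([normal, attacks])
flagged = all_flows[is_malicious(all_flows)]

# Stage 2: a tiny 1-D self-organizing map groups flagged flows into alert clusters.
def train_som(data, n_units=2, epochs=50, lr=0.5):
    z = (data - mu) / sigma
    w = z[rng.choice(len(z), n_units, replace=False)].astype(float)
    for t in range(epochs):
        for x in z[rng.permutation(len(z))]:
            bmu = np.argmin(((w - x) ** 2).sum(1))       # best-matching unit
            w[bmu] += lr * (1 - t / epochs) * (x - w[bmu])
    return w

w = train_som(flagged)
labels = np.argmin((((flagged - mu) / sigma)[:, None, :] - w) ** 2).sum() if False \
    else np.argmin(((((flagged - mu) / sigma)[:, None, :] - w) ** 2).sum(2), axis=1)
```

In the actual system both stages would be trained and validated on labelled benchmark flow records; the sketch only illustrates the pipeline structure.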

  9. Flow-based approach for holistic factory engineering and design

    OpenAIRE

    Constantinescu, C.; Westkämper, E.

    2010-01-01

    The engineering of future factories requires digital tools along all life cycle phases, from investment planning to ramp-up. Manufacturers need scientifically based, integrated and highly dynamic data-management systems for participative and integrated factory planning. The paper presents a new approach for continuously integrated product design, factory and process planning, through a service-oriented architecture for the implementation of digital factory tools. A first prototype of the digital fa...

  10. Exploiting Multiple Detections for Person Re-Identification

    Directory of Open Access Journals (Sweden)

    Amran Bhuiyan

    2018-01-01

    Full Text Available Re-identification systems aim at recognizing the same individuals across multiple cameras, and one of the most relevant problems is that the appearance of the same individual varies across cameras due to illumination and viewpoint changes. This paper proposes the use of cumulative weighted brightness transfer functions (CWBTFs) to model these appearance variations. Different from recently proposed methods which only consider pairs of images to learn a brightness transfer function, we exploit a multiple-frame-based learning approach that leverages consecutive detections of each individual to transfer the appearance. We first present a CWBTF framework for the task of transforming appearance from one camera to another. We then present a re-identification framework where we segment the pedestrian images into meaningful parts and extract features from such parts, as well as from the whole body. Jointly, both of these frameworks contribute to modeling the appearance variations more robustly. We tested our approach on standard multi-camera surveillance datasets, showing consistent and significant improvements over existing methods on three different datasets without any additional cost. Our approach is general and can be applied to any appearance-based method.
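
    The core of a brightness transfer function can be illustrated with plain histogram matching. The sketch below uses synthetic brightness samples and omits the weighting and multi-frame accumulation of the actual CWBTF; it estimates a mapping f with f(b) = CDF_B^{-1}(CDF_A(b)) between two cameras.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative brightness samples of the same person seen by two cameras:
# camera B is assumed to render the scene darker and with lower contrast.
cam_a = np.clip(rng.normal(140, 30, 5000), 0, 255)
cam_b = np.clip(0.6 * cam_a + 20 + rng.normal(0, 2, cam_a.shape), 0, 255)

def brightness_transfer(src, dst, levels=256):
    """Estimate f(b) = CDF_dst^{-1}(CDF_src(b)) on a fixed brightness grid."""
    grid = np.arange(levels)
    cdf_src = np.searchsorted(np.sort(src), grid, side="right") / len(src)
    cdf_dst = np.searchsorted(np.sort(dst), grid, side="right") / len(dst)
    # Invert CDF_dst by interpolating brightness levels over its values.
    return np.interp(cdf_src, cdf_dst, grid)

f = brightness_transfer(cam_a, cam_b)

# Mapping camera-A brightness through f should reproduce camera-B statistics.
mapped = np.interp(cam_a, np.arange(256), f)
```

A per-image BTF learned from a single pair would be noisy; accumulating the CDFs over consecutive detections, as the paper does, stabilizes the estimate.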

  11. Numerical simulation of gas hydrate exploitation from subsea reservoirs in the Black Sea

    Science.gov (United States)

    Janicki, Georg; Schlüter, Stefan; Hennig, Torsten; Deerberg, Görge

    2017-04-01

    Natural gas (methane) is the most environmentally friendly source of fossil energy. When coal is replaced by natural gas in power production, carbon dioxide emissions are reduced by about 50%. The vast amount of methane assumed to be stored in gas hydrate deposits can help to overcome a shortage of fossil energy resources in the future. To increase their potential for energy applications, new technological approaches are being discussed and developed worldwide. Besides the technical challenges that have to be overcome, climate and safety issues have to be considered before commercial exploitation of such unconventional reservoirs. The potential of producing natural gas from subsea gas hydrate deposits by various means (e.g. depressurization and/or carbon dioxide injection) is numerically studied within the German research project »SUGAR - Submarine Gas Hydrate Reservoirs«. In order to simulate the exploitation of hydrate-bearing sediments in the subsea, an in-house simulation model, HyReS, implemented in the general-purpose software COMSOL Multiphysics is used. This tool turned out to be especially suited for the flexible implementation of non-standard correlations concerning heat transfer, fluid flow, hydrate kinetics, and other relevant model data. Partially based on the simulation results, the development of a technical concept and its evaluation are the subject of ongoing investigations, whereby geological and ecological criteria are to be considered. The results illustrate the processes and effects occurring during gas production from a subsea gas hydrate deposit by depressurization. The simulation results from a case study for a deposit located in the Black Sea reveal that the production of natural gas by simple depressurization is possible but only at quite low rates. It can be shown that the hydrate decomposition and thus the gas production strongly depend on the geophysical properties of the reservoir, the mass and heat transport within the reservoir, and

  12. The shear flow processing of controlled DNA tethering and stretching for organic molecular electronics.

    Science.gov (United States)

    Yu, Guihua; Kushwaha, Amit; Lee, Jungkyu K; Shaqfeh, Eric S G; Bao, Zhenan

    2011-01-25

    DNA has been recently explored as a powerful tool for developing molecular scaffolds for making reproducible and reliable metal contacts to single organic semiconducting molecules. A critical step in the process of exploiting DNA-organic molecule-DNA (DOD) array structures is the controlled tethering and stretching of DNA molecules. Here we report the development of reproducible surface chemistry for tethering DNA molecules at tunable density and demonstrate shear flow processing as a rationally controlled approach for stretching/aligning DNA molecules of various lengths. Through enzymatic cleavage of λ-phage DNA to yield a series of DNA chains of various lengths from 17.3 μm down to 4.2 μm, we have investigated the flow/extension behavior of these tethered DNA molecules under different flow strengths in the flow-gradient plane. We compared Brownian dynamics simulations of the flow dynamics of tethered λ-DNA in shear with experiment, and found that our flow-gradient plane measurements matched our bead-spring simulations well. The shear flow processing demonstrated in our studies represents a controllable approach for tethering and stretching DNA molecules of various lengths. Together with further metallization of DNA chains within DOD structures, this bottom-up approach can potentially enable efficient and reliable fabrication of large-scale nanoelectronic devices based on single organic molecules, therefore opening opportunities in both fundamental understanding of charge transport at the single molecular level and many exciting applications for ever-shrinking molecular circuits.

  13. Fusion of optical flow based motion pattern analysis and silhouette classification for person tracking and detection

    NARCIS (Netherlands)

    Tangelder, J.W.H.; Lebert, E.; Burghouts, G.J.; Zon, K. van; Den Uyl, M.J.

    2014-01-01

    This paper presents a novel approach to detect persons in video by combining optical flow based motion analysis and silhouette based recognition. A new fast optical flow computation method is described, and its application in a motion based analysis framework unifying human tracking and detection is

  14. Recent developments in automated determinations of trace level concentrations of elements and on-line fractionations schemes exploiting the micro-sequential injection - lab-on-valve approach

    DEFF Research Database (Denmark)

    Hansen, Elo Harald; Miró, Manuel; Long, Xiangbao

    2006-01-01

    The determination of trace level concentrations of elements, such as metal species, in complex matrices by atomic absorption or emission spectrometric methods often requires appropriate pretreatments comprising separation of the analyte from interfering constituents and analyte preconcentration...... are presented as based on the exploitation of micro-sequential injection (μSI-LOV) using hydrophobic as well as hydrophilic bead materials. The examples given comprise the presentation of a universal approach for SPE-assays, front-end speciation of Cr(III) and Cr(VI) in a fully automated and enclosed set...

  15. Towards a semantics-based approach in the development of geographic portals

    Science.gov (United States)

    Athanasis, Nikolaos; Kalabokidis, Kostas; Vaitis, Michail; Soulakellis, Nikolaos

    2009-02-01

    As the demand for geospatial data increases, the lack of efficient ways to find suitable information becomes critical. In this paper, a new methodology for knowledge discovery in geographic portals is presented. Based on the Semantic Web, our approach exploits the Resource Description Framework (RDF) in order to describe the geoportal's information with ontology-based metadata. When users traverse from page to page in the portal, they take advantage of the metadata infrastructure to navigate easily through data of interest. New metadata descriptions are published in the geoportal according to the RDF schemas.

  16. Exploitation and exploration dynamics in recessionary times

    NARCIS (Netherlands)

    Walrave, B.

    2012-01-01

    Firm performance largely depends on the ability to adapt to, and exploit, changes in the business environment. That is, firms should maintain ecological fitness by reconfiguring their resource base to cope with emerging threats and explore new opportunities, while at the same time exploiting

  17. On Roof Geometry for Urban Wind Energy Exploitation in High-Rise Buildings

    Directory of Open Access Journals (Sweden)

    Francisco Toja-Silva

    2015-06-01

    Full Text Available The European program HORIZON2020 aims to have 20% of electricity produced by renewable sources. The building sector represents 40% of the European Union energy consumption. Reducing energy consumption in buildings is therefore a priority for energy efficiency. The present investigation explores the most adequate roof shapes compatible with the placement of different types of small wind energy generators on high-rise buildings for urban wind energy exploitation. The wind flow around traditional state-of-the-art roof shapes is considered. In addition, the influence of the roof edge on the wind flow on high-rise buildings is analyzed. These geometries are investigated, both qualitatively and quantitatively, and the turbulence intensity threshold for horizontal axis wind turbines is considered. The most adequate shapes for wind energy exploitation are identified, studying vertical profiles of velocity, turbulent kinetic energy and turbulence intensity. Curved shapes are the most interesting building roof shapes from the wind energy exploitation point of view, leading to the highest speed-up and the lowest turbulence intensity.

  18. Statistical sampling approaches for soil monitoring

    NARCIS (Netherlands)

    Brus, D.J.

    2014-01-01

    This paper describes three statistical sampling approaches for regional soil monitoring, a design-based, a model-based and a hybrid approach. In the model-based approach a space-time model is exploited to predict global statistical parameters of interest such as the space-time mean. In the hybrid

  19. Tribocorrosion in pressurized high temperature water: a mass flow model based on the third body approach

    Energy Technology Data Exchange (ETDEWEB)

    Guadalupe Maldonado, S.

    2014-07-01

    Pressurized water reactors (PWR) used for power generation are operated at elevated temperatures (280-300 °C) and under high pressure (120-150 bar). In addition to these harsh environmental conditions, some components of the PWR assemblies are subject to mechanical loading (sliding, vibration and impacts) leading to undesirable and hardly controllable material degradation phenomena. In such situations wear is determined by the complex interplay (tribocorrosion) between mechanical, material and physical-chemical phenomena. Tribocorrosion under PWR conditions is at present little understood, and models need to be developed in order to predict component lifetime over several decades. The goal of this project, carried out in collaboration with the French company AREVA NP, is to develop a predictive model based on a mechanistic understanding of the tribocorrosion of specific PWR components (stainless steel control assemblies, stellite grippers). The approach taken here is to describe degradation in terms of electrochemical and mechanical material flows (third-body concept of tribology) from the metal into the friction film (i.e. the oxidized film forming during rubbing on the metal surface) and from the friction film into the environment, instead of simple mass-loss considerations. The project involves the establishment of mechanistic models describing the single flows, based on ad-hoc tribocorrosion measurements performed at low temperature. The overall behaviour at high temperature and pressure is investigated using a dedicated tribometer (Aurore) including electrochemical control of the contact during rubbing. Physical laws describing the individual flows according to defined mechanisms and as a function of defined physical parameters were identified based on the experimental results obtained and on literature data. The physical laws were converted into mass flow rates and solved as a differential equation system by considering the mass balance in compartments

  20. Software compensation in Particle Flow reconstruction

    CERN Document Server

    Lan Tran, Huong; Sefkow, Felix; Green, Steven; Marshall, John; Thomson, Mark; Simon, Frank

    2017-01-01

    The Particle Flow approach to calorimetry requires highly granular calorimeters and sophisticated software algorithms in order to reconstruct and identify individual particles in complex event topologies. The high spatial granularity, together with analog energy information, can be further exploited in software compensation. In this approach, the local energy density is used to discriminate electromagnetic and purely hadronic sub-showers within hadron showers in the detector to improve the energy resolution for single particles by correcting for the intrinsic non-compensation of the calorimeter system. This improvement in the single particle energy resolution also results in a better overall jet energy resolution by improving the energy measurement of identified neutral hadrons and improvements in the pattern recognition stage by a more accurate matching of calorimeter energies to tracker measurements. This paper describes the software compensation technique and its implementation in Particle Flow reconstruct...

  1. METHODOLOGICAL APPROACHES TO THE ANALYSIS OF EFFICIENCY OF CASH FLOW MANAGEMENT IN INVESTMENT ACTIVITY OF THE ENTERPRISES

    OpenAIRE

    I. Magdych

    2015-01-01

    The article explores methodological approaches to the analysis of cash flows in the investment activity of the enterprise; a system of net cash flow movements is presented, reflecting the impact of cash-management efficiency on the amount and sources of the enterprise's investment cash flows; an analytical model for determining the effectiveness of the enterprise's cash management is proposed, based on selected modeling principles, a comprehensive analysis of cash flows in investing activities and their o...

  2. Recent developments in automatic solid-phase extraction with renewable surfaces exploiting flow-based approaches

    DEFF Research Database (Denmark)

    Miró, Manuel; Hartwell, Supaporn Kradtap; Jakmunee, Jaroon

    2008-01-01

    ,on-line SPE assays performed in permanent mode lack sufficient reliability as a consequence of progressively tighter packing of the bead reactor, contamination of the solid surfaces and potential leakage of functional moieties. This article overviews the current state-of-the-art of an appealing tool...... chemical-derivatization reactions, and it pinpoints the most common instrumental detection techniques utilized. We present and discuss in detail relevant environmental and bioanalytical applications reported in the past few years....

  3. Flip-angle based ratiometric approach for pulsed CEST-MRI pH imaging

    Science.gov (United States)

    Arena, Francesca; Irrera, Pietro; Consolino, Lorena; Colombo Serra, Sonia; Zaiss, Moritz; Longo, Dario Livio

    2018-02-01

    Several molecules have been exploited for developing MRI pH sensors based on the chemical exchange saturation transfer (CEST) technique. A ratiometric approach, based on the saturation of two exchanging pools at the same saturation power, or on varying the saturation power levels for the same pool, is usually needed to rule out the concentration term from the pH measurement. However, all of these methods have been demonstrated using a continuous-wave saturation scheme, which limits their translation to clinical scanners. This study shows a new ratiometric CEST-MRI pH-mapping approach based on a pulsed CEST saturation scheme for a radiographic contrast agent (iodixanol) possessing a single chemical exchange site. The approach is based on the ratio of the CEST contrast effects at two different flip-angle combinations (180°/360° and 180°/720°), keeping the mean irradiation RF power (Bavg power) constant. The proposed ratiometric index is concentration-independent and showed good pH sensitivity and accuracy in the physiological range between 6.0 and 7.4.
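
    The concentration independence of a ratiometric index can be demonstrated with a toy exchange model. In the sketch below, the base-catalysed exchange-rate law and the saturation-efficiency term are assumptions chosen for illustration (not iodixanol data), and varying the saturation amplitude B1 stands in for the paper's flip-angle variation at constant average power.

```python
import numpy as np

GAMMA = 2 * np.pi * 42.577e6  # proton gyromagnetic ratio, rad s^-1 T^-1

def k_ex(pH, k0=1000.0):
    """Toy base-catalysed exchange rate (s^-1); illustrative assumption."""
    return k0 * 10 ** (pH - 7.0)

def cest_contrast(conc, pH, b1_uT):
    """Toy CEST contrast: proportional to agent concentration times a
    saturation-efficiency term depending on exchange rate and B1."""
    w1 = GAMMA * b1_uT * 1e-6          # saturation amplitude in rad/s
    k = k_ex(pH)
    return conc * k * w1 ** 2 / (w1 ** 2 + k ** 2)

def ratiometric_index(pH, conc, b1_low=1.0, b1_high=2.0):
    # Concentration appears as a common factor and cancels in the ratio.
    return cest_contrast(conc, pH, b1_high) / cest_contrast(conc, pH, b1_low)
```

Under this model the index is exactly concentration-independent and increases monotonically with pH over 6.0-7.4, which is the property the ratiometric approach relies on.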

  4. Exploiting CRISPR/Cas: Interference Mechanisms and Applications

    Directory of Open Access Journals (Sweden)

    André Plagens

    2013-07-01

    Full Text Available The discovery of biological concepts can often provide a framework for the development of novel molecular tools, which can help us to further understand and manipulate life. One recent example is the elucidation of the prokaryotic adaptive immune system, clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated (Cas), that protects bacteria and archaea against viruses or conjugative plasmids. The immunity is based on small RNA molecules that are incorporated into versatile multi-domain proteins or protein complexes and specifically target viral nucleic acids via base complementarity. CRISPR/Cas interference machines are utilized to develop novel genome editing tools for different organisms. Here, we will review the latest progress in the elucidation and application of prokaryotic CRISPR/Cas systems and discuss possible future approaches to exploit the potential of these interference machineries.

  5. Exploiting CRISPR/Cas: Interference Mechanisms and Applications

    Science.gov (United States)

    Richter, Hagen; Randau, Lennart; Plagens, André

    2013-01-01

    The discovery of biological concepts can often provide a framework for the development of novel molecular tools, which can help us to further understand and manipulate life. One recent example is the elucidation of the prokaryotic adaptive immune system, clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated (Cas) that protects bacteria and archaea against viruses or conjugative plasmids. The immunity is based on small RNA molecules that are incorporated into versatile multi-domain proteins or protein complexes and specifically target viral nucleic acids via base complementarity. CRISPR/Cas interference machines are utilized to develop novel genome editing tools for different organisms. Here, we will review the latest progress in the elucidation and application of prokaryotic CRISPR/Cas systems and discuss possible future approaches to exploit the potential of these interference machineries. PMID:23857052

  6. Graphene-based absorber exploiting guided mode resonances in one-dimensional gratings.

    Science.gov (United States)

    Grande, M; Vincenti, M A; Stomeo, T; Bianco, G V; de Ceglia, D; Aközbek, N; Petruzzelli, V; Bruno, G; De Vittorio, M; Scalora, M; D'Orazio, A

    2014-12-15

    A one-dimensional dielectric grating, based on a simple geometry, is proposed and investigated to enhance light absorption in a monolayer graphene exploiting guided mode resonances. Numerical findings reveal that the optimized configuration is able to absorb up to 60% of the impinging light at normal incidence for both TE and TM polarizations resulting in a theoretical enhancement factor of about 26 with respect to the monolayer graphene absorption (≈2.3%). Experimental results confirm this behavior showing CVD graphene absorbance peaks up to about 40% over narrow bands of a few nanometers. The simple and flexible design points to a way to realize innovative, scalable and easy-to-fabricate graphene-based optical absorbers.
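
    The quoted enhancement factor follows directly from the two absorption figures:

```python
monolayer_abs = 0.023  # ≈2.3% single-pass absorption of monolayer graphene
resonant_abs = 0.60    # simulated absorption on the guided-mode-resonance grating
enhancement = resonant_abs / monolayer_abs  # ≈26, as stated in the abstract
```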

  7. Radiation environmental impact assessment of copper exploitation

    International Nuclear Information System (INIS)

    Fan Guang; Wen Zhijian

    2010-01-01

    The radiation environmental impact of mineral exploitation on the surrounding environment has become a public concern. This paper presents the radiation environmental impact assessment of copper exploitation. Based on the project description and detailed investigations of surrounding environment, systematic radiation environmental impacts have been identified. The environmental impacts are assessed during both construction and operation phase. The environmental protection measures have also been proposed. The related conclusion and measures can play an active role in copper exploitation and environmental protection. (authors)

  8. Towards a dynamic assessment of raw materials criticality: linking agent-based demand--with material flow supply modelling approaches.

    Science.gov (United States)

    Knoeri, Christof; Wäger, Patrick A; Stamp, Anna; Althaus, Hans-Joerg; Weil, Marcel

    2013-09-01

    Emerging technologies such as information and communication, photovoltaic or battery technologies are expected to increase the demand for scarce metals significantly in the near future. The recently developed methods to evaluate the criticality of mineral raw materials typically provide a 'snapshot' of the criticality of a certain material at one point in time by using static indicators both for supply risk and for the impacts of supply restrictions. While allowing for insights into the mechanisms behind the criticality of raw materials, these methods cannot account for dynamic changes in products and/or activities over time. In this paper we propose a conceptual framework intended to overcome these limitations by including the dynamic interactions between different possible demand and supply configurations. The framework integrates an agent-based behaviour model, where demand emerges from individual agent decisions and interactions, into a dynamic material flow model representing the materials' stocks and flows. Within the framework, the environmental implications of substitution decisions are evaluated by applying life-cycle assessment methodology. The approach takes a first step towards a dynamic criticality assessment and will enhance the understanding of industrial substitution decisions and the environmental implications related to critical metals. We discuss the potential and limitations of such an approach in contrast to state-of-the-art methods, and how it might lead to criticality assessments tailored to the specific circumstances of single industrial sectors or individual companies. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. A blended pressure/density based method for the computation of incompressible and compressible flows

    International Nuclear Information System (INIS)

    Rossow, C.-C.

    2003-01-01

    An alternative method to low speed preconditioning for the computation of nearly incompressible flows with compressible methods is developed. For this approach the leading terms of the flux difference splitting (FDS) approximate Riemann solver are analyzed in the incompressible limit. In combination with the requirement of the velocity field to be divergence-free, an elliptic equation to solve for a pressure correction to enforce the divergence-free velocity field on the discrete level is derived. The pressure correction equation established is shown to be equivalent to classical methods for incompressible flows. In order to allow the computation of flows at all speeds, a blending technique for the transition from the incompressible, pressure based formulation to the compressible, density based formulation is established. It is found necessary to use preconditioning with this blending technique to account for a remaining 'compressible' contribution in the incompressible limit, and a suitable matrix directly applicable to conservative residuals is derived. Thus, a coherent framework is established to cover the discretization of both incompressible and compressible flows. Compared with standard preconditioning techniques, the blended pressure/density based approach showed improved robustness for high lift flows close to separation
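
    The elliptic pressure-correction step in the incompressible limit can be sketched with a spectral projection on a periodic domain. This is a generic projection method for illustration, not the paper's FDS-based blended formulation: the Poisson equation laplacian(p) = div(u) is solved, and subtracting grad(p) enforces a divergence-free velocity field.

```python
import numpy as np

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
k = np.fft.fftfreq(n, d=2 * np.pi / n) * 2 * np.pi
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX ** 2 + KY ** 2

def ddx(f, kk):
    """Spectral derivative along the axis associated with wavenumbers kk."""
    return np.real(np.fft.ifft2(1j * kk * np.fft.fft2(f)))

# A 2-D periodic velocity field mixing a solenoidal and a divergent part.
u = np.cos(X) * np.sin(Y) + 0.5 * np.sin(X)
v = -np.sin(X) * np.cos(Y) + 0.5 * np.sin(Y)

div = ddx(u, KX) + ddx(v, KY)              # non-zero before correction

# Elliptic pressure-correction equation: laplacian(p) = div(u).
rhs_hat = np.fft.fft2(div)
K2_safe = np.where(K2 > 0, K2, 1.0)        # avoid division by zero at k = 0
p_hat = np.where(K2 > 0, rhs_hat / (-K2_safe), 0.0)
p = np.real(np.fft.ifft2(p_hat))

# Projection step: subtracting grad(p) yields a divergence-free field.
u2 = u - ddx(p, KX)
v2 = v - ddx(p, KY)
div2 = ddx(u2, KX) + ddx(v2, KY)           # ~ machine precision after correction
```

On this periodic test field the projection removes the gradient part (0.5 sin X, 0.5 sin Y) exactly, leaving the solenoidal vortex field.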

  10. Formation of a Methodological Approach to Evaluating the State of Management of Enterprise Flow Processes

    Directory of Open Access Journals (Sweden)

    Dzobko Iryna P.

    2016-02-01

    Full Text Available The formation of a methodological approach to evaluating the state of management of enterprise flow processes is considered. Proceeding from the theoretical propositions on the organization of flow-process management developed in the literature, the hypothesis of the study is that quantitative and qualitative evaluations of management effectiveness are correlated and that an integral index can be formed on their basis. The article presents the stages of implementing the methodological approach, indicating its components, their characteristics and the research methods used. The composition of indicators for evaluating the effectiveness of flow-process management has been determined, and the indicators have been grouped according to the flow nature of enterprise processes. The grouping is justified by pairwise determination of canonical correlations between the selected groups (the high correlation coefficients obtained confirm the author's systematization of indicators). It is shown that the specificity of the approach requires its extension towards aggregation of the results and identification of the factors that influence the effectiveness of flow-process management. The article performs such aggregation using factor analysis, and presents the distribution of a set of objects into classes according to the results of a cluster analysis. To obtain an integral estimate of the effectiveness of flow-process management, a taxonomic index of a multidimensional object is built. A peculiarity of the formed methodological approach is the matrix correlation of integral indicators calculated on

  11. Combination of material flow analysis and substance flow analysis: a powerful approach for decision support in waste management.

    Science.gov (United States)

    Stanisavljevic, Nemanja; Brunner, Paul H

    2014-08-01

    The novelty of this paper is the demonstration of the effectiveness of combining material flow analysis (MFA) with substance flow analysis (SFA) for decision making in waste management. Both MFA and SFA are based on the mass balance principle. While MFA alone has been applied often for analysing material flows quantitatively and hence to determine the capacities of waste treatment processes, SFA is more demanding but instrumental in evaluating the performance of a waste management system regarding the goals "resource conservation" and "environmental protection". SFA focuses on the transformations of wastes during waste treatment: valuable as well as hazardous substances and their transformations are followed through the entire waste management system. A substance-based approach is required because the economic and environmental properties of the products of waste management - recycling goods, residues and emissions - are primarily determined by the content of specific precious or harmful substances. To support the case that MFA and SFA should be combined, a case study of waste management scenarios is presented. For three scenarios, total material flows are quantified by MFA, and the mass flows of six indicator substances (C, N, Cl, Cd, Pb, Hg) are determined by SFA. The combined results are compared to the status quo in view of fulfilling the goals of waste management. They clearly point out specific differences between the chosen scenarios, demonstrating potentials for improvement and the value of the combination of MFA/SFA for decision making in waste management. © The Author(s) 2014.
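
    The MFA/SFA combination can be sketched as a mass balance with substance-specific transfer coefficients for one treatment process. All quantities below (waste throughput, substance contents, transfer coefficients) are invented for illustration, not measured data.

```python
# MFA level: total goods flow through one process (incineration, assumed).
waste_input_t = 100_000.0                      # t/yr of municipal waste
# SFA level: indicator-substance contents, kg substance per kg waste (assumed).
substance_content = {"Cd": 10e-6, "Cl": 8e-3}

# Transfer coefficients: fraction of each substance routed to each output.
transfer = {
    "Cd": {"bottom_ash": 0.12, "fly_ash": 0.87, "flue_gas": 0.01},
    "Cl": {"bottom_ash": 0.30, "fly_ash": 0.55, "flue_gas": 0.15},
}

def substance_flows(total_t, content, tc):
    """Follow each substance through the process; outputs in kg/yr."""
    flows = {}
    for s, frac in content.items():
        s_in = total_t * 1000 * frac           # kg/yr of substance entering
        flows[s] = {out: s_in * f for out, f in tc[s].items()}
    return flows

flows = substance_flows(waste_input_t, substance_content, transfer)
```

The mass-balance principle shared by MFA and SFA means the substance outputs must sum to the input; comparing, e.g., the Cd routed to fly ash across scenarios is the kind of substance-level evidence the paper uses for decision support.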

  12. Statistical Approaches for Spatiotemporal Prediction of Low Flows

    Science.gov (United States)

    Fangmann, A.; Haberlandt, U.

    2017-12-01

    An adequate assessment of regional climate change impacts on streamflow requires the integration of various sources of information and modeling approaches. This study proposes simple statistical tools for inclusion in model ensembles, which are fast and straightforward in their application, yet able to yield accurate streamflow predictions in time and space. Target variables for all approaches are annual low flow indices derived from a data set of 51 records of average daily discharge for northwestern Germany. The models require input of climatic data in the form of meteorological drought indices, derived from observed daily climatic variables, averaged over the streamflow gauges' catchment areas. Four different modeling approaches are analyzed. The basis for all of them is a set of multiple linear regression models that estimate low flows as a function of meteorological indices and/or physiographic and climatic catchment descriptors. In the first method, individual regression models are fitted at each station, predicting annual low flow values from a set of annual meteorological indices, and are subsequently regionalized using a set of catchment characteristics. The second method combines temporal and spatial prediction within a single panel data regression model, allowing estimation of annual low flow values from input of both annual meteorological indices and catchment descriptors. The third and fourth methods represent non-stationary low flow frequency analyses and require fitting of regional distribution functions. Method three involves spatiotemporal prediction of an index value; method four, estimation of L-moments that adapt the regional frequency distribution to at-site conditions. The results show that method two outperforms successive prediction in time and space. Method three also shows high performance in the near-future period, but since it relies on a stationary distribution, its application for prediction of far-future changes may be
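
    The regression basis shared by the four approaches can be sketched on synthetic data: an annual low-flow index is regressed on meteorological drought indices via ordinary least squares. The index names, coefficients and noise level below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic example: 40 years of two meteorological drought indices for one
# catchment (e.g. a precipitation- and an evapotranspiration-based index).
n_years = 40
spi = rng.normal(0, 1, n_years)                  # precipitation index
spei = 0.6 * spi + rng.normal(0, 0.5, n_years)   # correlated second index

# "True" station-level relation used to generate annual low-flow indices.
low_flow = 0.30 + 0.15 * spi + 0.08 * spei + rng.normal(0, 0.01, n_years)

# Multiple linear regression: low flow as a function of the drought indices.
X = np.column_stack([np.ones(n_years), spi, spei])
coef, *_ = np.linalg.lstsq(X, low_flow, rcond=None)
pred = X @ coef
```

The first method in the paper fits such a model per station and then regionalizes the coefficients with catchment descriptors; the panel-data variant pools stations and years into one regression.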

  13. The Ethics of Exploitation

    Directory of Open Access Journals (Sweden)

    Paul McLaughlin

    2008-11-01

    Full Text Available Philosophical inquiry into exploitation has two major deficiencies to date: it assumes that exploitation is wrong by definition; and it pays too much attention to the Marxian account of exploitation. Two senses of exploitation should be distinguished: the ‘moral’ or pejorative sense and the ‘non-moral’ or ‘non-prejudicial’ sense. By demonstrating the conceptual inadequacy of exploitation as defined in the first sense, and by defining exploitation adequately in the latter sense, we seek to demonstrate the moral complexity of exploitation. We contend, moreover, that moral evaluation of exploitation is only possible once we abandon a strictly Marxian framework and attempt, in the long run, to develop an integral ethic along Godwinian lines.

  14. Combining sap flow and eddy covariance approaches to derive stomatal and non-stomatal O3 fluxes in a forest stand

    International Nuclear Information System (INIS)

    Nunn, A.J.; Cieslik, S.; Metzger, U.; Wieser, G.; Matyssek, R.

    2010-01-01

    Stomatal O3 fluxes to a mixed beech/spruce stand (Fagus sylvatica/Picea abies) in Central Europe were determined using two different approaches. The sap flow technique yielded the tree-level transpiration, whereas the eddy covariance method provided the stand-level evapotranspiration. Both data sets were then converted into stomatal ozone fluxes, exemplifying this novel concept for July 2007. The sap flow-based stomatal O3 flux was 33% of the total O3 flux, whereas derivation from evapotranspiration rates in combination with the Penman-Monteith algorithm amounted to 47%. In addition to this proportional difference, the sap flow-based assessment yielded lower levels of stomatal O3 flux and reflected stomatal regulation rather than O3 exposure, paralleling the daily courses of canopy conductance for water vapor and of the eddy covariance-based total stand-level O3 flux. The demonstrated combination of sap flow and eddy covariance approaches supports the development of O3 risk assessment in forests from O3 exposure towards flux-based concepts. - Combined tree sap flow and eddy covariance-based methodologies yield a stomatal O3 flux of 33% of the total stand flux.

  15. Artificial intelligence based models for stream-flow forecasting: 2000-2015

    Science.gov (United States)

    Yaseen, Zaher Mundher; El-shafie, Ahmed; Jaafar, Othman; Afan, Haitham Abdulmohsin; Sayl, Khamis Naba

    2015-11-01

    The use of Artificial Intelligence (AI) has increased since the middle of the 20th century, as seen in its application to a wide range of engineering and science problems. The last two decades, for example, have seen a dramatic increase in the development and application of various types of AI approaches for stream-flow forecasting. Generally speaking, AI has exhibited significant progress in forecasting and modeling non-linear hydrological applications and in capturing the noise complexity in the dataset. This paper explores the state-of-the-art application of AI in stream-flow forecasting, focusing on defining data-driven AI methods, the advantages of complementary models, as well as the literature and their possible future application in modeling and forecasting stream-flow. The review also identifies the major challenges and opportunities for prospective research, including a new scheme for modeling the inflow, a novel method for preprocessing time-series frequency based on Fast Orthogonal Search (FOS) techniques, and Swarm Intelligence (SI) as an optimization approach.

  16. JPEG2000-coded image error concealment exploiting convex sets projections.

    Science.gov (United States)

    Atzori, Luigi; Ginesu, Giaime; Raccis, Alessio

    2005-04-01

    Transmission errors in JPEG2000 can be grouped into three main classes, depending on the affected area: LL, high frequencies at the lower decomposition levels, and high frequencies at the higher decomposition levels. The first type of error is the most annoying but can be concealed by exploiting the spatial correlation of the signal, as in a number of techniques proposed in the past; the second is less annoying but more difficult to address; the last is often imperceptible. In this paper, we address the problem of concealing the second class of errors when high bit-planes are damaged by proposing a new approach based on the theory of projections onto convex sets. Accordingly, the error effects are masked by iteratively applying two procedures: low-pass (LP) filtering in the spatial domain and restoration of the uncorrupted wavelet coefficients in the transform domain. It was observed that uniform LP filtering introduced undesired side effects that offset the advantages. This problem has been overcome by applying an adaptive solution, which exploits an edge map to choose the optimal filter mask size. Simulation results demonstrate the efficiency of the proposed approach.
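
    The alternating-projection idea can be illustrated with a 1-D toy analogue (the paper operates on wavelet coefficients with an adaptive, edge-driven filter mask; the fixed moving-average filter and sample grid here are simplifying assumptions): project onto a smoothness set by low-pass filtering, then onto the data-consistency set by restoring the uncorrupted samples.

```python
def conceal_pocs(length, known, iters=200):
    """Iteratively project onto two convex sets: a smoothness set
    (3-tap moving-average low-pass filter) and a data-consistency set
    (restore the uncorrupted samples).  `known` maps index -> value."""
    x = [known.get(i, 0.0) for i in range(length)]  # damaged samples start at 0
    for _ in range(iters):
        # Projection 1: smoothness (low-pass filtering)
        x = [(x[max(i - 1, 0)] + x[i] + x[min(i + 1, length - 1)]) / 3.0
             for i in range(length)]
        # Projection 2: data consistency (restore known samples)
        for i, v in known.items():
            x[i] = v
    return x
```

    With both endpoints known, the damaged interior converges towards a value consistent with the surrounding data, mirroring how the masked wavelet coefficients are filled in.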

  17. Thirteen years of exploitation with constant oilfield pressure

    Energy Technology Data Exchange (ETDEWEB)

    Dontov-Danu, Gh

    1966-12-01

    The paper describes restoring and maintaining reservoir pressure by gas injection in two blocks of the Dacian stratum at Buscani. At the beginning of gas injection, the wells produced by gas lift and the crude oil flows were markedly decreasing. After about 6 months of injection the reservoir pressure had been restored and the wells flowed. This system allows constant crude oil flows for long periods. The oilfield recovery factor until December 31, 1965, is 51%, i.e., 150% higher than expected in the case of exploitation without gas injection. This increase represents the extra crude oil and gasoline production obtained as a result of the application of the reservoir pressure maintenance process. The average consumption of working agent has been 382 cu m of gas per ton of additionally extracted crude oil.

  18. Parallel exploitation of a spatial-spectral classification approach for hyperspectral images on RVC-CAL

    Science.gov (United States)

    Lazcano, R.; Madroñal, D.; Fabelo, H.; Ortega, S.; Salvador, R.; Callicó, G. M.; Juárez, E.; Sanz, C.

    2017-10-01

    Hyperspectral Imaging (HI) assembles high resolution spectral information from hundreds of narrow bands across the electromagnetic spectrum, thus generating 3D data cubes in which each spatial pixel gathers the spectral information of its reflectance. As a result, each image is composed of large volumes of data, which turns its processing into a challenge, as performance requirements have been continuously tightened. For instance, new HI applications demand real-time responses. Hence, parallel processing becomes a necessity to achieve this requirement, so the intrinsic parallelism of the algorithms must be exploited. In this paper, a spatial-spectral classification approach has been implemented using a dataflow language known as RVC-CAL. This language represents a system as a set of functional units, and its main advantage is that it simplifies the parallelization process by mapping the different blocks over different processing units. The spatial-spectral classification approach aims at refining the classification results previously obtained by using a K-Nearest Neighbors (KNN) filtering process, in which both the pixel spectral value and the spatial coordinates are considered. To do so, KNN needs two inputs: a one-band representation of the hyperspectral image and the classification results provided by a pixel-wise classifier. Thus, the spatial-spectral classification algorithm is divided into three different stages: a Principal Component Analysis (PCA) algorithm for computing the one-band representation of the image, a Support Vector Machine (SVM) classifier, and the KNN-based filtering algorithm. The parallelization of these algorithms shows promising results in terms of computational time, as mapping them over different cores yields a speedup of 2.69x when using 3 cores. Consequently, experimental results demonstrate that real-time processing of hyperspectral images is achievable.
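
    The KNN filtering stage can be sketched as follows: each pixel is relabeled by majority vote of its K nearest neighbours in a joint space of spatial coordinates and the one-band (PCA) value. The brute-force search and the spectral weight `w` are illustrative simplifications, not the paper's implementation.

```python
from collections import Counter

def knn_filter(oneband, labels, k=3, w=1.0):
    """Refine a pixel-wise classification map by K-nearest-neighbour
    filtering in a joint feature space of spatial coordinates and the
    one-band (e.g. first-PCA-component) value.  `w` weights the spectral
    term (an illustrative parameter)."""
    h, wid = len(oneband), len(oneband[0])
    out = [row[:] for row in labels]
    pix = [(r, c) for r in range(h) for c in range(wid)]
    for r, c in pix:
        # Sort all pixels by combined spatial + spectral squared distance
        d = sorted(pix, key=lambda p: (p[0] - r) ** 2 + (p[1] - c) ** 2
                   + w * (oneband[p[0]][p[1]] - oneband[r][c]) ** 2)
        votes = Counter(labels[p[0]][p[1]] for p in d[:k])
        out[r][c] = votes.most_common(1)[0][0]
    return out
```

    On a homogeneous region, an isolated mislabeled pixel is outvoted by its neighbours, which is exactly the spatial regularization the refinement stage provides.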

  19. Competing Discourses about Youth Sexual Exploitation in Canadian News Media.

    Science.gov (United States)

    Saewyc, Elizabeth M; Miller, Bonnie B; Rivers, Robert; Matthews, Jennifer; Hilario, Carla; Hirakata, Pam

    2013-10-01

    Media holds the power to create, maintain, or break down stigmatizing attitudes, which affect policies, funding, and services. To understand how Canadian news media depicts the commercial sexual exploitation of children and youth, we examined 835 Canadian newspaper articles from 1989-2008 using a mixed methods critical discourse analysis approach, comparing representations to existing research about sexually exploited youth. Despite research evidence that equal rates of boys and girls experience exploitation, Canadian news media depicted exploited youth predominantly as heterosexual girls, and described them alternately as victims or workers in a trade, often both in the same story. News media mentioned exploiters far less often than victims, and portrayed them almost exclusively as male, most often called 'customers' or 'consumers,' and occasionally 'predators'; in contrast, research has documented the majority of sexually exploited boys report female exploiters. Few news stories over the past two decades portrayed the diversity of victims, perpetrators, and venues of exploitation reported in research. The focus on victims but not exploiters helps perpetuate stereotypes of sexual exploitation as business or a 'victimless crime,' maintains the status quo, and blurs responsibility for protecting youth under the UN Convention on the Rights of the Child. Health care providers and researchers can be advocates for accuracy in media coverage about sexual exploitation; news reporters and editors should focus on exploiters more than victims, draw on existing research evidence to avoid perpetuating stereotypes, and use accurate terms, such as commercial sexual exploitation, rather than terms related to business or trade.

  20. A probabilistic approach to quantifying spatial patterns of flow regimes and network-scale connectivity

    Science.gov (United States)

    Garbin, Silvia; Alessi Celegon, Elisa; Fanton, Pietro; Botter, Gianluca

    2017-04-01

    The temporal variability of river flow regime is a key feature structuring and controlling fluvial ecological communities and ecosystem processes. In particular, streamflow variability induced by climate/landscape heterogeneities or other anthropogenic factors significantly affects the connectivity between streams with notable implication for river fragmentation. Hydrologic connectivity is a fundamental property that guarantees species persistence and ecosystem integrity in riverine systems. In riverine landscapes, most ecological transitions are flow-dependent and the structure of flow regimes may affect ecological functions of endemic biota (i.e., fish spawning or grazing of invertebrate species). Therefore, minimum flow thresholds must be guaranteed to support specific ecosystem services, like fish migration, aquatic biodiversity and habitat suitability. In this contribution, we present a probabilistic approach aiming at a spatially-explicit, quantitative assessment of hydrologic connectivity at the network-scale as derived from river flow variability. Dynamics of daily streamflows are estimated based on catchment-scale climatic and morphological features, integrating a stochastic, physically based approach that accounts for the stochasticity of rainfall with a water balance model and a geomorphic recession flow model. The non-exceedance probability of ecologically meaningful flow thresholds is used to evaluate the fragmentation of individual stream reaches, and the ensuing network-scale connectivity metrics. A multi-dimensional Poisson Process for the stochastic generation of rainfall is used to evaluate the impact of climate signature on reach-scale and catchment-scale connectivity. The analysis shows that streamflow patterns and network-scale connectivity are influenced by the topology of the river network and the spatial variability of climatic properties (rainfall, evapotranspiration). 
The framework offers a robust basis for the prediction of the impact of

  1. Integrated approach to model decomposed flow hydrograph using artificial neural network and conceptual techniques

    Science.gov (United States)

    Jain, Ashu; Srinivasulu, Sanaga

    2006-02-01

    This paper presents the findings of a study aimed at decomposing a flow hydrograph into different segments based on physical concepts in a catchment, and modelling different segments using different techniques, viz. conceptual techniques and artificial neural networks (ANNs). An integrated modelling framework is proposed, capable of modelling infiltration, base flow, evapotranspiration, soil moisture accounting, and certain segments of the decomposed flow hydrograph using conceptual techniques, and the complex, non-linear, and dynamic rainfall-runoff process using the ANN technique. Specifically, five different multi-layer perceptron (MLP) and two self-organizing map (SOM) models have been developed. The rainfall and streamflow data derived from the Kentucky River catchment were employed to test the proposed methodology and develop all the models. The performance of all the models was evaluated using seven different standard statistical measures. The results obtained in this study indicate that (a) the rainfall-runoff relationship in a large catchment consists of at least three or four different mappings corresponding to different dynamics of the underlying physical processes, (b) an integrated approach that models the different segments of the decomposed flow hydrograph using different techniques is better than a single ANN in modelling the complex, dynamic, non-linear, and fragmented rainfall-runoff process, (c) a simple model based on the concept of flow recession is better than an ANN at modelling the falling limb of a flow hydrograph, and (d) decomposing a flow hydrograph into different segments corresponding to different dynamics based on physical concepts is better than the soft decomposition performed using SOM.
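
    Finding (c) rests on the classical recession model Q(t) = Q0·exp(-kt) for the falling limb. A minimal log-linear least-squares fit of that model (a generic sketch, not necessarily the study's exact formulation) looks like:

```python
import math

def fit_recession(flows):
    """Fit Q(t) = Q0 * exp(-k t) to a falling limb by least squares on
    log-flows; returns (Q0, k)."""
    t = list(range(len(flows)))
    y = [math.log(q) for q in flows]          # linear in t: ln Q = ln Q0 - k t
    n = len(t)
    tm, ym = sum(t) / n, sum(y) / n
    k = -sum((ti - tm) * (yi - ym) for ti, yi in zip(t, y)) \
        / sum((ti - tm) ** 2 for ti in t)     # slope = -k
    q0 = math.exp(ym + k * tm)                # intercept = ln Q0
    return q0, k
```

    Given a clean exponential falling limb, the fit recovers Q0 and k directly; in practice one would fit each recession segment of the decomposed hydrograph separately.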

  2. Hybrid flux splitting schemes for numerical resolution of two-phase flows

    Energy Technology Data Exchange (ETDEWEB)

    Flaatten, Tore

    2003-07-01

    This thesis deals with the construction of numerical schemes for approximating solutions to a hyperbolic two-phase flow model. Numerical schemes for hyperbolic models are commonly divided into two main classes: Flux Vector Splitting (FVS) schemes, which are based on scalar computations, and Flux Difference Splitting (FDS) schemes, which are based on matrix computations. FVS schemes are more efficient than FDS schemes, but FDS schemes are more accurate. The canonical FDS schemes are the approximate Riemann solvers, which are based on a local decomposition of the system into its full wave structure. In this thesis the mathematical structure of the model is exploited to construct a class of hybrid FVS/FDS schemes, denoted Mixture Flux (MF) schemes. This approach is based on a splitting of the system into two components associated with the pressure and volume fraction variables respectively, and builds upon hybrid FVS/FDS schemes previously developed for one-phase flow models. Through analysis and numerical experiments it is demonstrated that the MF approach provides several desirable features, including (1) improved efficiency compared to standard approximate Riemann solvers, (2) robustness under stiff conditions, and (3) accuracy on linear and nonlinear phenomena. In particular it is demonstrated that the framework allows for an efficient weakly implicit implementation, focusing on an accurate resolution of slow transients relevant to the petroleum industry. (author)

  3. Stochastic approaches to inflation model building

    International Nuclear Information System (INIS)

    Ramirez, Erandy; Liddle, Andrew R.

    2005-01-01

    While inflation gives an appealing explanation of observed cosmological data, there are a wide range of different inflation models, providing differing predictions for the initial perturbations. Typically models are motivated either by fundamental physics considerations or by simplicity. An alternative is to generate large numbers of models via a random generation process, such as the flow equations approach. The flow equations approach is known to predict a definite structure to the observational predictions. In this paper, we first demonstrate a more efficient implementation of the flow equations exploiting an analytic solution found by Liddle (2003). We then consider alternative stochastic methods of generating large numbers of inflation models, with the aim of testing whether the structures generated by the flow equations are robust. We find that while typically there remains some concentration of points in the observable plane under the different methods, there is significant variation in the predictions amongst the methods considered.

  4. Flow cytometry approach for studying the interaction between ...

    African Journals Online (AJOL)

    Flow cytometry approach for studying the interaction between Bacillus mojavensis and Alternaria alternata. Asma Milet, Noreddine Kacem Chaouche, Laid Dehimat, Asma Ait Kaki, Mounira Kara Ali, Philippe Thonart ...

  5. A Deep Learning based Approach to Reduced Order Modeling of Fluids using LSTM Neural Networks

    Science.gov (United States)

    Mohan, Arvind; Gaitonde, Datta

    2017-11-01

    Reduced Order Models (ROMs) can be used as surrogates for prohibitively expensive simulations to model flow behavior over long time periods. ROM is predicated on extracting dominant spatio-temporal features of the flow from CFD or experimental datasets. We explore ROM development with a deep learning approach, which comprises learning functional relationships between different variables in large datasets for predictive modeling. Although deep learning and related artificial intelligence based predictive modeling techniques have shown varied success in other fields, such approaches are in their initial stages of application to fluid dynamics. Here, we explore the application of the Long Short Term Memory (LSTM) neural network to sequential data, specifically to predict the time coefficients of Proper Orthogonal Decomposition (POD) modes of the flow for future timesteps, by training it on data at previous timesteps. The approach is demonstrated by constructing ROMs of several canonical flows. Additionally, we show that statistical estimates of stationarity in the training data can indicate a priori how amenable a given flow-field is to this approach. Finally, the potential and limitations of deep learning based ROM approaches will be elucidated and further developments discussed.
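
    At the core of the approach sits the LSTM cell; its gate equations, which let the network carry long-range temporal structure such as the evolution of POD time coefficients, can be written out directly. The scalar form and the weights below are illustrative assumptions, not trained values.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h, c, W):
    """One LSTM cell step for scalar input/state.  W maps gate name
    ('i', 'f', 'o', 'g') -> (w_x, w_h, bias)."""
    i = sigmoid(W['i'][0] * x + W['i'][1] * h + W['i'][2])    # input gate
    f = sigmoid(W['f'][0] * x + W['f'][1] * h + W['f'][2])    # forget gate
    o = sigmoid(W['o'][0] * x + W['o'][1] * h + W['o'][2])    # output gate
    g = math.tanh(W['g'][0] * x + W['g'][1] * h + W['g'][2])  # candidate state
    c_new = f * c + i * g          # cell state carries long-range memory
    h_new = o * math.tanh(c_new)   # hidden state is the step's output
    return h_new, c_new
```

    In the paper's setting, a sequence of past POD coefficients would be fed through such cells (stacked and trained), and the final hidden state mapped to the coefficients at the next timestep.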

  6. Software compensation in particle flow reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Tran, Huong Lan; Krueger, Katja; Sefkow, Felix [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Green, Steven; Marshall, John; Thomson, Mark [Cavendish Laboratory, Cambridge (United Kingdom); Simon, Frank [Max-Planck-Institut fuer Physik, Muenchen (Germany)

    2017-10-15

    The particle flow approach to calorimetry benefits from highly granular calorimeters and sophisticated software algorithms in order to reconstruct and identify individual particles in complex event topologies. The high spatial granularity, together with analogue energy information, can be further exploited in software compensation. In this approach, the local energy density is used to discriminate electromagnetic and purely hadronic sub-showers within hadron showers in the detector to improve the energy resolution for single particles by correcting for the intrinsic non-compensation of the calorimeter system. This improvement in the single particle energy resolution also results in a better overall jet energy resolution by improving the energy measurement of identified neutral hadrons and improvements in the pattern recognition stage by a more accurate matching of calorimeter energies to tracker measurements. This paper describes the software compensation technique and its implementation in particle flow reconstruction with the Pandora Particle Flow Algorithm (PandoraPFA). The impact of software compensation on the choice of optimal transverse granularity for the analogue hadronic calorimeter option of the International Large Detector (ILD) concept is also discussed.

  9. Managing the exploitation life of the mining machinery for an unlimited duration of time

    Directory of Open Access Journals (Sweden)

    Vujić Slobodan

    2011-01-01

    Full Text Available The problem of determining the optimum exploitation life of machinery, namely, the optimum time for machinery and equipment replacement, represents a complex and highly responsible engineering task. Taking into consideration the situation prevailing at coal pit mines in Serbia, the tasks of this rank are very complex and difficult. To make a decision on the replacement of capital equipment and machinery, such as bucket wheel excavators within the mentioned systems, implies a management task of utmost responsibility. It requires high professional and analytical knowledge as well as reliable arguments, based on a multidisciplinary professional approach. In this paper, the authors present their views on the problem of establishing the optimum exploitation life of bucket wheel excavators, offering an algorithm, based on dynamic programming, as a solution.
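
    The paper's algorithm is not reproduced here, but the dynamic-programming core of a replacement decision can be sketched: in each year, either keep the machine at an age-dependent operating cost or replace it at purchase price minus salvage value. The cost tables below are hypothetical.

```python
def optimal_replacement(horizon, price, salvage, opcost):
    """Minimal dynamic-programming sketch of the machinery-replacement
    problem.  `salvage(age)` and `opcost(age)` are caller-supplied cost
    models; returns the minimum total cost over the planning horizon."""
    from functools import lru_cache

    @lru_cache(None)
    def best(year, age):
        if year == horizon:
            return -salvage(age)                      # sell at end of horizon
        keep = opcost(age) + best(year + 1, age + 1)  # run the old machine
        replace = price - salvage(age) + opcost(0) + best(year + 1, 1)
        return min(keep, replace)

    return best(0, 0)
```

    Memoizing over (year, age) keeps the state space small, which is what makes the dynamic-programming formulation practical for long planning horizons.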

  10. Rethinking exploitation: a process-centered account.

    Science.gov (United States)

    Jansen, Lynn A; Wall, Steven

    2013-12-01

    Exploitation has become an important topic in recent discussions of biomedical and research ethics. This is due in no small measure to the influence of Alan Wertheimer's path-breaking work on the subject. This paper presents some objections to Wertheimer's account of the concept. The objections attempt to show that his account places too much emphasis on outcome-based considerations and too little on process-based considerations. Building on these objections, the paper develops an alternative process-centered account of the concept. This alternative account of exploitation takes as its point of departure the broadly Kantian notion that it is wrong to use another as an instrument for the advancement of one's own ends. It sharpens this slippery notion and adds a number of refinements to it. The paper concludes by arguing that process-centered accounts of exploitation better illuminate the ethical challenges posed by research on human subjects than outcome-centered accounts.

  11. AWSCS-A System to Evaluate Different Approaches for the Automatic Composition and Execution of Web Services Flows.

    Science.gov (United States)

    Tardiole Kuehne, Bruno; Estrella, Julio Cezar; Nunes, Luiz Henrique; Martins de Oliveira, Edvard; Hideo Nakamura, Luis; Gomes Ferreira, Carlos Henrique; Carlucci Santana, Regina Helena; Reiff-Marganiec, Stephan; Santana, Marcos José

    2015-01-01

    This paper proposes a system named AWSCS (Automatic Web Service Composition System) to evaluate different approaches for automatic composition of Web services, based on QoS parameters that are measured at execution time. The AWSCS is a system to implement different approaches for automatic composition of Web services and also to execute the resulting flows from these approaches. Aiming at demonstrating the results of this paper, a scenario was developed, where empirical flows were built to demonstrate the operation of AWSCS, since algorithms for automatic composition are not readily available to test. The results allow us to study the behaviour of running composite Web services, when flows with the same functionality but different problem-solving strategies were compared. Furthermore, we observed that the load applied to the running system, as well as the type of load submitted, is an important factor in determining which approach to Web service composition achieves the best performance in production.

  13. Innovative model-based flow rate optimization for vanadium redox flow batteries

    Science.gov (United States)

    König, S.; Suriyah, M. R.; Leibfried, T.

    2016-11-01

    In this paper, an innovative approach is presented to optimize the flow rate of a 6-kW vanadium redox flow battery with realistic stack dimensions. Efficiency is derived using a multi-physics battery model and a newly proposed instantaneous efficiency determination technique. An optimization algorithm is applied to identify optimal flow rates for operation points defined by state-of-charge (SoC) and current. The proposed method is evaluated against the conventional approach of applying Faraday's first law of electrolysis, scaled by the so-called flow factor. To make a fair comparison, the flow factor is also optimized by simulating cycles with different charging/discharging currents. The results show that the proposed method increases efficiency by up to 1.2 percentage points and discharge capacity by up to 1.0 kWh, or 5.4%. Detailed loss analysis is carried out for the cycles with maximum and minimum charging/discharging currents. It is shown that the proposed method minimizes the sum of losses caused by concentration over-potential, pumping and diffusion. Furthermore, for the deployed Nafion 115 membrane, it is observed that diffusion losses increase with stack SoC. Therefore, a higher flow rate during charging than during discharging is reasonable, as it decreases stack SoC and lowers diffusion losses.
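
    The conventional baseline that the paper benchmarks against can be written down compactly: Faraday's first law gives the minimum stoichiometric electrolyte flow for the one-electron vanadium reactions (z = 1), which is then scaled by a constant flow factor. The parameter values in the sketch are illustrative, not the paper's.

```python
F = 96485.0  # Faraday constant, C/mol

def flow_rate(i_stack, n_cells, c_vanadium, soc, flow_factor, charging=True):
    """Flow-factor baseline: minimum stoichiometric electrolyte flow from
    Faraday's first law, scaled by a constant flow factor.
    Units: A, mol/m^3; result in m^3/s."""
    # Fraction of the vanadium concentration still available to react
    frac = (1.0 - soc) if charging else soc
    q_min = n_cells * i_stack / (F * c_vanadium * frac)
    return flow_factor * q_min
```

    The sketch also shows why a fixed flow factor is a blunt instrument: the required flow diverges as the reactant is depleted near the ends of charge and discharge, which is precisely where the paper's SoC- and current-dependent optimization pays off.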

  14. An Example-Based Multi-Atlas Approach to Automatic Labeling of White Matter Tracts.

    Science.gov (United States)

    Yoo, Sang Wook; Guevara, Pamela; Jeong, Yong; Yoo, Kwangsun; Shin, Joseph S; Mangin, Jean-Francois; Seong, Joon-Kyung

    2015-01-01

    We present an example-based multi-atlas approach for classifying white matter (WM) tracts into anatomic bundles. Our approach exploits expert-provided example data to automatically classify the WM tracts of a subject. Multiple atlases are constructed to model the example data from multiple subjects in order to reflect the individual variability of bundle shapes and trajectories over subjects. For each example subject, an atlas is maintained to allow the example data of a subject to be added or deleted flexibly. A voting scheme is proposed to facilitate the multi-atlas exploitation of example data. For conceptual simplicity, we adopt the same metrics in both example data construction and WM tract labeling. Due to the huge number of WM tracts in a subject, it is time-consuming to label each WM tract individually. Thus, the WM tracts are grouped according to their shape similarity, and WM tracts within each group are labeled simultaneously. To further enhance the computational efficiency, we implemented our approach on the graphics processing unit (GPU). Through nested cross-validation we demonstrated that our approach yielded high classification performance. The average sensitivities for bundles in the left and right hemispheres were 89.5% and 91.0%, respectively, and their average false discovery rates were 14.9% and 14.2%, respectively.
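
    The voting scheme can be sketched as follows: each example atlas proposes the label of its closest example tract, and the majority wins. The distance callback and tie-breaking below are simplified assumptions, not the paper's exact metric.

```python
from collections import Counter

def label_tract(tract_feature, atlases, distance):
    """Label one WM tract by multi-atlas voting.  Each atlas is a list of
    (feature, label) examples; `distance` compares two features."""
    votes = []
    for atlas in atlases:
        # Each atlas votes with the label of its nearest example tract
        feat, lab = min(atlas, key=lambda e: distance(e[0], tract_feature))
        votes.append(lab)
    return Counter(votes).most_common(1)[0][0]
```

    Because each atlas is kept separate, example data for one subject can be added or removed without retraining anything, which is the flexibility the paper emphasizes.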

  15. Dynamic flow-through approaches for metal fractionation in environmentally relevant solid samples

    DEFF Research Database (Denmark)

    Miró, Manuel; Hansen, Elo Harald; Chomchoei, Roongrat

    2005-01-01

...generations of flow-injection analysis. Special attention is also paid to a novel, robust, non-invasive approach for on-site continuous sampling of soil solutions, capitalizing on flow-through microdialysis, which presents itself as an appealing complementary approach to the conventional lysimeter experiments...

  16. A Finite-Volume approach for compressible single- and two-phase flows in flexible pipelines with fluid-structure interaction

    Science.gov (United States)

    Daude, F.; Galon, P.

    2018-06-01

A Finite-Volume scheme for the numerical computation of compressible single- and two-phase flows in flexible pipelines is proposed based on an approximate Godunov-type approach. The spatial discretization is obtained using the HLLC scheme. In addition, the numerical treatment of abrupt changes in cross-sectional area and of networks with several pipelines connected at junctions is also considered. The proposed approach is based on the integral form of the governing equations, making it possible to tackle general equations of state. A coupled approach is considered for resolving the fluid-structure interaction of compressible fluid flowing in flexible pipes. The structural problem is solved using Euler-Bernoulli beam finite elements. The present Finite-Volume method is applied to an ideal gas and to two-phase steam-water flows based on the Homogeneous Equilibrium Model (HEM) in conjunction with a tabulated equation of state, in order to demonstrate its ability to tackle general equations of state. The extensive application of the scheme to shock tube and other transient flow problems demonstrates its capability to resolve such problems accurately and robustly. Finally, the proposed 1-D fluid-structure interaction model appears to be computationally efficient.

  17. Reliability-based assessment of flow assurance of hot waxy crude pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Jinjun, Zhang; Wenke, Zhang; Jianlin, Ding; Bo, Yu [China University of Petroleum - Beijing (CUPB), Beijing (China)

    2009-07-01

Waxy crude is characterized by its high pour point. Pipeline blockage may occur after prolonged shutdown of a pipeline due to crude oil gelation. Another flow assurance problem is unstable operation at a flow rate below the lowest allowable operating flow rate, which depends on the heat transfer of the pipeline and the viscosity vs. temperature relation of the crude pumped. In addition, for pipelines with a thick wax deposition layer, massive depletion of wax deposits in some cases, such as pipeline restart at high expelling pressure, may also result in blockage of the pipeline, and the pig may be jammed during pigging as a result of thick wax deposition. Conventionally, assessment of these risks has been made using the deterministic approach. However, many related physical quantities are subject to uncertainty and contribute to the reliability of flow assurance. Therefore, the probabilistic approach is suggested and a framework for reliability-based assessment of flow assurance of waxy crude pipelines is proposed in this paper. Discussions are also made on the limit state functions and target safety level. In future studies, development of an efficient and robust stochastic-numerical method is crucial. (author)

  18. Security option file - Exploitation (DOS-Expl)

    International Nuclear Information System (INIS)

    2016-01-01

This document presents the functions performed by Cigeo during its exploitation phase, its main technical and security options which are envisaged with respect to different types of internal or external risks, and a first assessment of its impact on mankind and on the environment during its exploitation in normal operation as well as in incidental or accidental situations. A first volume addresses security principles, approach and management in relationship with the legal and regulatory framework. The second volume presents input data related to waste parcels and used for the installation sizing and operation, the main site characteristics, the main technical options regarding structures and equipment, and the main options regarding exploitation (parcel management, organisational and human aspects, and effluent management). The third volume describes how parcels are processed from their arrival to their emplacement in storage compartments, an inventory of internal and external risks, and a first assessment of consequences of scenarios on mankind and on the environment. The fourth volume presents options and operations which are envisaged regarding Cigeo closure, and an inventory of associated risks.

  19. Target Localization with a Single Antenna via Directional Multipath Exploitation

    Directory of Open Access Journals (Sweden)

    Ali H. Muqaibel

    2015-01-01

Target localization in urban sensing can benefit from angle dependency of the pulse shape at a radar receiver antenna. We propose a localization approach that utilizes the embedded directivity in ultra-wideband (UWB) antennas to estimate target positions. A single radar unit sensing operation of indoor targets surrounded by interior walls is considered, where interior wall multipaths are exploited to provide target cross-range. This exploitation assumes resolvability of the multipath components, which is made possible by the virtue of using UWB radar signals. The proposed approach is most attractive when only few multipaths are detectable due to propagation obstructions or owing to low signal-to-noise ratios. Both simulated and experimental data are used to demonstrate the effectiveness of the proposed approach.

  20. Herbivory eliminates fitness costs of mutualism exploiters.

    Science.gov (United States)

    Simonsen, Anna K; Stinchcombe, John R

    2014-04-01

    A common empirical observation in mutualistic interactions is the persistence of variation in partner quality and, in particular, the persistence of exploitative phenotypes. For mutualisms between hosts and symbionts, most mutualism theory assumes that exploiters always impose fitness costs on their host. We exposed legume hosts to mutualistic (nitrogen-fixing) and exploitative (non-nitrogen-fixing) symbiotic rhizobia in field conditions, and manipulated the presence or absence of insect herbivory to determine if the costly fitness effects of exploitative rhizobia are context-dependent. Exploitative rhizobia predictably reduced host fitness when herbivores were excluded. However, insects caused greater damage on hosts associating with mutualistic rhizobia, as a consequence of feeding preferences related to leaf nitrogen content, resulting in the elimination of fitness costs imposed on hosts by exploitative rhizobia. Our experiment shows that herbivory is potentially an important factor in influencing the evolutionary dynamic between legumes and rhizobia. Partner choice and host sanctioning are theoretically predicted to stabilize mutualisms by reducing the frequency of exploitative symbionts. We argue that herbivore pressure may actually weaken selection on choice and sanction mechanisms, thus providing one explanation of why host-based discrimination mechanisms may not be completely effective in eliminating nonbeneficial partners. © 2014 The Authors. New Phytologist © 2014 New Phytologist Trust.

  1. Simulated population responses of common carp to commercial exploitation

    Energy Technology Data Exchange (ETDEWEB)

    Weber, Michael J.; Hennen, Matthew J.; Brown, Michael L.

    2011-12-01

Common carp Cyprinus carpio is a widespread invasive species that can become highly abundant and impose deleterious ecosystem effects. Thus, aquatic resource managers are interested in controlling common carp populations. Control of invasive common carp populations is difficult, due in part to the inherent uncertainty of how populations respond to exploitation. To understand how common carp populations respond to exploitation, we evaluated common carp population dynamics (recruitment, growth, and mortality) in three natural lakes in eastern South Dakota. Common carp exhibited similar population dynamics across these three systems, characterized by consistent recruitment (ages 3 to 15 years present), fast growth (K = 0.37 to 0.59), and low mortality (A = 1 to 7%). We then modeled the effects of commercial exploitation on size structure, abundance, and egg production to determine its utility as a management tool to control populations. All three populations responded similarly to exploitation simulations with a 575-mm length restriction, representing commercial gear selectivity. Simulated common carp size structure modestly declined (9 to 37%) in all simulations. Abundance of common carp declined dramatically (28 to 56%) at low levels of exploitation (0 to 20%), but exploitation >40% had little additive effect and populations were only reduced by 49 to 79% despite high exploitation (>90%). Maximum lifetime egg production was reduced by 77 to 89% at a moderate level of exploitation (40%), indicating the potential for recruitment overfishing. Exploitation further reduced common carp size structure, abundance, and egg production when simulations were not size selective. Our results provide insights into how common carp populations may respond to exploitation. Although commercial exploitation may be able to partially control populations, an integrated removal approach that removes all sizes of common carp has a greater chance of controlling population abundance.
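The diminishing returns of size-selective exploitation described above can be reproduced with a toy age-structured model. The growth and survival parameters below are assumptions chosen inside the ranges reported in the abstract (K between 0.37 and 0.59, annual mortality A between 1 and 7%), and the recruitment value is invented; this is an illustration of the mechanism, not the authors' simulation.

```python
# Illustrative age-structured sketch of size-selective exploitation:
# constant recruitment, low natural mortality, von Bertalanffy growth,
# and commercial harvest applied only to fish longer than 575 mm.
# Parameter values are assumptions for demonstration, not the paper's.
import math

L_INF, K, T0 = 800.0, 0.45, 0.0   # von Bertalanffy parameters (assumed)
NAT_SURV = 0.95                   # annual natural survival (A ~ 5%)
RECRUITS = 1000.0                 # constant annual recruitment (assumed)
MAX_AGE = 15

def length_at_age(age):
    return L_INF * (1.0 - math.exp(-K * (age - T0)))

def equilibrium_abundance(exploitation, min_length=575.0):
    """Total equilibrium abundance under a given annual exploitation rate."""
    n = 0.0
    survivors = RECRUITS
    for age in range(1, MAX_AGE + 1):
        n += survivors
        harvest = exploitation if length_at_age(age) >= min_length else 0.0
        survivors *= NAT_SURV * (1.0 - harvest)
    return n

unfished = equilibrium_abundance(0.0)
for u in (0.2, 0.4, 0.9):
    reduction = 100.0 * (1.0 - equilibrium_abundance(u) / unfished)
    print(f"exploitation {u:.0%}: abundance reduced by {reduction:.0f}%")
```

Because only fish above the 575-mm gear-selectivity threshold are vulnerable, the youngest age classes are never harvested, so abundance cannot be driven to zero no matter how high the exploitation rate; raising the rate beyond moderate levels adds little, which is the pattern the simulations in the abstract report.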

  2. A Logical Approach to the Statement of Cash Flows

    Science.gov (United States)

    Petro, Fred; Gean, Farrell

    2014-01-01

    Of the three financial statements in financial reporting, the Statement of Cash Flows (SCF) is perhaps the most challenging. The most difficult aspect of the SCF is in developing an understanding of how previous transactions are finalized in this document. The purpose of this paper is to logically explain the indirect approach of cash flow whereby…

  3. Knowledge-based biomedical word sense disambiguation: comparison of approaches

    Directory of Open Access Journals (Sweden)

    Aronson Alan R

    2010-11-01

Background: Word sense disambiguation (WSD) algorithms attempt to select the proper sense of ambiguous terms in text. Resources like the UMLS provide a reference thesaurus to be used to annotate the biomedical literature. Statistical learning approaches have produced good results, but the size of the UMLS makes the production of training data covering the whole domain infeasible. Methods: We present research on existing WSD approaches based on knowledge bases, which complement the studies performed on statistical learning. We compare four approaches which rely on the UMLS Metathesaurus as the source of knowledge. The first approach compares the overlap of the context of the ambiguous word with the candidate senses, based on a representation built out of the definitions, synonyms and related terms. The second approach collects training data for each of the candidate senses using queries built from monosemous synonyms and related terms; these queries are used to retrieve MEDLINE citations, and a machine learning approach is then trained on this corpus. The third approach is a graph-based method which exploits the structure of the Metathesaurus network of relations to perform unsupervised WSD, ranking nodes in the graph according to their relative structural importance. The last approach uses the semantic types assigned to the concepts in the Metathesaurus to perform WSD: the context of the ambiguous word and the semantic types of the candidate concepts are mapped to Journal Descriptors, and these mappings are compared to decide among the candidate concepts. Results: The accuracy of the different methods is estimated on the WSD test collection available from the NLM. Conclusions: We have found that the last approach achieves better results compared to the other methods. The graph-based approach, using the structure of the Metathesaurus network to estimate the relevance of the Metathesaurus concepts, does not perform well.
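The first approach above, matching the context of the ambiguous word against a bag of words built from each candidate sense's definitions, synonyms and related terms, is essentially a Lesk-style overlap measure. A minimal sketch follows; the tiny sense profiles and the example context are invented for illustration and are not UMLS content.

```python
# Lesk-style overlap WSD sketch (illustrative, not the study's code):
# each candidate sense is a bag of words from its definition, synonyms
# and related terms; the sense with the largest overlap with the
# context of the ambiguous word wins.

def overlap_wsd(context_words, candidate_profiles):
    """Return the sense whose word profile overlaps the context most."""
    context = set(w.lower() for w in context_words)
    def score(item):
        sense, profile = item
        return len(context & set(w.lower() for w in profile))
    return max(candidate_profiles.items(), key=score)[0]

# Invented mini-profiles for the two senses of "cold".
profiles = {
    "cold_temperature": ["low", "temperature", "chill", "weather", "freezing"],
    "common_cold": ["infection", "virus", "nasal", "cough", "rhinovirus"],
}
context = "patient presented with cough and nasal congestion after viral infection".split()
print(overlap_wsd(context, profiles))  # -> "common_cold"
```

The real systems differ mainly in where the profiles come from (the Metathesaurus) and in normalization, but the overlap-counting core is the same.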

  4. A HACCP-based approach to mastitis control in dairy herds. Part 1: Development.

    Science.gov (United States)

    Beekhuis-Gibbon, Lies; Whyte, Paul; O'Grady, Luke; More, Simon J; Doherty, Michael L

    2011-03-31

Hazard Analysis and Critical Control Points (HACCP) systems are a risk-based preventive approach developed to increase levels of food safety assurance. This is part 1 of a pilot study on the development, implementation and evaluation of a HACCP-based approach for the control of good udder health in dairy cows. The paper describes the use of a novel approach based on a deconstruction of the infectious process in mastitis to identify Critical Control Points (CCPs) and develop a HACCP-based system to prevent and control mastitis in dairy herds. The approach involved the creation of an Infectious Process Flow Diagram, which was then cross-referenced to two production process flow diagrams of the milking process and cow management cycle. The HACCP plan developed may be suitable for customisation and implementation on dairy farms. This is a logical, systematic approach to the development of a mastitis control programme that could be used as a template for the development of control programmes for other infectious diseases in the dairy herd.

  6. Residual coal exploitation and its impact on sustainable development of the coal industry in China

    International Nuclear Information System (INIS)

    Zhang, Yujiang; Feng, Guorui; Zhang, Min; Ren, Hongrui; Bai, Jinwen; Guo, Yuxia; Jiang, Haina; Kang, Lixun

    2016-01-01

Although China owns large coal reserves, it now faces the problem of premature depletion of its coal resources. The coal-based energy mix in China will not change in the short term, and a means of delaying coal resource depletion is therefore urgently required. The residual coal, originally extracted at a lower recovery percentage, is here evaluated as commercially valuable damaged coal, in contrast to past evaluations in which the residual coal was written off as exploitation losses. Coal recovery rates, the calculation method for residual coal reserves and statistics on the mines concerned in China are given. On this basis, a discussion concerning the impacts on the delay of China's coal depletion, the development of coal exploitation and sustainable development, as well as technologies and relevant policies, is presented. It is considered that the exploitation of residual coal can effectively delay China's coal depletion, inhibit the construction of new mines, redress the imbalance between supply and demand of coal in eastern China, improve the mining area environment and guarantee social stability. The Chinese government supports the exploitation technologies of residual coal. Hence, exploiting residual coal is of considerable importance for the sustainable development of the coal industry in China. - Highlights: •Pay attention to residual coal under changing energy-mix environment in China. •Estimate residual coal reserves and investigate its exploitation mines. •Discuss impacts of residual coal exploitation on delay of coal depletion in China. •Discuss impacts on coal mining industry and residual coal exploitation technology. •Give corresponding policy prescriptions.

  7. A modeling approach to establish environmental flow threshold in ungauged semidiurnal tidal river

    Science.gov (United States)

    Akter, A.; Tanim, A. H.

    2018-03-01

Due to a shortage of flow monitoring data in ungauged semidiurnal rivers, 'environmental flow' (EF) determination based on its key component, 'minimum low flow', is always difficult. For EF assessment this study selected a reach immediately after the Halda-Karnafuli confluence, a unique breeding ground for Indian carp fishes in Bangladesh. As part of an ungauged tidal river, EF threshold establishment faces challenges in changing ecological paradigms with the periodic change of tides and hydrologic alterations. This study describes a novel approach through a modeling framework comprising hydrological, hydrodynamic and habitat simulation models. The EF establishment was conceptualized according to the hydrologic process of an ungauged semidiurnal tidal regime in four steps. Initially, a hydrologic model was coupled with a hydrodynamic model to simulate flow, considering the effect of land use changes on streamflow, seepage loss of the channel, friction-dominated tidal decay as well as the lack of long-term flow characteristics. Secondly, to define hydraulic habitat features, a statistical analysis of the derived flow data was performed to identify 'habitat suitability'. Thirdly, to observe the ecological habitat behavior based on the identified hydrologic alteration, hydraulic habitat features were investigated. Finally, based on the combined habitat suitability index, a flow alteration and ecological response relationship was established. The obtained EF then provides a set of low flow indices of the desired regime, and thus the discharge against the maximum Weighted Usable Area (WUA) was defined as the EF threshold for the selected reach. A suitable EF regime condition was obtained within the flow range 25-30.1 m3/s, i.e., around 10-12% of the mean annual runoff of 245 m3/s, and these findings are within researchers' recommendation of minimum flow requirement. Additionally, it was observed that tidal characteristics are the dominant process in a semidiurnal regime. However, during the study period (2010-2015) the
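The final step described above, taking the discharge at which the Weighted Usable Area (WUA) peaks as the EF threshold, can be sketched in a few lines. The discharge/WUA pairs below are invented for illustration; only the selection rule reflects the abstract.

```python
# Selecting the environmental-flow threshold as the discharge with maximum
# Weighted Usable Area (illustrative numbers, not the study's data).

def ef_threshold(discharges, wua):
    """Return the discharge (m^3/s) at which the simulated WUA peaks."""
    best = max(range(len(discharges)), key=lambda i: wua[i])
    return discharges[best]

q = [15.0, 20.0, 25.0, 30.0, 35.0, 40.0]          # candidate discharges, m^3/s
wua = [310.0, 420.0, 560.0, 585.0, 540.0, 470.0]  # simulated usable area, m^2
print(ef_threshold(q, wua))  # -> 30.0
```

In the invented example the peak falls at 30 m^3/s, consistent in spirit with the 25-30.1 m^3/s range the study reports.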

  8. An Equity Centered Management Approach to Exploiting Sport Employee Productivity

    OpenAIRE

    Schneider, Robert C.

    2017-01-01

A primary goal of sport organizations is to exploit employees' abilities to their fullest capacities. Sport managers who successfully maximize employee productivity will greatly increase the chances of achieving the organization's goals and objectives. The full potential of sport employees' abilities can be realized through the application of the equity component grounded in Adams' Equity Theory (Adams, 1963). Centered on the premise that the relationship between sport manager and employer must be o...

  9. Multi-phase flow monitoring with electrical impedance tomography using level set based method

    International Nuclear Information System (INIS)

    Liu, Dong; Khambampati, Anil Kumar; Kim, Sin; Kim, Kyung Youn

    2015-01-01

Highlights: • LSM has been used for shape reconstruction to monitor multi-phase flow using EIT. • Multi-phase level set model for conductivity is represented by two level set functions. • LSM handles topological merging and breaking naturally during evolution process. • To reduce the computational time, a narrowband technique was applied. • Use of narrowband and optimization approach results in efficient and fast method. - Abstract: In this paper, a level set-based reconstruction scheme is applied to multi-phase flow monitoring using electrical impedance tomography (EIT). The proposed scheme involves applying a narrowband level set method to solve the inverse problem of finding the interface between regions having different conductivity values. The multi-phase level set model for the conductivity distribution inside the domain is represented by two level set functions. The key principle of the level set-based method is to implicitly represent the shape of the interface as the zero level set of a higher-dimensional function and then solve a set of partial differential equations. The level set-based scheme handles topological merging and breaking naturally during the evolution process. It also offers several advantages compared to the traditional pixel-based approach. The level set-based method for multi-phase flow is tested with numerical and experimental data. It is found that the level set-based method has better reconstruction performance when compared to the pixel-based method.

  10. Exploitation of the Virtual Worlds in Tourism and Tourism Education

    Directory of Open Access Journals (Sweden)

    Zejda Pavel

    2016-12-01

Academics perceive a great potential of virtual worlds in various areas, including tourism and education. Efforts adapting the virtual worlds in practice are, however, still marginal. There is no clear definition of the virtual world. Therefore the author of this article attempts to provide one. The paper also focuses on the barriers of a wider exploitation of the virtual worlds and discusses the principles that might help to increase their potential in tourism area. One of the principles – gamification – favours a wider adaptation of the virtual worlds in tourism. Applying gamification principles provides visitors with some unique experiences while serving as a powerful marketing tool for institutions. The benefits of implementing tourism education activities based on cooperative principles set in an immersive environment of the virtual worlds are depicted afterwards. Finally, this paper includes successful case studies, which show advantages and drawbacks of some approaches in exploiting the virtual worlds in tourism and tourism education.

  11. Exploiting Phase Diversity for CDMA2000 1X Smart Antenna Base Stations

    Science.gov (United States)

    Kim, Seongdo; Hyeon, Seungheon; Choi, Seungwon

    2004-12-01

A performance analysis of an access channel decoder is presented which exploits a diversity gain due to the independent magnitudes of the received signal energy at each of the antenna elements of a smart-antenna base-station transceiver subsystem (BTS) operating in a CDMA2000 1X signal environment. The objective is to enhance the data retrieval at the cell site during the access period, for which the optimal weight vector of the smart antenna BTS is not available. It is shown that the proposed access channel decoder outperforms the conventional one, which is based on a single antenna channel, in terms of detection probability of the access probe, access channel failure probability, and Walsh-code demodulation performance.

  12. Built-In Data-Flow Integration Testing in Large-Scale Component-Based Systems

    Science.gov (United States)

    Piel, Éric; Gonzalez-Sanchez, Alberto; Gross, Hans-Gerhard

Modern large-scale component-based applications and service ecosystems are built following a number of different component models and architectural styles, such as the data-flow architectural style. In this style, each building block receives data from a previous one in the flow and sends output data to other components. This organisation expresses information flows adequately, and also favours decoupling between the components, leading to easier maintenance and quicker evolution of the system. Integration testing is a major means to ensure the quality of large systems. Their size and complexity, together with the fact that they are developed and maintained by several stakeholders, make Built-In Testing (BIT) an attractive approach to manage their integration testing. However, so far no technique has been proposed that combines BIT and data-flow integration testing. We have introduced the notion of a virtual component in order to realize such a combination. It makes it possible to define the behaviour of several components assembled to process a flow of data, using BIT. Test cases are defined in a way that makes them simple to write and flexible to adapt. We present two implementations of our proposed virtual component integration testing technique, and we extend our previous proposal to detect and handle errors in the definition by the user. The evaluation of the virtual component testing approach suggests that more issues can be detected in systems with data-flows than through other integration testing approaches.
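The virtual-component idea can be sketched with assumed interfaces: a virtual component groups several data-flow components into one unit and carries built-in test cases that feed data through the chain and check the output. The class, the toy components and the test data below are invented for illustration; they are not the authors' framework.

```python
# Minimal sketch of a "virtual component" for built-in data-flow
# integration testing (assumed interfaces, not the authors' framework).

class VirtualComponent:
    def __init__(self, components, test_cases):
        self.components = components   # functions applied in data-flow order
        self.test_cases = test_cases   # (input, expected_output) pairs

    def process(self, data):
        """Push data through the assembled component chain."""
        for component in self.components:
            data = component(data)
        return data

    def built_in_test(self):
        """Run the built-in integration tests; True if all pass."""
        return all(self.process(inp) == out for inp, out in self.test_cases)

# Toy data-flow: parse -> filter -> aggregate
parse = lambda s: [int(x) for x in s.split(",")]
keep_positive = lambda xs: [x for x in xs if x > 0]
total = sum

vc = VirtualComponent([parse, keep_positive, total],
                      test_cases=[("1,-2,3", 4), ("-1,-2", 0)])
print(vc.built_in_test())  # -> True
```

The point of the construct is visible even at this scale: the test cases describe the behaviour of the assembled chain rather than of any single component, so they survive internal refactoring of the flow.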

  13. A Cluster-based Approach Towards Detecting and Modeling Network Dictionary Attacks

    Directory of Open Access Journals (Sweden)

    A. Tajari Siahmarzkooh

    2016-12-01

In this paper, we provide an approach to detect network dictionary attacks using a data set collected as flows, from which a clustered graph is derived. These flows provide an aggregated view of the network traffic in which the packets exchanged in the network are considered, so that more internally connected nodes are clustered together. We show that dictionary attacks can be detected through parameters such as the number and the weight of clusters in time series and their evolution over time. Additionally, a Markov model based on the average weight of clusters is created. Finally, by means of our suggested model, we demonstrate that artificial clusters of the flows are created for normal and malicious traffic. The results of the proposed approach on the CAIDA 2007 data set suggest a high accuracy for the model and, therefore, it provides a proper method for detecting dictionary attacks.

  14. An active, collaborative approach to learning skills in flow cytometry.

    Science.gov (United States)

    Fuller, Kathryn; Linden, Matthew D; Lee-Pullen, Tracey; Fragall, Clayton; Erber, Wendy N; Röhrig, Kimberley J

    2016-06-01

    Advances in science education research have the potential to improve the way students learn to perform scientific interpretations and understand science concepts. We developed active, collaborative activities to teach skills in manipulating flow cytometry data using FlowJo software. Undergraduate students were given compensated clinical flow cytometry listmode output (FCS) files and asked to design a gating strategy to diagnose patients with different hematological malignancies on the basis of their immunophenotype. A separate cohort of research trainees was given uncompensated data files on which they performed their own compensation, calculated the antibody staining index, designed a sequential gating strategy, and quantified rare immune cell subsets. Student engagement, confidence, and perceptions of flow cytometry were assessed using a survey. Competency against the learning outcomes was assessed by asking students to undertake tasks that required understanding of flow cytometry dot plot data and gating sequences. The active, collaborative approach allowed students to achieve learning outcomes not previously possible with traditional teaching formats, for example, having students design their own gating strategy, without forgoing essential outcomes such as the interpretation of dot plots. In undergraduate students, favorable perceptions of flow cytometry as a field and as a potential career choice were correlated with student confidence but not the ability to perform flow cytometry data analysis. We demonstrate that this new pedagogical approach to teaching flow cytometry is beneficial for student understanding and interpretation of complex concepts. It should be considered as a useful new method for incorporating complex data analysis tasks such as flow cytometry into curricula. Copyright © 2016 The American Physiological Society.

  15. Groundwater flow system stability in shield settings a multi-disciplinary approach

    International Nuclear Information System (INIS)

    Jensen, M.R.; Goodwin, B.W.

    2004-01-01

...performed using the code FRAC3DVS (Hydrosphere), are focused on assessing the uncertainty and robustness of predictions for groundwater migration based on fracture network geometry and inter-connectivity, flow system dimensionality, spatially variable and correlated permeability fields, topography, salinity and long-term climate change. Work in this regard is proceeding toward coupling site-specific glacial and hydrogeologic numerical models and the inclusion of geologically reasoned Discrete Fracture Network models derived from geostatistical methods that honor fracture statistics and location. This multi-disciplinary approach is yielding an improved geo-scientific basis to convey a sense of understanding of Shield groundwater flow system evolution and stability as affected by climate change. (author)

  16. CONTEMPORARY APPROACHES OF COMPANY PERFORMANCE ANALYSIS BASED ON RELEVANT FINANCIAL INFORMATION

    Directory of Open Access Journals (Sweden)

    Sziki Klara

    2012-12-01

In this paper we chose to present two components of the financial statements: the profit and loss account and the cash flow statement. These summary documents and the different indicators calculated based on them allow us to formulate assessments of the performance and profitability of various functions and levels of the company's activity. This paper aims to support the hypothesis that the accounting information presented in the profit and loss account and in the cash flow statement is an appropriate source for assessing company performance. The purpose of this research is to answer the question linked to the main hypothesis: is it the profit and loss account or the cash flow statement that better reflects the performance of a business? Based on the literature studied, we took a conceptual, analytical and practical approach to the term performance, reviewing some of its terminological acceptations as well as the main indicators of performance analysis based on the profit and loss account and the cash flow statement: aggregated indicators, also known as intermediary balances of administration, economic rate of return, rate of financial profitability, rate of return through cash flows, operating cash flow rate, and rate of generating operating cash out of gross operating result. At the same time, we compared the profit and loss account and the cash flow statement, outlining the main advantages and disadvantages of these documents. In order to demonstrate the above theoretical assessments, we analyzed these indicators based on information from the financial statements of SC Sinteza SA, a company in Bihor county listed on the Bucharest Stock Exchange.
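Two of the cash-flow-based indicators listed above reduce to simple ratios. The formulas below follow common textbook definitions (operating cash flow over current liabilities, and operating cash flow over the gross operating result), and the figures are invented for illustration; the paper's exact definitions may differ.

```python
# Illustrative cash-flow indicators (common textbook definitions; the
# paper's exact formulas may differ, and the figures are invented).

def operating_cash_flow_rate(operating_cash_flow, current_liabilities):
    """How well operating cash covers short-term obligations."""
    return operating_cash_flow / current_liabilities

def cash_generation_rate(operating_cash_flow, gross_operating_result):
    """Share of the gross operating result actually turned into cash."""
    return operating_cash_flow / gross_operating_result

ocf, liabilities, gross_result = 1200.0, 3000.0, 1500.0  # illustrative figures
print(round(operating_cash_flow_rate(ocf, liabilities), 2))  # -> 0.4
print(round(cash_generation_rate(ocf, gross_result), 2))     # -> 0.8
```

A gap between accrual profit and these cash ratios is precisely what motivates reading the cash flow statement alongside the profit and loss account.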

  17. Multipath ultrasonic gas flow-meter based on multiple reference waves.

    Science.gov (United States)

    Zhou, Hongliang; Ji, Tao; Wang, Ruichen; Ge, Xiaocheng; Tang, Xiaoyu; Tang, Shizhen

    2018-01-01

Several technologies can be used in ultrasonic gas flow-meters, such as transit-time, Doppler and cross-correlation. In applications, the approach based on measuring transit time has demonstrated its advantages and become more popular. Among the techniques which can be applied to determine the time-of-flight (TOF) of ultrasonic waves, including threshold detection, the cross-correlation algorithm and other digital signal processing algorithms, the cross-correlation algorithm has more advantages when the received ultrasonic signal is severely disturbed by noise. However, the reference wave used for the cross-correlation computation has a great influence on the precise measurement of the TOF. In multipath flow-meter applications, selection of the reference wave becomes even more complicated. Based on an analysis of the factors that introduce noise and waveform distortion of ultrasonic waves, an averaging method is proposed in this paper to determine the reference wave. In a multipath ultrasonic gas flow-meter, the analysis of each ultrasound path needs its own reference wave. In the case study, a six-path ultrasonic gas flow-meter has been designed and tested with air flow through the pipeline. The results demonstrate that the flow rate accuracy and the repeatability of the TOF are significantly improved by using the averaging reference wave, compared with using a random reference wave. Copyright © 2017 Elsevier B.V. All rights reserved.
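The core of the method, estimating TOF from the cross-correlation peak against a reference wave formed by averaging several received waveforms, can be sketched on synthetic data. The pulse shape, sampling rate, noise level and delays below are assumptions for demonstration, not the paper's signals.

```python
# Sketch of transit-time estimation by cross-correlation, with the
# reference wave built by averaging several received waveforms.
# Signal parameters are illustrative assumptions, not the paper's.
import numpy as np

fs = 1e6                      # sampling rate, Hz (assumed)
t = np.arange(0, 200e-6, 1 / fs)
# Gaussian-windowed 200 kHz tone burst standing in for the transducer pulse.
pulse = np.sin(2 * np.pi * 200e3 * t) * np.exp(-((t - 20e-6) ** 2) / (2 * (5e-6) ** 2))

def received(delay_samples, noise=0.05, seed=0):
    """One noisy reception of the pulse, delayed by `delay_samples`."""
    rng = np.random.default_rng(seed)
    return np.roll(pulse, delay_samples) + noise * rng.standard_normal(t.size)

# Average several noisy receptions to build a low-noise reference wave.
reference = np.mean([received(50, seed=s) for s in range(8)], axis=0)

def tof_samples(signal, ref):
    """Delay of `signal` relative to `ref` from the cross-correlation peak."""
    corr = np.correlate(signal, ref, mode="full")
    return int(np.argmax(corr)) - (ref.size - 1)

measurement = received(57, seed=99)        # true extra delay: 7 samples
print(tof_samples(measurement, reference)) # estimated delay in samples
```

Averaging suppresses the noise in the reference by roughly the square root of the number of waveforms, which sharpens the correlation peak; this is the intuition behind preferring an averaged reference over a single (random) reception.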

  18. Human-Assisted Machine Information Exploitation: a crowdsourced investigation of information-based problem solving

    Science.gov (United States)

    Kase, Sue E.; Vanni, Michelle; Caylor, Justine; Hoye, Jeff

    2017-05-01

    The Human-Assisted Machine Information Exploitation (HAMIE) investigation utilizes large-scale online data collection for developing models of information-based problem solving (IBPS) behavior in a simulated time-critical operational environment. These types of environments are characteristic of intelligence workflow processes conducted during human-geo-political unrest situations when the ability to make the best decision at the right time ensures strategic overmatch. The project takes a systems approach to Human Information Interaction (HII) by harnessing the expertise of crowds to model the interaction of the information consumer and the information required to solve a problem at different levels of system restrictiveness and decisional guidance. The design variables derived from Decision Support Systems (DSS) research represent the experimental conditions in this online single-player against-the-clock game where the player, acting in the role of an intelligence analyst, is tasked with a Commander's Critical Information Requirement (CCIR) in an information overload scenario. The player performs a sequence of three information processing tasks (annotation, relation identification, and link diagram formation) with the assistance of `HAMIE the robot' who offers varying levels of information understanding dependent on question complexity. We provide preliminary results from a pilot study conducted with Amazon Mechanical Turk (AMT) participants on the Volunteer Science scientific research platform.

  19. Ultrasonic imaging of material flaws exploiting multipath information

    Science.gov (United States)

    Shen, Xizhong; Zhang, Yimin D.; Demirli, Ramazan; Amin, Moeness G.

    2011-05-01

    In this paper, we consider ultrasonic imaging for the visualization of flaws in a material. Ultrasonic imaging is a powerful nondestructive testing (NDT) tool which assesses material conditions via the detection, localization, and classification of flaws inside a structure. Multipath exploitations provide extended virtual array apertures and, in turn, enhance imaging capability beyond the limitation of traditional multisensor approaches. We utilize reflections of ultrasonic signals which occur when encountering different media and interior discontinuities. The waveforms observed at the physical as well as virtual sensors yield additional measurements corresponding to different aspect angles. Exploitation of multipath information addresses unique issues observed in ultrasonic imaging. (1) Utilization of physical and virtual sensors significantly extends the array aperture for image enhancement. (2) Multipath signals extend the angle of view of the narrow beamwidth of the ultrasound transducers, allowing improved visibility and array design flexibility. (3) Ultrasonic signals experience difficulty in penetrating a flaw, thus the aspect angle of the observation is limited unless access to other sides is available. The significant extension of the aperture makes it possible to yield flaw observation from multiple aspect angles. We show that data fusion of physical and virtual sensor data significantly improves the detection and localization performance. The effectiveness of the proposed multipath exploitation approach is demonstrated through experimental studies.

  20. The pdf approach to turbulent polydispersed two-phase flows

    Science.gov (United States)

    Minier, Jean-Pierre; Peirano, Eric

    2001-10-01

    The purpose of this paper is to develop a probabilistic approach to turbulent polydispersed two-phase flows. The two-phase flows considered are composed of a continuous phase, which is a turbulent fluid, and a dispersed phase, which represents an ensemble of discrete particles (solid particles, droplets or bubbles). Gathering the difficulties of turbulent flows and of particle motion, the challenge is to work out a general modelling approach that meets three requirements: to treat accurately the physically relevant phenomena, to provide enough information to address issues of complex physics (combustion, polydispersed particle flows, …) and to remain tractable for general non-homogeneous flows. The present probabilistic approach models the statistical dynamics of the system and consists in simulating the joint probability density function (pdf) of a number of fluid and discrete particle properties. A new point is that both the fluid and the particles are included in the pdf description. The derivation of the joint pdf model for the fluid and for the discrete particles is worked out in several steps. The mathematical properties of stochastic processes are first recalled. The various hierarchies of pdf descriptions are detailed and the physical principles that are used in the construction of the models are explained. The Lagrangian one-particle probabilistic description is developed first for the fluid alone, then for the discrete particles and finally for the joint fluid and particle turbulent systems. In the case of the probabilistic description for the fluid alone or for the discrete particles alone, numerical computations are presented and discussed to illustrate how the method works in practice and the kind of information that can be extracted from it. Comments on the current modelling state and propositions for future investigations which try to link the present work with other ideas in physics are made at the end of the paper.
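The Lagrangian one-particle pdf description mentioned above is, in practice, realized by stochastic differential equations. A minimal sketch, assuming the simplest possible closure (a 1-D Langevin/Ornstein-Uhlenbeck model with constant timescale, integrated by Euler-Maruyama); the parameter values are illustrative:

```python
import numpy as np

def simulate_langevin(n_steps=40000, dt=1e-3, T_L=0.1, sigma=1.0, seed=0):
    """Euler-Maruyama integration of the 1-D Langevin model
        dU = -(U / T_L) dt + sqrt(2 sigma^2 / T_L) dW,
    arguably the simplest one-particle pdf closure for a fluid velocity
    component: its stationary pdf is Gaussian with variance sigma^2."""
    rng = np.random.default_rng(seed)
    u = np.empty(n_steps)
    u[0] = 0.0
    noise_amp = np.sqrt(2.0 * sigma**2 * dt / T_L)
    for n in range(n_steps - 1):
        u[n + 1] = u[n] - (u[n] / T_L) * dt + noise_amp * rng.standard_normal()
    return u
```

Statistics extracted from an ensemble of such trajectories (here: mean and variance of the velocity) are the kind of information the pdf method provides for non-homogeneous flows.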

  1. Opportunistic splitting for scheduling using a score-based approach

    KAUST Repository

    Rashid, Faraan

    2012-06-01

    We consider the problem of scheduling a user in a multi-user wireless environment in a distributed manner. The opportunistic splitting algorithm is applied to find the best group of users without reporting the channel state information to the centralized scheduler. The users find the best among themselves while requiring just a ternary feedback from the common receiver at the end of each mini-slot. The original splitting algorithm is modified to handle users with asymmetric channel conditions. We use a score-based approach with the splitting algorithm to introduce time and throughput fairness while exploiting the multi-user diversity of the network. Analytical and simulation results are given to show that the modified score-based splitting algorithm works well as a fair scheduling scheme with good spectral efficiency and reduced feedback. © 2012 IEEE.
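The score-based metric can be illustrated independently of the splitting mechanism. The sketch below shows only the scoring step (score = empirical rank of the current rate within the user's own rate history), in a centralized form; the distributed splitting protocol with ternary feedback is not reproduced here:

```python
import numpy as np

def score_based_pick(rate_history, current_rates):
    """Pick a user by score = fraction of its own past rates that its
    current rate beats (empirical CDF value). Scoring normalizes away
    mean channel asymmetry, so users with weak average channels still
    win slots when the channel is good *for them*: time fairness while
    still exploiting multi-user diversity."""
    scores = [np.mean(np.asarray(hist) <= r)
              for hist, r in zip(rate_history, current_rates)]
    return int(np.argmax(scores))
```

Note how a user whose absolute rate is lower can still be scheduled if that rate is high relative to its own history.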

  2. Optimizing Search and Ranking in Folksonomy Systems by Exploiting Context Information

    Science.gov (United States)

    Abel, Fabian; Henze, Nicola; Krause, Daniel

    Tagging systems enable users to annotate resources with freely chosen keywords. The evolving bunch of tag assignments is called folksonomy and there exist already some approaches that exploit folksonomies to improve resource retrieval. In this paper, we analyze and compare graph-based ranking algorithms: FolkRank and SocialPageRank. We enhance these algorithms by exploiting the context of tags, and evaluate the results on the GroupMe! dataset. In GroupMe!, users can organize and maintain arbitrary Web resources in self-defined groups. When users annotate resources in GroupMe!, this can be interpreted in context of a certain group. The grouping activity itself is easy for users to perform. However, it delivers valuable semantic information about resources and their context. We present GRank that uses the context information to improve and optimize the detection of relevant search results, and compare different strategies for ranking result lists in folksonomy systems.

  3. Sustainable exploitation and management of aquatic resources

    DEFF Research Database (Denmark)

    Neuenfeldt, Stefan; Köster, Fritz

    2014-01-01

    DTU Aqua conducts research, provides advice, educates at university level and contributes to innovation in sustainable exploitation and management of aquatic resources. The vision of DTU Aqua is to enable ecologically and economically sustainable exploitation of aquatic resources applying an integrated...... management. Marine ecosystems aims at understanding the mechanisms that govern the interaction between individuals, species and populations in an ecosystem, enabling us to determine the stability and flexibility of the ecosystem. Marine living resources looks at the sustainable utilization of fish and shellfish...... stocks. Ecosystem effects expands from the ecosystem approach to fisheries management to an integrated approach where other human activities are taken into consideration. Fisheries management develops methods, models and tools for predicting and evaluating the effects of management measures and regulations.

  4. Complementarity Constraints on Component-Based Multiphase Flow Problems: Should They Be Implemented Locally or Globally?

    Science.gov (United States)

    Shao, H.; Huang, Y.; Kolditz, O.

    2015-12-01

    Multiphase flow problems are numerically difficult to solve, as they often contain nonlinear phase-transition phenomena. A conventional technique is to introduce complementarity constraints, in which fluid properties such as liquid saturations are confined within a physically reasonable range. Based on such constraints, the mathematical model can be reformulated into a system of nonlinear partial differential equations coupled with variational inequalities, which can then be handled numerically by optimization algorithms. In this work, two different approaches utilizing the complementarity constraints, both based on the persistent primary variables formulation [4], are implemented and investigated. The first approach, proposed by Marchand et al. [1], uses "local complementarity constraints", i.e. it couples the constraints with the local constitutive equations. The second approach [2],[3], namely the "global complementarity constraints", applies the constraints globally with the mass conservation equation. We discuss how these two approaches are applied to solve a non-isothermal compositional multiphase flow problem with phase-change phenomena. Several benchmarks are presented for investigating the overall numerical performance of the different approaches, and their advantages and disadvantages are summarized. References: [1] E. Marchand, T. Mueller and P. Knabner. Fully coupled generalized hybrid-mixed finite element approximation of two-phase two-component flow in porous media. Part I: formulation and properties of the mathematical model. Computational Geosciences 17(2): 431-442 (2013). [2] A. Lauser, C. Hager, R. Helmig, B. Wohlmuth. A new approach for phase transitions in miscible multi-phase flow in porous media. Adv. Water Resour. 34 (2011), 957-966. [3] J. Jaffré and A. Sboui. Henry's Law and Gas Phase Disappearance. Transp. Porous Media 82 (2010), 521-526. [4] A. Bourgeat, M. Jurak and F. Smaï. Two-phase partially miscible flow and transport modeling in
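The complementarity-constraint idea can be illustrated on a toy phase-appearance problem. A minimal sketch, assuming a Fischer-Burmeister NCP function and a scalar Newton iteration with a finite-difference derivative; the toy closure (dissolved mass in excess of a solubility limit forms a gas phase) is an illustration, not one of the benchmark problems of the abstract:

```python
import numpy as np

def fischer_burmeister(a, b):
    """phi(a, b) = 0  <=>  a >= 0, b >= 0 and a * b = 0."""
    return a + b - np.hypot(a, b)

def gas_saturation(c_tot, c_eq, tol=1e-12):
    """Toy phase-appearance problem: total dissolved mass c_tot in excess
    of the solubility limit c_eq forms a gas phase of saturation S, with
    c_dissolved = c_tot - S.  Complementarity conditions:
        S >= 0,  c_eq - c_dissolved >= 0,  S * (c_eq - c_dissolved) = 0.
    Solved by Newton iteration on the Fischer-Burmeister reformulation,
    using a finite-difference derivative."""
    S, h = 0.5, 1e-7
    for _ in range(100):
        f = fischer_burmeister(S, c_eq - (c_tot - S))
        if abs(f) < tol:
            break
        df = (fischer_burmeister(S + h, c_eq - (c_tot - S - h)) - f) / h
        S -= f / df
    return S
```

The same NCP reformulation, applied either to the local constitutive relations or to the global conservation equations, is what distinguishes the two approaches compared in the record.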

  5. Exploiting gas diffusion for non-invasive sampling in flow analysis: determination of ethanol in alcoholic beverages

    Directory of Open Access Journals (Sweden)

    Simone Vicente

    2006-03-01

    A tubular gas-diffusion PTFE membrane is exploited for non-invasive sampling in flow analysis, aiming to develop an improved spectrophotometric determination of ethanol in alcoholic beverages. The probe is immersed into the sample, allowing ethanol to diffuse through the membrane and be collected into the acceptor stream (an acidic dichromate solution), leading to the formation of Cr(III), which is monitored at 600 nm. The analytical curve is linear up to 50% (v/v) ethanol (r > 0.998; n = 8), baseline drift is below 0.005 absorbance over 4-hour operating periods, and the sampling rate is 30 determinations per hour, corresponding to 0.6 mmol K2Cr2O7 per determination. The results are precise (r.s.d. < 2%) and in agreement with those obtained by an official method.

  6. Optical Flow in a Smart Sensor Based on Hybrid Analog-Digital Architecture

    Directory of Open Access Journals (Sweden)

    Pablo Guzmán

    2010-03-01

    The purpose of this study is to develop a motion sensor (delivering optical flow estimations) using a platform that includes the sensor itself, focal plane processing resources, and co-processing resources on a general purpose embedded processor, all implemented on a single device as a SoC (System-on-a-Chip). Optical flow is the 2-D projection onto the camera plane of the 3-D motion information present in the world scene. This motion representation is widely known and applied in the scientific community to solve a wide variety of problems. Most applications based on motion estimation must work in real-time; hence, this restriction must be taken into account. In this paper, we show an efficient approach to estimate the motion velocity vectors with an architecture based on a focal plane processor combined on-chip with a 32-bit NIOS II processor. Our approach relies on the simplification of the original optical flow model and its efficient implementation in a platform that combines an analog (focal-plane) and digital (NIOS II) processor. The system is fully functional and is organized in different stages, where the early processing (focal plane) stage mainly pre-processes the input image stream to reduce the computational cost of the post-processing (NIOS II) stage. We present the employed co-design techniques and analyze this novel architecture. We evaluate the system’s performance and accuracy with respect to the different approaches described in the literature. We also discuss the advantages of the proposed approach as well as the degree of efficiency which can be obtained from the focal plane processing capabilities of the system. The final outcome is a low cost smart sensor for optical flow computation with real-time performance and reduced power consumption that can be used for very diverse application domains.

  7. Optical Flow in a Smart Sensor Based on Hybrid Analog-Digital Architecture

    Science.gov (United States)

    Guzmán, Pablo; Díaz, Javier; Agís, Rodrigo; Ros, Eduardo

    2010-01-01

    The purpose of this study is to develop a motion sensor (delivering optical flow estimations) using a platform that includes the sensor itself, focal plane processing resources, and co-processing resources on a general purpose embedded processor, all implemented on a single device as a SoC (System-on-a-Chip). Optical flow is the 2-D projection onto the camera plane of the 3-D motion information present in the world scene. This motion representation is widely known and applied in the scientific community to solve a wide variety of problems. Most applications based on motion estimation must work in real-time; hence, this restriction must be taken into account. In this paper, we show an efficient approach to estimate the motion velocity vectors with an architecture based on a focal plane processor combined on-chip with a 32-bit NIOS II processor. Our approach relies on the simplification of the original optical flow model and its efficient implementation in a platform that combines an analog (focal-plane) and digital (NIOS II) processor. The system is fully functional and is organized in different stages, where the early processing (focal plane) stage mainly pre-processes the input image stream to reduce the computational cost of the post-processing (NIOS II) stage. We present the employed co-design techniques and analyze this novel architecture. We evaluate the system’s performance and accuracy with respect to the different approaches described in the literature. We also discuss the advantages of the proposed approach as well as the degree of efficiency which can be obtained from the focal plane processing capabilities of the system. The final outcome is a low cost smart sensor for optical flow computation with real-time performance and reduced power consumption that can be used for very diverse application domains. PMID:22319283
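The gradient-based optical flow model that such systems simplify can be sketched as a single global least-squares solve of the brightness-constancy constraint. This is a didactic reduction (one velocity for the whole frame), not the SoC implementation described in the records above:

```python
import numpy as np

def global_flow(frame0, frame1):
    """Least-squares solution of the brightness-constancy constraint
        Ix*u + Iy*v + It = 0
    accumulated over all pixels, giving one global velocity (u, v).
    This is the core of gradient-based optical flow; solving it per
    window (Lucas-Kanade style) yields a dense velocity field."""
    Iy, Ix = np.gradient(frame0.astype(float))   # np.gradient: axis 0 (y) first
    It = frame1.astype(float) - frame0.astype(float)
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```

In the SoC, the image-gradient stage would naturally map to the analog focal-plane processor, while the least-squares accumulation runs on the digital core.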

  8. Drag with external and pressure drop with internal flows: a new and unifying look at losses in the flow field based on the second law of thermodynamics

    International Nuclear Information System (INIS)

    Herwig, Heinz; Schmandt, Bastian

    2013-01-01

    Internal and external flows are characterized by friction factors and drag coefficients, respectively. Their definitions are based on pressure drop and drag force and thus are very different in character. From a thermodynamic point of view, dissipation occurs in both cases, and it can uniformly be related to the entropy generation in the flow field. Therefore we suggest accounting for losses in the flow field by friction factors and drag coefficients that are based on the overall entropy generation due to dissipation in the internal and external flow fields. This second law analysis (SLA) has already been applied to internal flows in many studies. Examples of this flow category are given together with new cases of external flows, also treated by the general SLA approach. (paper)
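The unifying definitions suggested above can be written down compactly. A sketch of the assumed forms: all losses are expressed through the dissipation rate phi = T * S_gen, referenced to the kinetic-energy flux for internal flow and to the drag power for external flow:

```python
def loss_coefficient(S_gen, T, m_dot, u_mean):
    """Internal flow: head-loss coefficient from the overall entropy
    generation rate S_gen [W/K] at temperature T [K]. The dissipation
    rate phi = T * S_gen is referenced to the kinetic-energy flux:
        K = phi / (0.5 * m_dot * u_mean**2)."""
    return T * S_gen / (0.5 * m_dot * u_mean**2)

def drag_coefficient(S_gen, T, rho, u_inf, area):
    """External flow: the entire drag power ends up as dissipation,
    D * u_inf = T * S_gen, so
        c_d = D / (0.5*rho*u_inf**2*area) = T*S_gen / (0.5*rho*u_inf**3*area)."""
    return T * S_gen / (0.5 * rho * u_inf**3 * area)
```

Both coefficients are thus computed from one quantity, the entropy generation integrated over the flow field, instead of from a pressure drop in one case and a force in the other.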

  9. Negative Saturation Approach for Non-Isothermal Compositional Two-Phase Flow Simulations

    NARCIS (Netherlands)

    Salimi, H.; Wolf, K.H.; Bruining, J.

    2011-01-01

    This article deals with developing a solution approach, called the non-isothermal negative saturation (NegSat) solution approach. The NegSat solution approach solves efficiently any non-isothermal compositional flow problem that involves phase disappearance, phase appearance, and phase transition.

  10. CFD Modeling of Wall Steam Condensation: Two-Phase Flow Approach versus Homogeneous Flow Approach

    International Nuclear Information System (INIS)

    Mimouni, S.; Mechitoua, N.; Foissac, A.; Hassanaly, M.; Ouraou, M.

    2011-01-01

    The present work is focused on the condensation heat transfer that plays a dominant role in many accident scenarios postulated to occur in the containment of nuclear reactors. The study compares a general multiphase approach implemented in NEPTUNE_CFD with a homogeneous model, of widespread use for engineering studies, implemented in Code_Saturne. The model implemented in NEPTUNE_CFD assumes that liquid droplets form along the wall within nucleation sites. Vapor condensation on droplets makes them grow. Once the droplet diameter reaches a critical value, gravitational forces compensate the surface tension force and the droplets slide over the wall and form a liquid film. This approach takes into account simultaneously the mechanical drift between the droplets and the gas, the heat and mass transfer on droplets in the core of the flow, and the condensation/evaporation phenomena on the walls. In the homogeneous approach, the motion of the liquid film due to gravitational forces is neglected, as is the volume occupied by the liquid. Both condensation models and compressible procedures are validated and compared to experimental data provided by the TOSQAN ISP47 experiment (IRSN Saclay). Computational results compare favorably with experimental data, particularly for the helium and steam volume fractions.

  11. Plasma-based actuators for turbulent boundary layer control in transonic flow

    Science.gov (United States)

    Budovsky, A. D.; Polivanov, P. A.; Vishnyakov, O. I.; Sidorenko, A. A.

    2017-10-01

    The study is devoted to the development of methods for active control of the flow structure typical of aircraft wings in transonic flow with a turbulent boundary layer. The control strategy adopted in the study is based on the interaction of plasma discharges with miniature geometrical obstacles of various shapes. The concepts were studied computationally using 3D RANS and URANS approaches. The computations have shown that energy deposition can significantly change the flow pattern over the obstacles, increasing their influence on the flow in the boundary layer region. Some of the most interesting and promising results were obtained for an actuator combining a vertical wedge with an asymmetrical plasma discharge. The wedge considered is aligned with the local streamlines and protrudes into the flow by 0.4-0.8 of the local boundary layer thickness. The actuator produces negligible distortion of the flow in the absence of energy deposition. Energy deposition along one side of the wedge results in the formation of a longitudinal vortex in the wake of the actuator, providing momentum exchange in the boundary layer. The actuator was manufactured and tested in wind tunnel experiments at Mach number 1.5 using a flat-plate model. The experimental data obtained by PIV confirmed the effectiveness of the actuator.

  12. Material flow-based economic assessment of landfill mining processes.

    Science.gov (United States)

    Kieckhäfer, Karsten; Breitenstein, Anna; Spengler, Thomas S

    2017-02-01

    This paper provides an economic assessment of alternative processes for landfill mining compared to landfill aftercare, with the goal of assisting landfill operators in choosing between the two alternatives. A material flow-based assessment approach is developed and applied to a landfill in Germany. In addition to landfill aftercare, six alternative landfill mining processes are considered. These range from simple approaches, where most of the material is incinerated or landfilled again, to sophisticated technology combinations that allow recovering highly differentiated products such as metals, plastics, glass, recycling sand, and gravel. For each alternative, the net present value of all relevant cash flows associated with plant installation and operation, supply, recycling, and disposal of material flows, recovery of land and landfill airspace, as well as landfill closure and aftercare is computed, with extensive sensitivity analyses. The economic performance of landfill mining processes is found to be significantly influenced by the prices of thermal treatment (waste incineration as well as refuse-derived fuel incineration) and of recovered land or airspace. The results indicate that the simple process alternatives have the highest economic potential, which contradicts the aim of recovering most of the resources. Copyright © 2016 Elsevier Ltd. All rights reserved.
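The net-present-value comparison underlying such an assessment can be sketched generically. The cash-flow figures below are hypothetical placeholders, not values from the paper:

```python
def npv(cash_flows, rate):
    """Net present value of yearly cash flows (year-0 flow first)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical figures (NOT from the paper), in million EUR at a 3% rate:
# aftercare costs 0.2/yr for 30 years; mining costs 15 up front, then
# recovers 1.0/yr in materials, land and airspace for 10 years.
aftercare = npv([-0.2] * 30, 0.03)
mining = npv([-15.0] + [1.0] * 10, 0.03)
```

With these placeholder numbers aftercare comes out less costly; the paper's point is precisely that the ranking hinges on thermal-treatment and land/airspace prices, which the sensitivity analysis varies.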

  13. Exploiting Phase Diversity for CDMA2000 1X Smart Antenna Base Stations

    Directory of Open Access Journals (Sweden)

    Hyeon Seungheon

    2004-01-01

    A performance analysis is presented of an access channel decoder that exploits the diversity gain due to the independent magnitudes of the received signal energy at each of the antenna elements of a smart-antenna base-station transceiver subsystem (BTS) operating in a CDMA2000 1X signal environment. The objective is to enhance data retrieval at the cell site during the access period, for which the optimal weight vector of the smart antenna BTS is not available. It is shown that the proposed access channel decoder outperforms the conventional one, which is based on a single antenna channel, in terms of the detection probability of the access probe, the access channel failure probability, and the Walsh-code demodulation performance.

  14. Dispositional Flow as a Mediator of the Relationships between Attentional Control and Approaches to Studying during Academic Examination Preparation

    Science.gov (United States)

    Cermakova, Lucie; Moneta, Giovanni B.; Spada, Marcantonio M.

    2010-01-01

    This study investigated how attentional control and study-related dispositional flow influence students' approaches to studying when preparing for academic examinations. Based on information-processing theories, it was hypothesised that attentional control would be positively associated with deep and strategic approaches to studying, and…

  15. Modelling of two-phase flow based on separation of the flow according to velocity

    International Nuclear Information System (INIS)

    Narumo, T.

    1997-01-01

    The thesis concentrates on the development of a physical one-dimensional two-fluid model that is based on Separation of the Flow According to Velocity (SFAV). The conventional way to model one-dimensional two-phase flow is to derive conservation equations for mass, momentum and energy over the regions occupied by the phases. In the SFAV approach, the two-phase mixture is divided into two subflows, with as distinct average velocities as possible, and momentum conservation equations are derived over their domains. Mass and energy conservation are treated exactly as in the conventional model, because they are distributed very accurately according to the phases, but momentum fluctuations follow the flow velocity better. Submodels for the non-uniform transverse profile of velocity and density, slip between the phases within each subflow, and turbulence between the subflows have been derived. The model system is hyperbolic in any sensible flow conditions over the whole range of void fraction. Thus, it can be solved with accurate numerical methods utilizing the characteristics. The characteristics agree well with the experimental data used on two-phase flow wave phenomena. Furthermore, the characteristics of the SFAV model are as well in accordance with their physical counterparts as those of the best virtual-mass models, which are typically optimized for special flow regimes like bubbly flow. The SFAV model has proved to describe two-phase flow physically correctly, because both the dynamic and steady-state behaviour of the model has been considered and found to agree well with experimental data. This makes the SFAV model especially suitable for the calculation of fast transients, which take place in versatile forms in e.g. nuclear reactors.

  16. NARX prediction of some rare chaotic flows: Recurrent fuzzy functions approach

    International Nuclear Information System (INIS)

    Goudarzi, Sobhan; Jafari, Sajad; Moradi, Mohammad Hassan; Sprott, J.C.

    2016-01-01

    The nonlinear and dynamic accommodating capability of time domain models makes them a useful representation of chaotic time series for analysis, modeling and prediction. This paper is devoted to the modeling and prediction of chaotic time series with hidden attractors using a nonlinear autoregressive model with exogenous inputs (NARX) based on a novel recurrent fuzzy functions (RFFs) approach. Case studies of recently introduced chaotic systems with hidden attractors plus classical chaotic systems demonstrate that the proposed modeling methodology exhibits better prediction performance from different viewpoints (short term and long term) compared to some other existing methods. - Highlights: • A new method is proposed for prediction of chaotic time series. • This method is based on novel recurrent fuzzy functions (RFFs) approach. • Some rare chaotic flows are used as test systems. • The new method shows proper performance in short-term prediction. • It also shows proper performance in prediction of attractor's topology.

  17. NARX prediction of some rare chaotic flows: Recurrent fuzzy functions approach

    Energy Technology Data Exchange (ETDEWEB)

    Goudarzi, Sobhan [Biomedical Engineering Department, Amirkabir University of Technology, Tehran 15875-4413 (Iran, Islamic Republic of); Jafari, Sajad, E-mail: sajadjafari@aut.ac.ir [Biomedical Engineering Department, Amirkabir University of Technology, Tehran 15875-4413 (Iran, Islamic Republic of); Moradi, Mohammad Hassan [Biomedical Engineering Department, Amirkabir University of Technology, Tehran 15875-4413 (Iran, Islamic Republic of); Sprott, J.C. [Department of Physics, University of Wisconsin–Madison, Madison, WI 53706 (United States)

    2016-02-15

    The nonlinear and dynamic accommodating capability of time domain models makes them a useful representation of chaotic time series for analysis, modeling and prediction. This paper is devoted to the modeling and prediction of chaotic time series with hidden attractors using a nonlinear autoregressive model with exogenous inputs (NARX) based on a novel recurrent fuzzy functions (RFFs) approach. Case studies of recently introduced chaotic systems with hidden attractors plus classical chaotic systems demonstrate that the proposed modeling methodology exhibits better prediction performance from different viewpoints (short term and long term) compared to some other existing methods. - Highlights: • A new method is proposed for prediction of chaotic time series. • This method is based on novel recurrent fuzzy functions (RFFs) approach. • Some rare chaotic flows are used as test systems. • The new method shows proper performance in short-term prediction. • It also shows proper performance in prediction of attractor's topology.
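The NARX idea in the two records above (predicting the next value from lagged outputs) can be illustrated with a deliberately simplified stand-in: polynomial features of lagged states fitted by linear least squares take the place of the paper's recurrent fuzzy functions, applied here to the logistic map:

```python
import numpy as np

def narx_features(x):
    """Feature matrix for a NARX-style one-step predictor: polynomial
    terms of the two most recent outputs, [x_{t-1}, x_{t-2}, x_{t-1}^2,
    x_{t-2}^2]. (Polynomial regression stands in for the recurrent
    fuzzy functions of the paper; it is not the RFF method itself.)"""
    lags = np.stack([x[1:-1], x[:-2]], axis=1)   # x_{t-1}, x_{t-2}
    return np.hstack([lags, lags**2]), x[2:]     # features, targets x_t

# One-step prediction of the chaotic logistic map x_{t+1} = 4 x_t (1 - x_t),
# whose right-hand side lies exactly in the feature space above.
x = np.empty(500)
x[0] = 0.3
for t in range(499):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
F, y = narx_features(x)
coef, *_ = np.linalg.lstsq(F, y, rcond=None)
one_step_error = np.max(np.abs(F @ coef - y))
```

For this map the regression recovers the dynamics essentially exactly; for the hidden-attractor systems of the paper, a flexible nonlinear regressor such as the RFF network is needed in place of fixed polynomial features.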

  18. A parametric level-set approach for topology optimization of flow domains

    DEFF Research Database (Denmark)

    Pingen, Georg; Waidmann, Matthias; Evgrafov, Anton

    2010-01-01

    of the design variables in the traditional approaches is seen as a possible cause for the slow convergence. Non-smooth material distributions are suspected to trigger premature onset of instationary flows which cannot be treated by steady-state flow models. In the present work, we study whether the convergence...... and the versatility of topology optimization methods for fluidic systems can be improved by employing a parametric level-set description. In general, level-set methods allow controlling the smoothness of boundaries, yield a non-local influence of design variables, and decouple the material description from the flow...... field discretization. The parametric level-set method used in this study utilizes a material distribution approach to represent flow boundaries, resulting in a non-trivial mapping between design variables and local material properties. Using a hydrodynamic lattice Boltzmann method, we study...

  19. DC Voltage Droop Control Implementation in the AC/DC Power Flow Algorithm: Combinational Approach

    DEFF Research Database (Denmark)

    Akhter, F.; Macpherson, D.E.; Harrison, G.P.

    2015-01-01

    of operational flexibility, as more than one VSC station controls the DC link voltage of the MTDC system. This model enables the study of the effects of DC droop control on the power flows of the combined AC/DC system, for steady-state studies after VSC station outages or transient conditions, without needing...... to use its complete dynamic model. Further, the proposed approach can be extended to include multiple AC and DC grids for combined AC/DC power flow analysis. The algorithm is implemented by modifying the MATPOWER-based MATACDC program, and the results show that the algorithm works efficiently.
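One steady-state consequence of DC voltage droop that such combined power-flow algorithms capture is the sharing of a lost converter's power among the remaining droop-controlled stations. A lossless sketch (line resistances and AC-side effects neglected; the droop law dP_i = -dU/k_i is the assumed form):

```python
import numpy as np

def droop_share(p_lost, gains):
    """Distribute the power lost with an outaged converter among the
    remaining droop-controlled VSCs. Each station i obeys the droop law
    dP_i = -dU / k_i for a common DC voltage deviation dU, and the
    pickups must sum to the lost power (network losses neglected)."""
    g = 1.0 / np.asarray(gains, dtype=float)  # inverse droop gains
    dU = -p_lost / g.sum()                    # common voltage deviation
    return -dU * g                            # per-converter power pickup
```

Stations with smaller droop gains take a proportionally larger share, which is the behaviour a droop-aware AC/DC power flow must reproduce after a VSC station outage.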

  20. SenSyF Experience on Integration of EO Services in a Generic, Cloud-Based EO Exploitation Platform

    Science.gov (United States)

    Almeida, Nuno; Catarino, Nuno; Gutierrez, Antonio; Grosso, Nuno; Andrade, Joao; Caumont, Herve; Goncalves, Pedro; Villa, Guillermo; Mangin, Antoine; Serra, Romain; Johnsen, Harald; Grydeland, Tom; Emsley, Stephen; Jauch, Eduardo; Moreno, Jose; Ruiz, Antonio

    2016-08-01

    SenSyF is a cloud-based data processing framework for EO-based services. It has been a pioneer in addressing Big Data issues from the Earth Observation point of view, and is a precursor of several of the technologies and methodologies that will be deployed in ESA's Thematic Exploitation Platforms and other related systems. The SenSyF system focuses on developing fully automated data management, together with access to a processing and exploitation framework, including Earth Observation specific tools. SenSyF is both a development and validation platform for data-intensive applications using Earth Observation data. With SenSyF, scientific, institutional or commercial institutions developing EO-based applications and services can take advantage of distributed computational and storage resources, tailored for applications dependent on big Earth Observation data, without resorting to deep infrastructure and technological investments. This paper describes the integration process and the experience gathered from different EO service providers during the project.

  1. IT Confidentiality Risk Assessment for an Architecture-Based Approach

    NARCIS (Netherlands)

    Morali, A.; Zambon, Emmanuele; Etalle, Sandro; Overbeek, Paul

    2008-01-01

    Information systems require awareness of risks and a good understanding of vulnerabilities and their exploitations. In this paper, we propose a novel approach for the systematic assessment and analysis of confidentiality risks caused by disclosure of operational and functional information. The

  2. Regional Balance Model of Financial Flows through Sectoral Approaches System of National Accounts

    Directory of Open Access Journals (Sweden)

    Ekaterina Aleksandrovna Zaharchuk

    2017-03-01

    Full Text Available The main purpose of the study, whose results are reflected in this article, is the theoretical and methodological substantiation of the possibility of building a regional balance model of financial flows consistent with the principles of construction of the System of National Accounts (SNA). The paper summarizes the international experience of building regional accounts in the SNA and reflects the advantages and disadvantages of the existing techniques for constructing a Social Accounting Matrix. The authors have proposed an approach to building the regional balance model of financial flows based on disaggregated tables of the formation, distribution and use of the value added of a territory within the institutional sectors of the SNA (corporations, public administration, households). To resolve the problem of transferring value added from industries to sectors, the authors have offered an approach to accounting for the formation, distribution and use of value added within the institutional sectors of the territories. The methods of calculation are based on the publicly available information base of statistics agencies and federal services. The authors provide a scheme of the interrelations of the indicators of the regional balance model of financial flows. It makes it possible to mutually coordinate the movement of regional resources across the «corporations», «public administration» and «households» sectors, and the cash flows of the region by sector and direction of use. As a result, they form a single account of the formation and distribution of territorial financial resources, which is the regional balance model of financial flows. This matrix shows the distribution of financial resources by income sources and sectors, where the components of the formation (compensation, taxes and gross profit), distribution (transfers and payments) and use (final consumption, accumulation) of value added are

  3. Structural characteristics of cohesive flow deposits, and a sedimentological approach on their flow mechanisms.

    Science.gov (United States)

    Tripsanas, E. K.; Bryant, W. R.; Prior, D. B.

    2003-04-01

    A large number of Jumbo Piston cores (up to 20 m long), acquired from the continental slope and rise of the Northwest Gulf of Mexico (Bryant Canyon area and eastern Sigsbee Escarpment), have recovered various mass-transport deposits. The main cause of slope instabilities over these areas is oversteepening of the slopes due to the seaward mobilization of the underlying allochthonous salt masses. Cohesive flow deposits were the most common recoveries in the sediment cores. Four types of cohesive flow deposits have been recognized: a) fluid debris flow, b) mud flow, c) mud-matrix dominated debris flow, and d) clast-dominated debris flow deposits. The first type is characterized by its relatively small thickness (less than 1 m), a mud matrix with small (less than 0.5 cm) and soft mud-clasts, and a faint layering. The mud-clasts reveal a normal grading and become more abundant towards the base of each layer. This reveals that their deposition resulted from several successive surges/pulses developed in the main flow, rather than from the sudden “freezing” of the whole flow. The main difference between mud flow and mud-matrix dominated debris flow deposits is the presence of small to large mud-clasts in the latter. Both deposits consist of a chaotic mud-matrix and a basal shear laminated zone, where the strongest shearing of the flow was exhibited. Convolute laminations, fault-like surfaces, thrust faults, and microfaults are interpreted as occurring during the “freezing” of the flows and/or by adjustments of the rested deposits. Clast-dominated debris flow deposits consist of three zones: a) an upper plug-zone, characterized by large interlocked clasts, b) a mid-zone of more highly reworked, inversely graded clasts floating in a mud-matrix, and c) a lower shear laminated zone. The structure of the last three cohesive flow deposits indicates that they represent deposition of typical Bingham flows, consisting of an upper plug-zone in which the yield stress is not exceeded and an

  4. Accumulation of Colloidal Particles in Flow Junctions Induced by Fluid Flow and Diffusiophoresis

    Science.gov (United States)

    Shin, Sangwoo; Ault, Jesse T.; Warren, Patrick B.; Stone, Howard A.

    2017-10-01

    The flow of solutions containing solutes and colloidal particles in porous media is widely found in systems including underground aquifers, hydraulic fractures, estuarine or coastal habitats, water filtration systems, etc. In such systems, solute gradients occur when there is a local change in the solute concentration. While the effects of solute gradients have been found to be important for many applications, we observe an unexpected colloidal behavior in porous media driven by the combination of solute gradients and the fluid flow. When two flows with different solute concentrations are in contact near a junction, a sharp solute gradient is formed at the interface, which may allow strong diffusiophoresis of the particles directed against the flow. Consequently, the particles accumulate near the pore entrance, rapidly approaching the packing limit. These colloidal dynamics have important implications for the clogging of a porous medium, where particles that are orders of magnitude smaller than the pore width can accumulate and block the pores within a short period of time. We also show that this effect can be exploited as a useful tool for preconcentrating biomolecules for rapid bioassays.
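
    The balance at the heart of this accumulation mechanism can be sketched in one dimension. The snippet below is a hedged illustration with assumed parameter values and an assumed linear solute profile across the junction (the paper treats the full pore-scale flow): the diffusiophoretic drift is commonly written u_dp = Γ d(ln c)/dx, and particles stagnate where it balances the pore flow.

```python
# Hedged 1-D sketch (illustrative values, not the authors' model):
# a particle in a solute gradient drifts with u_dp = Gamma * d(ln c)/dx.
# Where u_dp opposes and balances the pore flow u_flow, the net particle
# velocity vanishes and particles accumulate.

import numpy as np

Gamma = 1e-10        # diffusiophoretic mobility, m^2/s (assumed, typical order)
u_flow = 5e-6        # pore flow velocity, m/s (assumed)
L = 100e-6           # pore length, m
c_in, c_out = 10.0, 0.1   # solute concentration (mM) at inlet and outlet

x = np.linspace(0.0, L, 1000)
c = c_in + (c_out - c_in) * x / L        # assumed linear solute profile
dlnc_dx = np.gradient(np.log(c), x)      # gradient points upstream (negative)

u_particle = u_flow + Gamma * dlnc_dx    # advection + diffusiophoresis
stagnation = x[np.argmin(np.abs(u_particle))]
print(f"particles stagnate near x = {stagnation * 1e6:.0f} um")
```

    Because d(ln c)/dx steepens as the solute dilutes downstream, the diffusiophoretic drift eventually overcomes the flow, giving a stagnation point inside the pore where particles pile up.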

  5. Accumulation of Colloidal Particles in Flow Junctions Induced by Fluid Flow and Diffusiophoresis

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Sangwoo [Univ. of Hawaii at Manoa, Honolulu, HI (United States); Ault, Jesse T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Warren, Patrick B. [Unilever R& D Port Sunlight, Wirral (United Kingdom); Stone, Howard A. [Princeton Univ., Princeton, NJ (United States)

    2017-11-16

    The flow of solutions containing solutes and colloidal particles in porous media is widely found in systems including underground aquifers, hydraulic fractures, estuarine or coastal habitats, water filtration systems, etc. In such systems, solute gradients occur when there is a local change in the solute concentration. While the effects of solute gradients have been found to be important for many applications, we observe an unexpected colloidal behavior in porous media driven by the combination of solute gradients and the fluid flow. When two flows with different solute concentrations are in contact near a junction, a sharp solute gradient is formed at the interface, which may allow strong diffusiophoresis of the particles directed against the flow. Consequently, the particles accumulate near the pore entrance, rapidly approaching the packing limit. These colloidal dynamics have important implications for the clogging of a porous medium, where particles that are orders of magnitude smaller than the pore width can accumulate and block the pores within a short period of time. As a result, we also show that this effect can be exploited as a useful tool for preconcentrating biomolecules for rapid bioassays.

  6. Modelling an exploited marine fish community with 15 parameters - results from a simple size-based model

    NARCIS (Netherlands)

    Pope, J.G.; Rice, J.C.; Daan, N.; Jennings, S.; Gislason, H.

    2006-01-01

    To measure and predict the response of fish communities to exploitation, it is necessary to understand how the direct and indirect effects of fishing interact. Because fishing and predation are size-selective processes, the potential response can be explored with size-based models. We use a

  7. Internet-Based Approaches to Building Stakeholder Networks for Conservation and Natural Resource Management

    OpenAIRE

    Kreakie, B. J.; Hychka, K. C.; Belaire, J. A.; Minor, E.; Walker, H. A.

    2015-01-01

    Social network analysis (SNA) is based on a conceptual network representation of social interactions and is an invaluable tool for conservation professionals to increase collaboration, improve information flow, and increase efficiency. We present two approaches to constructing internet-based social networks, and use an existing traditional (survey-based) case study to illustrate in a familiar context the deviations in methods and results. Internet-based approaches to SNA offer a means to over...

  8. Chemical approaches toward graphene-based nanomaterials and their applications in energy-related areas.

    Science.gov (United States)

    Luo, Bin; Liu, Shaomin; Zhi, Linjie

    2012-03-12

    A 'gold rush' has been triggered all over the world for exploiting the possible applications of graphene-based nanomaterials. For this purpose, two important problems have to be solved; one is the preparation of graphene-based nanomaterials with well-defined structures, and the other is the controllable fabrication of these materials into functional devices. This review gives a brief overview of the recent research concerning chemical and thermal approaches toward the production of well-defined graphene-based nanomaterials and their applications in energy-related areas, including solar cells, lithium ion secondary batteries, supercapacitors, and catalysis. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Intelligent gas-mixture flow sensor

    NARCIS (Netherlands)

    Lammerink, Theodorus S.J.; Dijkstra, Fred; Houkes, Z.; van Kuijk, J.C.C.; van Kuijk, Joost

    A simple way to realize a gas-mixture flow sensor is presented. The sensor is capable of measuring two parameters from a gas flow. Both the flow rate and the helium content of a helium-nitrogen gas mixture are measured. The sensor exploits two measurement principles in combination with (local)

  10. Simulation-Based Dynamic Passenger Flow Assignment Modelling for a Schedule-Based Transit Network

    Directory of Open Access Journals (Sweden)

    Xiangming Yao

    2017-01-01

    Full Text Available The online operation management and the offline policy evaluation in complex transit networks require an effective dynamic traffic assignment (DTA) method that can capture the temporal-spatial nature of traffic flows. The objective of this work is to propose a simulation-based dynamic passenger assignment framework and models for such applications in the context of schedule-based rail transit systems. In the simulation framework, travellers are regarded as individual agents who are able to obtain complete information on current traffic conditions. A combined route selection model integrating pre-trip route selection and en-route route switching is established for achieving the dynamic network flow equilibrium status. The train agents are operated strictly according to the timetable, and their capacity limitations are considered. A continuous time-driven simulator based on the proposed framework and models is developed, and its performance is illustrated on the large-scale network of the Beijing subway. The results indicate that more than 0.8 million individual passengers and thousands of trains can be simulated simultaneously at a speed ten times faster than real time. This study provides an efficient approach to analyzing the dynamic demand-supply relationship for large schedule-based transit networks.

  11. Systematic approach for synthesis of palm oil-based biorefinery

    Energy Technology Data Exchange (ETDEWEB)

    NG, Rex T. L.; NG, Denny K. S.; LAM, Hon Loong [Dept. of Chemical and Environmental Engineering, Centre of Excellence for Green Technologies, Univ. of Nottingham, Selangor, (Malaysia); TAY, Douglas H. S.; LIM, Joseph H. E. [2GGS Eco Solutions Sdn Bhd, Kuala Lumpur (Malaysia)

    2012-11-01

    Various types of palm oil biomass are generated from the palm oil mill when crude palm oil (CPO) is produced from fresh fruit bunches (FFB). In current practice, palm oil biomass is used as the main source of energy input in the palm oil mill to produce steam and electricity. Moreover, such biomass is regarded as a by-product and can be reclaimed easily. Therefore, there is a continuously increasing interest in biomass generated from the palm oil mill as a source of renewable energy. Although various technologies have been exploited to produce bio-fuels (e.g., briquettes, pellets) as well as heat and power, no systematic approach that can analyse and optimise the synthesised biorefinery has been presented. In this work, a systematic approach for the synthesis and optimisation of a palm oil-based biorefinery, including the palm oil mill and refinery, with maximum economic performance is developed. The optimised network configuration which achieves the maximum economic performance can also be determined. To illustrate the proposed approach, a case study is solved in this work.

  12. Quantification of uncertainties in turbulence modeling: A comparison of physics-based and random matrix theoretic approaches

    International Nuclear Information System (INIS)

    Wang, Jian-Xun; Sun, Rui; Xiao, Heng

    2016-01-01

    Highlights: • Compared physics-based and random matrix methods to quantify RANS model uncertainty. • Demonstrated applications of both methods in channel flow over periodic hills. • Examined the amount of information introduced in the physics-based approach. • Discussed implications to modeling turbulence in both near-wall and separated regions. - Abstract: Numerical models based on Reynolds-Averaged Navier-Stokes (RANS) equations are widely used in engineering turbulence modeling. However, RANS predictions have large model-form uncertainties for many complex flows, e.g., those with non-parallel shear layers or strong mean flow curvature. Quantification of these large uncertainties originating from the modeled Reynolds stresses has attracted attention in the turbulence modeling community. Recently, a physics-based Bayesian framework for quantifying model-form uncertainties has been proposed with successful applications to several flows. Nonetheless, how to specify proper priors without introducing unwarranted, artificial information remains challenging for the current form of the physics-based approach. Another recently proposed method, based on random matrix theory, provides prior distributions with maximum entropy and is an alternative for model-form uncertainty quantification in RANS simulations. This method is more mathematically rigorous and provides the most non-committal prior distributions without introducing artificial constraints. On the other hand, the physics-based approach has the advantage of being more flexible in incorporating available physical insights. In this work, we compare and discuss the advantages and disadvantages of the two approaches to model-form uncertainty quantification. In addition, we utilize the random matrix theoretic approach to assess and possibly improve the specification of the priors used in the physics-based approach. The comparison is conducted through a test case using a canonical flow, the flow past

  13. Numerical methodologies for investigation of moderate-velocity flow using a hybrid computational fluid dynamics - molecular dynamics simulation approach

    International Nuclear Information System (INIS)

    Ko, Soon Heum; Kim, Na Yong; Nikitopoulos, Dimitris E.; Moldovan, Dorel; Jha, Shantenu

    2014-01-01

    Numerical approaches are presented to minimize the statistical errors inherently present due to finite sampling and the presence of thermal fluctuations in the molecular region of a hybrid computational fluid dynamics (CFD) - molecular dynamics (MD) flow solution. Near the fluid-solid interface the hybrid CFD-MD simulation approach provides a more accurate solution, especially in the presence of significant molecular-level phenomena, than the traditional continuum-based simulation techniques. It also involves less computational cost than pure particle-based MD. Despite these advantages, the hybrid CFD-MD methodology has been applied mostly in flow studies at high velocities, mainly because of the higher statistical errors associated with low velocities. As an alternative to the costly increase of the size of the MD region to decrease statistical errors, we investigate a few numerical approaches that reduce the sampling noise of the solution at moderate velocities. These methods are based on sampling of multiple simulation replicas and linear regression of multiple spatial/temporal samples. We discuss the advantages and disadvantages of each technique from the perspective of solution accuracy and computational cost.

  14. Exploiting synergies in European wind and hydrogen sectors: A cost-benefit assessment

    International Nuclear Information System (INIS)

    Shaw, Suzanne; Peteves, Estathios

    2008-01-01

    This article outlines an assessment of the perspectives for exploiting synergies between European wind and hydrogen energy sectors, where wind energy conversion to hydrogen is used as a common strategy for reducing network management costs in high wind energy penetration situations, and for production of renewable hydrogen. The attractiveness of this approach, referred to here as a 'wind-hydrogen strategy', is analysed using a cost-benefit approach to evaluate the final impact at the level of the end-consumer when this strategy is implemented. The analysis is conducted for four scenarios, based on different levels of: wind energy penetration in the electricity network area, hydrogen energy price, and environmental taxation on fuels. The effect of technological learning on the outcome is also analysed for the period up to 2050. The results of the analysis indicate that the relative value of the wind energy in the electricity market compared to the hydrogen market is a deciding factor in the attractiveness of the strategy; here the wind energy penetration in the network is a key consideration. Finally, in order to exploit learning effects from linking European wind and hydrogen sectors, action would need to be taken in the short term. (author)

  15. An entropy-variables-based formulation of residual distribution schemes for non-equilibrium flows

    Science.gov (United States)

    Garicano-Mena, Jesús; Lani, Andrea; Degrez, Gérard

    2018-06-01

    In this paper we present an extension of Residual Distribution techniques for the simulation of compressible flows in non-equilibrium conditions. The latter are modeled by means of a state-of-the-art multi-species and two-temperature model. An entropy-based variable transformation that symmetrizes the projected advective Jacobian for such a thermophysical model is introduced. Moreover, the transformed advection Jacobian matrix presents a block-diagonal structure, with mass-species and electronic-vibrational energy being completely decoupled from the momentum and total energy sub-system. The advantageous structure of the transformed advective Jacobian can be exploited by contour-integration-based Residual Distribution techniques: established schemes that operate on dense matrices can be substituted by the same scheme operating on the momentum-energy subsystem matrix and repeated application of a scalar scheme to the mass-species and electronic-vibrational energy terms. Finally, the performance gain of the symmetrizing-variables formulation is quantified on a selection of representative test cases, ranging from subsonic to hypersonic, in inviscid or viscous conditions.

  16. Features for Exploiting Black-Box Optimization Problem Structure

    DEFF Research Database (Denmark)

    Tierney, Kevin; Malitsky, Yuri; Abell, Tinus

    2013-01-01

    landscape of BBO problems and show how an algorithm portfolio approach can exploit these general, problem-independent features and outperform the utilization of any single minimization search strategy. We test our methodology on data from the GECCO Workshop on BBO Benchmarking 2012, which contains 21...

  17. Discrete Adjoint-Based Design for Unsteady Turbulent Flows On Dynamic Overset Unstructured Grids

    Science.gov (United States)

    Nielsen, Eric J.; Diskin, Boris

    2012-01-01

    A discrete adjoint-based design methodology for unsteady turbulent flows on three-dimensional dynamic overset unstructured grids is formulated, implemented, and verified. The methodology supports both compressible and incompressible flows and is amenable to massively parallel computing environments. The approach provides a general framework for performing highly efficient and discretely consistent sensitivity analysis for problems involving arbitrary combinations of overset unstructured grids which may be static, undergoing rigid or deforming motions, or any combination thereof. General parent-child motions are also accommodated, and the accuracy of the implementation is established using an independent verification based on a complex-variable approach. The methodology is used to demonstrate aerodynamic optimizations of a wind turbine geometry, a biologically-inspired flapping wing, and a complex helicopter configuration subject to trimming constraints. The objective function for each problem is successfully reduced and all specified constraints are satisfied.
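
    The adjoint machinery itself is generic and can be illustrated on a toy problem. Below is a hedged minimal example (a linear model problem, not the paper's flow solver): for a discrete residual R(u, d) = A u - b(d) = 0 and objective J(u), a single adjoint solve yields the sensitivity dJ/dd with respect to all design variables at once, which is what makes adjoint methods attractive for design optimization.

```python
# Hedged minimal discrete-adjoint example (generic linear model problem):
#     dJ/dd = lambda^T dR/dd,   with adjoint solve  A^T lambda = -(dJ/du)^T
# A, b(d), and J below are assumed forms chosen only for illustration.

import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
def b(d):                              # design-dependent right-hand side
    return np.array([d[0], d[0] + 2.0 * d[1]])

d = np.array([1.0, 0.5])
u = np.linalg.solve(A, b(d))           # primal solve: R = A u - b(d) = 0
J = float(u @ u)                       # objective J = u^T u

dJdu = 2.0 * u                         # dJ/du
lam = np.linalg.solve(A.T, -dJdu)      # one adjoint solve
dRdd = -np.array([[1.0, 0.0], [1.0, 2.0]])   # dR/dd = -db/dd
grad = lam @ dRdd                      # dJ/dd for every design variable

# verify against finite differences
eps = 1e-6
fd = []
for i in range(2):
    dp = d.copy(); dp[i] += eps
    up = np.linalg.solve(A, b(dp))
    fd.append((float(up @ up) - J) / eps)
print(np.allclose(grad, fd, atol=1e-4))  # True
```

    The cost of the gradient is one extra linear solve regardless of the number of design variables, in contrast to one finite-difference solve per variable.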

  18. Comparison of different base flow separation methods in a lowland catchment

    Directory of Open Access Journals (Sweden)

    S. Uhlenbrook

    2009-11-01

    Full Text Available Assessment of water resources available in different storages and moving along different pathways in a catchment is important for its optimal use and protection, and also for the prediction of floods and low flows. Moreover, understanding of the runoff generation processes is essential for assessing the impacts of climate and land use changes on the hydrological response of a catchment. Many methods for base flow separation exist, but hardly any focus on the specific behaviour of temperate lowland areas. This paper presents the results of a base flow separation study carried out in a lowland area in the Netherlands. In this study, field observations of precipitation, groundwater and surface water levels and discharges, together with tracer analysis, are used to understand the runoff generation processes in the catchment. Several tracer and non-tracer based base flow separation methods were applied to the discharge time series, and their results are compared.

    The results show that groundwater levels react fast to precipitation events in this lowland area with shallow groundwater tables. Moreover, a good correlation was found between groundwater levels and discharges, suggesting that most of the measured discharge, even during floods, comes from groundwater storage. Using tracer hydrological approaches, it was estimated that approximately 90% of the total discharge is groundwater displaced by event water mainly infiltrating in the northern part of the catchment, and only the remaining 10% is surface runoff. The impact of remote recharge causing displacement of near-channel groundwater during floods could also be supported with hydraulic approximations. The results show further that when base flow separation is meant to identify groundwater contributions to stream flow, process-based methods (e.g. the rating curve method; Kliner and Knezek, 1974) are more reliable than other simple non-tracer based methods. Also, the recursive filtering method
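
    The non-tracer recursive filtering methods compared in studies like this are often variants of the Lyne-Hollick (1979) one-parameter digital filter. A minimal single-pass sketch (the abstract does not give the authors' exact formulation, and the filter is usually run in several passes in practice):

```python
# Hedged sketch of the Lyne-Hollick recursive digital filter. It splits
# streamflow Q into quickflow f and base flow b = Q - f:
#     f_t = a*f_{t-1} + 0.5*(1 + a)*(Q_t - Q_{t-1}),   0 <= b_t <= Q_t
# with filter parameter a (commonly around 0.925 for daily data).

def lyne_hollick(q, a=0.925):
    """Return the base flow series for a discharge series q (single pass)."""
    quick = 0.0
    base = []
    q_prev = q[0]
    for q_t in q:
        quick = a * quick + 0.5 * (1 + a) * (q_t - q_prev)
        quick = min(max(quick, 0.0), q_t)   # enforce 0 <= base flow <= Q
        base.append(q_t - quick)
        q_prev = q_t
    return base

# Synthetic hydrograph: steady base flow with one storm peak.
q = [2, 2, 2, 10, 25, 18, 9, 5, 3, 2.5, 2.2, 2.1]
b = lyne_hollick(q)
print([round(v, 2) for v in b])  # quickflow carries most of the storm peak
```

    The filter parameter a controls how much of each storm rise is attributed to quickflow; tracer data, as used in this paper, provide an independent check on such purely signal-based separations.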

  19. A High Order Accuracy Computational Tool for Unsteady Turbulent Flows and Acoustics, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The main objective of this research effort is to develop a higher order unsteady turbulent flow solver based on the FDV method, and to exploit its attributes of...

  20. Urban base flow with low impact development

    Science.gov (United States)

    Bhaskar, Aditi; Hogan, Dianna M.; Archfield, Stacey A.

    2016-01-01

    A novel form of urbanization, low impact development (LID), aims to engineer systems that replicate natural hydrologic functioning, in part by infiltrating stormwater close to the impervious surfaces that generate it. We sought to statistically evaluate changes in the base flow regime due to urbanization with LID, specifically changes in base flow magnitude, seasonality, and rate of change. We used a case study watershed in Clarksburg, Maryland, in which streamflow was monitored during whole-watershed urbanization from forest and agricultural land to suburban residential development using LID. The 1.11-km2 watershed contains 73 infiltration-focused stormwater facilities, including bioretention facilities, dry wells, and dry swales. We examined annual and monthly flow during and after urbanization (2004–2014) and compared alterations to nearby forested and urban control watersheds. We show that total streamflow and base flow increased in the LID watershed during urbanization as compared with the control watersheds. The LID watershed had more gradual storm recessions after urbanization and attenuated seasonality in base flow. These flow regime changes may result from a reduction in evapotranspiration, due to the overall decrease in vegetative cover with urbanization, together with an increase in point sources of recharge. Precipitation that in a forested landscape would have infiltrated the soil, been stored as soil moisture and eventually transpired may now become recharge and ultimately base flow. The transfer of evapotranspiration to base flow is an unintended consequence for the water balance of LID.

  1. Time-resolved fuel injector flow characterisation based on 3D laser Doppler vibrometry

    OpenAIRE

    Crua, Cyril; Heikal, Morgan R.

    2015-01-01

    In order to enable investigations of the fuel flow inside unmodified injectors, we have developed a new experimental approach to measure time-resolved vibration spectra of diesel nozzles using a three dimensional laser vibrometer. The technique we propose is based on the triangulation of the vibrometer and fuel pressure transducer signals, and enables the quantitative characterisation of quasi-cyclic internal flows without requiring modifications to the injector, the working fluid, or limitin...

  2. Heterogeneity of cerebral blood flow: a fractal approach

    International Nuclear Information System (INIS)

    Kuikka, J.T.; Hartikainen, P.

    2000-01-01

    Aim: We demonstrate the heterogeneity of regional cerebral blood flow using a fractal approach and single-photon emission computed tomography (SPECT). Method: Tc-99m-labelled ethylcysteine dimer was injected intravenously in 10 healthy controls and in 10 patients with dementia of frontal lobe type. The head was imaged with a gamma camera and transaxial, sagittal and coronal slices were reconstructed. Two hundred fifty-six symmetrical regions of interest (ROIs) were drawn onto each hemisphere of functioning brain matter. Fractal analysis was used to examine the spatial heterogeneity of blood flow as a function of the number of ROIs. Results: Relative dispersion (=coefficient of variation of the regional flows) was fractal-like in healthy subjects and could be characterized by a fractal dimension of 1.17±0.05 (mean±SD) for the left hemisphere and 1.15±0.04 for the right hemisphere, respectively. A fractal dimension of 1.0 reflects completely homogeneous blood flow and 1.5 indicates a random blood flow distribution. Patients with dementia of frontal lobe type had a significantly lower fractal dimension of 1.04±0.03 than the healthy controls. (orig.)
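
    The fractal dimension in such relative-dispersion analyses comes from how the coefficient of variation scales as ROIs are aggregated. A hedged sketch of the standard (Bassingthwaighte-style) procedure, with synthetic data rather than the authors' SPECT measurements:

```python
# Hedged sketch of relative-dispersion fractal analysis: aggregating
# regional flows into progressively larger ROIs gives RD(m) ~ m^(1-D),
# so D = 1 - slope of log RD versus log ROI size. D = 1.5 corresponds to
# spatially uncorrelated (random) flow; D -> 1.0 to homogeneous flow.

import numpy as np

def fractal_dimension(flows):
    sizes, rds = [], []
    m, f = 1, np.asarray(flows, dtype=float)
    while len(f) >= 4:
        rds.append(f.std() / f.mean())   # relative dispersion at this scale
        sizes.append(m)
        f = 0.5 * (f[0::2] + f[1::2])    # merge neighbouring ROIs pairwise
        m *= 2
    slope = np.polyfit(np.log(sizes), np.log(rds), 1)[0]
    return 1.0 - slope

rng = np.random.default_rng(0)
flows = rng.normal(100.0, 15.0, 256)     # 256 synthetic, uncorrelated ROI flows
d = fractal_dimension(flows)
print(round(d, 2))                       # close to 1.5 for random flow
```

    Spatially correlated flows decay more slowly under aggregation, pushing the slope toward zero and D toward 1.0, which is why the patients' lower D indicates more homogeneous perfusion.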

  3. Numerical simulation of multi-dimensional two-phase flow based on flux vector splitting

    Energy Technology Data Exchange (ETDEWEB)

    Staedtke, H.; Franchello, G.; Worth, B. [Joint Research Centre - Ispra Establishment (Italy)

    1995-09-01

    This paper describes a new approach to the numerical simulation of transient, multidimensional two-phase flow. The development is based on a fully hyperbolic two-fluid model of two-phase flow using separated conservation equations for the two phases. Features of the new model include the existence of real eigenvalues, and a complete set of independent eigenvectors which can be expressed algebraically in terms of the major dependent flow parameters. This facilitates the application of numerical techniques specifically developed for high speed single-phase gas flows which combine signal propagation along characteristic lines with the conservation property with respect to mass, momentum and energy. Advantages of the new model for the numerical simulation of one- and two- dimensional two-phase flow are discussed.

  4. Modelling of two-phase flow based on separation of the flow according to velocity

    Energy Technology Data Exchange (ETDEWEB)

    Narumo, T. [VTT Energy, Espoo (Finland). Nuclear Energy

    1997-12-31

    The thesis concentrates on the development work of a physical one-dimensional two-fluid model that is based on Separation of the Flow According to Velocity (SFAV). The conventional way to model one-dimensional two-phase flow is to derive conservation equations for mass, momentum and energy over the regions occupied by the phases. In the SFAV approach, the two-phase mixture is divided into two subflows, with as distinct average velocities as possible, and momentum conservation equations are derived over their domains. Mass and energy conservation are treated exactly as in the conventional model because they are distributed very accurately according to the phases, but momentum fluctuations follow the flow velocity better. Submodels for the non-uniform transverse profile of velocity and density, slip between the phases within each subflow, and turbulence between the subflows have been derived. The model system is hyperbolic in any sensible flow conditions over the whole range of void fraction. Thus, it can be solved with accurate numerical methods utilizing the characteristics. The characteristics agree well with the available experimental data on two-phase flow wave phenomena. Furthermore, the characteristics of the SFAV model are as well in accordance with their physical counterparts as those of the best virtual-mass models, which are typically optimized for special flow regimes like bubbly flow. The SFAV model has proved applicable to describing two-phase flow in a physically correct way, because both the dynamic and steady-state behaviour of the model have been considered and found to agree well with experimental data. This makes the SFAV model especially suitable for the calculation of fast transients, which take place in versatile forms e.g. in nuclear reactors. 45 refs. The thesis also includes five previous publications by the author.

  5. Fully Exploiting The Potential Of The Periodic Table Through Pattern Recognition.

    Science.gov (United States)

    Schultz, Emeric

    2005-01-01

    An approach to learning chemical facts that starts with the periodic table and depends primarily on recognizing and completing patterns and following a few simple rules is described. This approach exploits the exceptions that arise and uses them as opportunities for further concept development.

  6. Assisting children born of sexual exploitation and abuse

    Directory of Open Access Journals (Sweden)

    Lauren Rumble

    2007-01-01

    Full Text Available The UN Secretary-General has issued a strategy to support victims of sexual exploitation and abuse by UN staff. It includes a controversial proposal to introduce DNA sampling for all UN staff. Unless this suggestion is adopted, an important opportunity to implement a truly survivor-centred approach may be lost.

  7. On exploiting wavelet bases in statistical region-based segmentation

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Forchhammer, Søren

    2002-01-01

    Statistical region-based segmentation methods such as the Active Appearance Models establish dense correspondences by modelling variation of shape and pixel intensities in low-resolution 2D images. Unfortunately, for high-resolution 2D and 3D images, this approach is rendered infeasible due to ex...... 9-7 wavelet on cardiac MRIs and human faces show that the segmentation accuracy is minimally degraded at compression ratios of 1:10 and 1:20, respectively....

  8. A Stochastic Approach for Blurred Image Restoration and Optical Flow Computation on Field Image Sequence

    Institute of Scientific and Technical Information of China (English)

    高文; 陈熙霖

    1997-01-01

    The blur in target images caused by camera vibration due to robot motion or hand shaking, and by objects moving in the background scene, is difficult to deal with in a computer vision system. In this paper, the authors study the relation model between motion and blur in the case of object motion existing in a video image sequence, and work out a practical computation algorithm for both motion analysis and blurred image restoration. Combining general optical flow and stochastic processes, the paper presents an approach by which the motion velocity can be calculated from blurred images. On the other hand, the blurred image can also be restored using the obtained motion information. To overcome the small-motion limitation of general optical flow computation, a multiresolution optical flow algorithm based on MAP estimation is proposed. For restoring the blurred image, an iteration algorithm and the obtained motion velocity are used. The experiments show that the proposed approach works well for both motion velocity computation and blurred image restoration.

  9. An extended continuous estimation of distribution algorithm for solving the permutation flow-shop scheduling problem

    Science.gov (United States)

    Shao, Zhongshi; Pi, Dechang; Shao, Weishi

    2017-11-01

    This article proposes an extended continuous estimation of distribution algorithm (ECEDA) to solve the permutation flow-shop scheduling problem (PFSP). In ECEDA, to make a continuous estimation of distribution algorithm (EDA) suitable for the PFSP, the largest order value rule is applied to convert continuous vectors to discrete job permutations. A probabilistic model based on a mixed Gaussian and Cauchy distribution is built to maintain the exploration ability of the EDA. Two effective local search methods, i.e. revolver-based variable neighbourhood search and Hénon chaotic-based local search, are designed and incorporated into the EDA to enhance the local exploitation. The parameters of the proposed ECEDA are calibrated by means of a design of experiments approach. Simulation results and comparisons based on some benchmark instances show the efficiency of the proposed algorithm for solving the PFSP.
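    The largest order value (LOV) rule mentioned above is the standard device for applying a continuous EDA to a permutation problem: each continuous component is treated as a priority, and jobs are sequenced by descending priority. A minimal sketch of just this conversion step (the rest of ECEDA, including the mixed Gaussian/Cauchy model and the two local searches, is not reproduced here):

```python
def largest_order_value(x):
    """Largest order value (LOV) rule: convert a continuous vector
    into a job permutation by ranking components, largest first."""
    # sorted() is stable, so ties are broken by the lower index
    return sorted(range(len(x)), key=lambda i: -x[i])

# The largest component (index 2) becomes the first job in the sequence
print(largest_order_value([0.3, 0.9, 1.5, 0.1]))  # -> [2, 1, 0, 3]
```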

  10. Exploiting the Lab-on-Valve Concept for Determination of Trace Levels of Metals in Complex Matrices with Detection by ETAAS and ICPMS

    DEFF Research Database (Denmark)

    Hansen, Elo Harald; Wang, Jianhua

    Termed the third generation of flow injection analysis, the Sequential Injection (SI)-Lab-on-Valve (LOV) concept has proven to entail specific advantages and to allow novel and unique applications, both in terms of its use in the automation and micro-miniaturization of suitable on-line sample...... approach [1,2]. Coupled to detection by ETAAS and ICPMS, and illustrated by recent exploits in the authors’ laboratory, it is shown that this methodology eliminates the problems encountered in conventional on-line column preconcentration systems and at the same time improves the overall operational......

  11. EBITDA/EBIT and cash flow based ICRs: A comparative approach in the agro-food system in Italy

    Directory of Open Access Journals (Sweden)

    Mattia Iotti

    2012-05-01

    Full Text Available The interest coverage ratios (ICRs are used to quantify the ability of firms to pay financial debts; ICRs are then considered by banks such as covenants in the financing term sheet, and are used by researchers and the rating agencies to estimate the probability of default of firms. Typically, ICRs calculation is based on profit margins, such as EBITDA and EBIT; EBITDA and EBIT approximate, but do not directly express, cash flows available to pay financial debts. The article aims to evaluate whether there are significant differences in results using ICRs based on EBITDA or EBIT and ICRs based on different definitions of cash flow (CF. The application is made to a sample of firms characterized by high absorption of capital operating in the Italian agro-food sector. The article highlights that there are statistically significant differences using ICRs EBITDA and EBIT based and ICRs based on different CF definitions.
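    The two families of ratios compared in the article can be sketched as follows; the figures are hypothetical, chosen only to show how an accrual margin (EBITDA) can overstate debt-service capacity relative to a cash-flow definition when working capital absorbs cash:

```python
def icr_ebitda(ebitda, interest_expense):
    """Margin-based interest coverage ratio (EBITDA / interest)."""
    return ebitda / interest_expense

def icr_cash_flow(cash_flow, interest_expense):
    """Cash-flow-based ICR: the numerator is an actual cash flow
    available to service financial debt, not an accrual margin."""
    return cash_flow / interest_expense

# Hypothetical figures for a capital-intensive agro-food firm
ebitda, operating_cash_flow, interest = 120.0, 70.0, 40.0
print(icr_ebitda(ebitda, interest))                  # -> 3.0
print(icr_cash_flow(operating_cash_flow, interest))  # -> 1.75
```

The gap between the two ratios (3.0 vs. 1.75 here) is exactly the kind of statistically significant difference the article tests for across CF definitions.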

  12. A 3D model retrieval approach based on Bayesian networks lightfield descriptor

    Science.gov (United States)

    Xiao, Qinhan; Li, Yanjun

    2009-12-01

    A new 3D model retrieval methodology is proposed by exploiting a novel Bayesian networks lightfield descriptor (BNLD). There are two key novelties in our approach: (1) a BN-based method for building the lightfield descriptor; and (2) a 3D model retrieval scheme based on the proposed BNLD. To overcome the disadvantages of existing 3D model retrieval methods, we explore BNs for building a new lightfield descriptor. First, the 3D model is placed in a lightfield and about 300 binary views are obtained along a sphere; Fourier descriptors and Zernike moments descriptors are then calculated from the binary views, and the shape feature sequence is learned into a BN model by a BN learning algorithm. Second, we propose a new 3D model retrieval method that calculates the Kullback-Leibler Divergence (KLD) between BNLDs. Benefiting from statistical learning, our BNLD is robust to noise compared with existing methods. A comparison between our method and the lightfield descriptor-based approach is conducted to demonstrate the effectiveness of the proposed methodology.
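    The retrieval step ranks models by the Kullback-Leibler Divergence between descriptors. A minimal sketch of KLD over discrete distributions (the histograms below are hypothetical stand-ins for BNLD features, not the paper's actual descriptors):

```python
import math

def kld(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(P || Q) for discrete distributions;
    eps guards against zero probabilities in the logarithm."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Hypothetical normalized feature histograms for two 3D models:
# a smaller divergence means the models are ranked as more similar.
p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(kld(p, q))
```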

  13. Exploiting Thread Parallelism for Ocean Modeling on Cray XC Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sarje, Abhinav [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Jacobsen, Douglas W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Samuel W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ringler, Todd [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-05-01

    The incorporation of increasing core counts in modern processors used to build state-of-the-art supercomputers is driving application development towards exploitation of thread parallelism, in addition to distributed memory parallelism, with the goal of delivering efficient high-performance codes. In this work we describe the exploitation of threading and our experiences with it with respect to a real-world ocean modeling application code, MPAS-Ocean. We present detailed performance analysis and comparisons of various approaches and configurations for threading on the Cray XC series supercomputers.

  14. Handheld Fluorescence Microscopy based Flow Analyzer.

    Science.gov (United States)

    Saxena, Manish; Jayakumar, Nitin; Gorthi, Sai Siva

    2016-03-01

    Fluorescence microscopy has the intrinsic advantages of favourable contrast characteristics and a high degree of specificity. Consequently, it has been a mainstay in modern biological inquiry and clinical diagnostics. Despite its reliable nature, fluorescence-based clinical microscopy and diagnostics is a manual, labour-intensive and time-consuming procedure. The article outlines a cost-effective, high-throughput alternative to conventional fluorescence imaging techniques. With system-level integration of custom-designed microfluidics and optics, we demonstrate a fluorescence-microscopy-based imaging flow analyzer. Using this system we have imaged more than 2900 FITC-labeled fluorescent beads per minute, demonstrating the high-throughput characteristics of our flow analyzer in comparison with conventional fluorescence microscopy. The issue of motion blur at high flow rates limits the achievable throughput in image-based flow analyzers. Here we address the issue by computationally deblurring the images and show that this restores the morphological features otherwise affected by motion blur. By further optimizing the concentration of the sample solution and the flow speeds, along with imaging multiple channels simultaneously, the system is capable of providing a throughput of about 480 beads per second.

  15. METHODOLOGICAL APPROACHES TO THE ANALYSIS OF EFFICIENCY OF CASH FLOW MANAGEMENT IN INVESTMENT ACTIVITY OF THE ENTERPRISES

    Directory of Open Access Journals (Sweden)

    I. Magdych

    2015-06-01

    Full Text Available The article explores methodological approaches to the analysis of cash flows in the investment activity of the enterprise. A system of net cash flow movements is presented, reflecting the impact of cash-management efficiency on the amount and sources of the enterprise's investment cash flows. An analytical model for assessing the effectiveness of the enterprise's cash management is proposed, based on selected modeling principles, a comprehensive analysis of cash flows in investing activities, and their optimization for the purpose of maximizing social and economic benefit. The research performed here allowed the stages of analysis of the enterprise's investing cash flow to be generalized and defined, with appropriate reasoning. Further research on the valuation of the effectiveness of cash-flow management in the investing activity of the enterprise remains necessary.

  16. An efficient statistical-based approach for road traffic congestion monitoring

    KAUST Repository

    Abdelhafid, Zeroual

    2017-12-14

    In this paper, we propose an effective approach for detecting traffic congestion. The detection strategy is based on the combined use of a piecewise switched linear traffic (PWSL) model and an exponentially-weighted moving average (EWMA) chart. The PWSL model describes traffic-flow dynamics; the PWSL residuals are then used as the input of the EWMA chart to detect traffic congestion. Evaluation results using data from a portion of the I210-W highway in California show the efficiency of the PWSL-EWMA approach in detecting traffic congestion.
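    The monitoring idea — run model residuals through an EWMA chart and flag excursions beyond the control limits — can be sketched as below. The smoothing weight, limit width and residual scale are illustrative choices, not the calibrated values from the paper:

```python
def ewma_monitor(residuals, lam=0.2, L=3.0, sigma=1.0):
    """Flag indices where the EWMA of the residuals leaves the
    +/- L * sigma_z control band (sigma_z is the asymptotic
    standard deviation of the EWMA statistic)."""
    sigma_z = sigma * (lam / (2.0 - lam)) ** 0.5
    z, alarms = 0.0, []
    for t, r in enumerate(residuals):
        z = lam * r + (1.0 - lam) * z
        if abs(z) > L * sigma_z:
            alarms.append(t)
    return alarms

# Residuals stay near zero in free flow; a sustained positive shift
# (mimicking congestion onset) eventually trips the chart.
print(ewma_monitor([0.1, -0.2, 0.0, 0.1, 2.5, 2.8, 3.0, 2.7]))  # -> [6, 7]
```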

  17. An efficient statistical-based approach for road traffic congestion monitoring

    KAUST Repository

    Abdelhafid, Zeroual; Harrou, Fouzi; Sun, Ying

    2017-01-01

    In this paper, we propose an effective approach for detecting traffic congestion. The detection strategy is based on the combined use of a piecewise switched linear traffic (PWSL) model and an exponentially-weighted moving average (EWMA) chart. The PWSL model describes traffic-flow dynamics; the PWSL residuals are then used as the input of the EWMA chart to detect traffic congestion. Evaluation results using data from a portion of the I210-W highway in California show the efficiency of the PWSL-EWMA approach in detecting traffic congestion.

  18. Physically consistent data assimilation method based on feedback control for patient-specific blood flow analysis.

    Science.gov (United States)

    Ii, Satoshi; Adib, Mohd Azrul Hisham Mohd; Watanabe, Yoshiyuki; Wada, Shigeo

    2018-01-01

    This paper presents a novel data assimilation method for patient-specific blood flow analysis based on feedback control theory, called the physically consistent feedback control-based data assimilation (PFC-DA) method. In the PFC-DA method, the signal, which is the residual error term of the velocity when comparing the numerical and reference measurement data, is cast as a source term in a Poisson equation for the scalar potential field that induces flow in a closed system. The pressure values at the inlet and outlet boundaries are recursively calculated from this scalar potential field. Hence, the flow field is physically consistent because it is driven by the calculated inlet and outlet pressures, without any artificial body forces. Compared with existing variational approaches, the PFC-DA method does not guarantee the optimal solution, but it requires only one additional Poisson equation for the scalar potential field, a remarkably small additional computational cost at every iteration. Through numerical examples for 2D and 3D exact flow fields, with both noise-free and noisy reference data, as well as a blood flow analysis on a cerebral aneurysm using actual patient data, the robustness and accuracy of this approach are shown. Moreover, the feasibility of a patient-specific practical blood flow analysis is demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.

  19. A Bayesian Hierarchical Modeling Approach to Predicting Flow in Ungauged Basins

    Science.gov (United States)

    Gronewold, A.; Alameddine, I.; Anderson, R. M.

    2009-12-01

    Recent innovative approaches to identifying and applying regression-based relationships between land use patterns (such as increasing impervious surface area and decreasing vegetative cover) and rainfall-runoff model parameters represent novel and promising improvements to predicting flow from ungauged basins. In particular, these approaches allow for predicting flows under uncertain and potentially variable future conditions due to rapid land cover changes, variable climate conditions, and other factors. Despite the broad range of literature on estimating rainfall-runoff model parameters, however, the absence of a robust set of modeling tools for identifying and quantifying uncertainties in (and correlation between) rainfall-runoff model parameters represents a significant gap in current hydrological modeling research. Here, we build upon a series of recent publications promoting novel Bayesian and probabilistic modeling strategies for quantifying rainfall-runoff model parameter estimation uncertainty. Our approach applies alternative measures of rainfall-runoff model parameter joint likelihood (including Nash-Sutcliffe efficiency, among others) to simulate samples from the joint parameter posterior probability density function. We then use these correlated samples as response variables in a Bayesian hierarchical model with land use coverage data as predictor variables in order to develop a robust land use-based tool for forecasting flow in ungauged basins while accounting for, and explicitly acknowledging, parameter estimation uncertainty. We apply this modeling strategy to low-relief coastal watersheds of Eastern North Carolina, an area representative of coastal resource waters throughout the world because of its sensitive embayments and because of the abundant (but currently threatened) natural resources it hosts. 
Consequently, this area is the subject of several ongoing studies and large-scale planning initiatives, including those conducted through the United

  20. Small organic molecule based flow battery

    Science.gov (United States)

    Huskinson, Brian; Marshak, Michael; Aziz, Michael J.; Gordon, Roy G.; Betley, Theodore A.; Aspuru-Guzik, Alan; Er, Suleyman; Suh, Changwon

    2018-05-08

    The invention provides an electrochemical cell based on a new chemistry for a flow battery for large-scale, e.g., grid-scale, electrical energy storage. Electrical energy is stored chemically at an electrochemical electrode by the protonation of small organic molecules called quinones to hydroquinones. The proton is provided by a complementary electrochemical reaction at the other electrode. These reactions are reversed to deliver electrical energy. A flow battery based on this concept can operate as a closed system. The flow battery architecture has scaling advantages over solid-electrode batteries for large-scale energy storage.

  1. XML-based approaches for the integration of heterogeneous bio-molecular data.

    Science.gov (United States)

    Mesiti, Marco; Jiménez-Ruiz, Ernesto; Sanz, Ismael; Berlanga-Llavori, Rafael; Perlasca, Paolo; Valentini, Giorgio; Manset, David

    2009-10-15

    Today's public database infrastructure spans a very large collection of heterogeneous biological data, opening new opportunities for molecular biology, bio-medical and bioinformatics research, but also raising new problems for their integration and computational processing. In this paper we survey the most interesting and novel approaches for the representation, integration and management of different kinds of biological data by exploiting XML and the related recommendations and approaches. Moreover, we present new and interesting cutting-edge approaches for the appropriate management of heterogeneous biological data represented through XML. XML has succeeded in the integration of heterogeneous biomolecular information, and has established itself as the syntactic glue for biological data sources. Nevertheless, a large variety of XML-based data formats have been proposed, making effective integration of bioinformatics data schemes difficult. The adoption of a few semantically rich standard formats is urgent to achieve a seamless integration of the current biological resources.

  2. Is There a Difference in Credit Constraints Between Private and Listed Companies in Brazil? Empirical Evidence by The Cash Flow Sensitivity Approach

    Directory of Open Access Journals (Sweden)

    Alan Nader Ackel Ghani

    2015-04-01

    Full Text Available This article analyzes credit constraints, using the cash flow sensitivity approach, for private and listed companies between 2007 and 2010. Under this approach, the econometric results show that credit constraints are the same for private and listed companies. This paper seeks to contribute to the literature because studies of the credit constraints of private companies based on cash flow sensitivity in Brazil have been rare.

  3. Cultural Work as a Site of Struggle: Freelancers and Exploitation

    Directory of Open Access Journals (Sweden)

    Nicole S. Cohen

    2012-05-01

    Full Text Available This paper argues that Marxist political economy is a useful framework for understanding contemporary conditions of cultural work. Drawing on Karl Marx’s foundational concepts, labour process theory, and a case study of freelance writers, I argue that the debate over autonomy and control in cultural work ignores exploitation in labour-capital relationships, which is a crucial process shaping cultural work. To demonstrate the benefits of this approach, I discuss two methods media firms use to extract surplus value from freelance writers: exploitation of unpaid labour time and exploitation of intellectual property through aggressive copyright regimes. I argue that a Marxist perspective can uncover the dynamics that are transforming cultural industries and workers’ experiences. From this perspective, cultural work is understood as a site of struggle.

  4. The exploitation argument against commercial surrogacy.

    Science.gov (United States)

    Wilkinson, Stephen

    2003-04-01

    This paper discusses the exploitation argument against commercial surrogacy: the claim that commercial surrogacy is morally objectionable because it is exploitative. The following questions are addressed. First, what exactly does the exploitation argument amount to? Second, is commercial surrogacy in fact exploitative? Third, if it were exploitative, would this provide a sufficient reason to prohibit (or otherwise legislatively discourage) it? The focus throughout is on the exploitation of paid surrogates, although it is noted that other parties (e.g. 'commissioning parents') may also be the victims of exploitation. It is argued that there are good reasons for believing that commercial surrogacy is often exploitative. However, even if we accept this, the exploitation argument for prohibiting (or otherwise legislatively discouraging) commercial surrogacy remains quite weak. One reason for this is that prohibition may well 'backfire' and lead to potential surrogates having to do other things that are more exploitative and/or more harmful than paid surrogacy. It is concluded therefore that those who oppose exploitation should (rather than attempting to stop particular practices like commercial surrogacy) concentrate on: (a) improving the conditions under which paid surrogates 'work'; and (b) changing the background conditions (in particular, the unequal distribution of power and wealth) which generate exploitative relationships.

  5. Quantification of ozone uptake at the stand level in a Pinus canariensis forest in Tenerife, Canary Islands: An approach based on sap flow measurements

    International Nuclear Information System (INIS)

    Wieser, Gerhard; Luis, Vanessa C.; Cuevas, Emilio

    2006-01-01

    Ozone uptake was studied in a pine forest in Tenerife, Canary Islands, an ecotone with strong seasonal changes in climate. The ambient ozone concentration showed a pronounced seasonal course, with high concentrations during the dry and warm period and low concentrations during the wet and cold season. Ozone uptake, by contrast, showed no clear seasonal trend, because canopy conductance decreased significantly with soil water availability and vapour pressure deficit. Mean daily ozone uptake averaged 1.9 nmol m⁻² s⁻¹ during the wet and cold season, and 1.5 nmol m⁻² s⁻¹ during the warm and dry period; the corresponding daily mean ambient ozone concentrations were 42 and 51 nl l⁻¹, respectively. Thus we conclude that in Mediterranean-type forest ecosystems the flux-based approach is better suited to risk assessment than an external, concentration-based approach. - Sap flow measurements can be used for estimating ozone uptake at the stand level and for the parameterisation of O₃ uptake models
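    The flux-based quantity underlying the abstract is simply F_O3 = g_c · [O3], with the canopy conductance g_c derived from sap-flow measurements. Using the reported wet-season means (uptake of 1.9 nmol m⁻² s⁻¹ at 42 nl l⁻¹) one can back out the implied mean canopy conductance; the function below is an illustrative sketch, not the study's parameterisation:

```python
def ozone_uptake(canopy_conductance, o3_mole_fraction):
    """Flux-based ozone uptake F_O3 = g_c * [O3].
    g_c in mol m-2 s-1, [O3] as a mole fraction -> F in mol m-2 s-1."""
    return canopy_conductance * o3_mole_fraction

# Implied mean wet-season canopy conductance from the reported figures:
# 1.9e-9 mol m-2 s-1 uptake at a 42e-9 mole fraction of ozone.
g_c = 1.9e-9 / 42e-9
print(round(g_c, 3))  # -> 0.045 mol m-2 s-1
```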

  6. Exploiting neurovascular coupling: a Bayesian sequential Monte Carlo approach applied to simulated EEG fNIRS data

    Science.gov (United States)

    Croce, Pierpaolo; Zappasodi, Filippo; Merla, Arcangelo; Chiarelli, Antonio Maria

    2017-08-01

    Objective. Electrical and hemodynamic brain activity are linked through the neurovascular coupling process and they can be simultaneously measured through integration of electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS). Thanks to the lack of electro-optical interference, the two procedures can be easily combined and, whereas EEG provides electrophysiological information, fNIRS can provide measurements of two hemodynamic variables, such as oxygenated and deoxygenated hemoglobin. A Bayesian sequential Monte Carlo approach (particle filter, PF) was applied to simulated recordings of electrical and neurovascular mediated hemodynamic activity, and the advantages of a unified framework were shown. Approach. Multiple neural activities and hemodynamic responses were simulated in the primary motor cortex of a subject brain. EEG and fNIRS recordings were obtained by means of forward models of volume conduction and light propagation through the head. A state space model of combined EEG and fNIRS data was built and its dynamic evolution was estimated through a Bayesian sequential Monte Carlo approach (PF). Main results. We showed the feasibility of the procedure and the improvements in both electrical and hemodynamic brain activity reconstruction when using the PF on combined EEG and fNIRS measurements. Significance. The investigated procedure allows one to combine the information provided by the two methodologies, and, by taking advantage of a physical model of the coupling between electrical and hemodynamic response, to obtain a better estimate of brain activity evolution. Despite the high computational demand, application of such an approach to in vivo recordings could fully exploit the advantages of this combined brain imaging technology.

  7. Exploitability Assessment with TEASER

    Science.gov (United States)

    2017-05-01

    for architecture-neutral taint analysis on top of LLVM and QEMU. POC (Proof of Concept): demonstration of an exploit on a program. RCE: Remote Code... bug with a Proof of Concept (POC), or input to a program demonstrating the ability to use a bug to exploit the application, to demonstrate the... often leads to either computationally difficult constraint-solving problems or taint explosion. Given the computational difficulty of exploit

  8. Advanced Approach of Multiagent Based Buoy Communication.

    Science.gov (United States)

    Gricius, Gediminas; Drungilas, Darius; Andziulis, Arunas; Dzemydiene, Dale; Voznak, Miroslav; Kurmis, Mindaugas; Jakovlev, Sergej

    2015-01-01

    Usually, a hydrometeorological information system is faced with great data flows, but the data levels are often excessive, depending on the observed region of the water. The paper presents advanced buoy communication technologies based on multiagent interaction and data exchange between several monitoring system nodes. The proposed management of buoy communication is based on a clustering algorithm, which enables the performance of the hydrometeorological information system to be enhanced. The experiment is based on the design and analysis of an inexpensive but reliable Baltic Sea autonomous monitoring network (buoys), which would be able to continuously monitor and collect temperature, waviness, and other required data. The proposed approach of multiagent-based buoy communication enables all the data from the coastal-based station to be monitored with limited transmission speed by setting different tasks for the agent-based buoy system according to the clustering information.

  9. Who’s Who at the Border? A rights-based approach to identifying human trafficking at international borders

    Directory of Open Access Journals (Sweden)

    Marika McAdam

    2013-09-01

    Full Text Available International borders are widely touted as bastions in the fight against trafficking in persons. This article acknowledges the important role border officials play in preventing human trafficking, but calls for expectations to be tempered by deference to the conceptual complexity of cross-border trafficking and the migration processes involved. The fact that many trafficked victims begin their journeys as irregular or smuggled migrants highlights the challenge posed to border officials in identifying trafficked persons among the people they encounter. Indicators of trafficking generally relate to the exploitation phase, leaving border officials with little guidance as to how persons vulnerable to trafficking can be accurately identified before any exploitation has occurred. Ultimately, this paper advocates a pragmatic rights-based approach in designating anti-trafficking functions to border officials. A rights-based approach to border control acknowledges the core work of border officials as being to uphold border integrity, while ensuring that their performance of this role does not jeopardise the rights of those they intercept nor result in missed opportunities for specialists to identify trafficked persons and other vulnerable people among them.

  10. An open, object-based modeling approach for simulating subsurface heterogeneity

    Science.gov (United States)

    Bennett, J.; Ross, M.; Haslauer, C. P.; Cirpka, O. A.

    2017-12-01

    Characterization of subsurface heterogeneity with respect to hydraulic and geochemical properties is critical in hydrogeology as their spatial distribution controls groundwater flow and solute transport. Many approaches of characterizing subsurface heterogeneity do not account for well-established geological concepts about the deposition of the aquifer materials; those that do (i.e. process-based methods) often require forcing parameters that are difficult to derive from site observations. We have developed a new method for simulating subsurface heterogeneity that honors concepts of sequence stratigraphy, resolves fine-scale heterogeneity and anisotropy of distributed parameters, and resembles observed sedimentary deposits. The method implements a multi-scale hierarchical facies modeling framework based on architectural element analysis, with larger features composed of smaller sub-units. The Hydrogeological Virtual Reality simulator (HYVR) simulates distributed parameter models using an object-based approach. Input parameters are derived from observations of stratigraphic morphology in sequence type-sections. Simulation outputs can be used for generic simulations of groundwater flow and solute transport, and for the generation of three-dimensional training images needed in applications of multiple-point geostatistics. The HYVR algorithm is flexible and easy to customize. The algorithm was written in the open-source programming language Python, and is intended to form a code base for hydrogeological researchers, as well as a platform that can be further developed to suit investigators' individual needs. This presentation will encompass the conceptual background and computational methods of the HYVR algorithm, the derivation of input parameters from site characterization, and the results of groundwater flow and solute transport simulations in different depositional settings.

  11. An ethnomethodological approach to examine exploitation in the context of capacity, trust and experience of commercial surrogacy in India

    Science.gov (United States)

    2013-01-01

    The socio-ethical concerns regarding exploitation in commercial surrogacy are premised on asymmetric vulnerability and the commercialization of women’s reproductive capacity to suit individualistic motives. In examining the exploitation argument, this article reviews the social contract theory that describes an individual as an ‘economic man’ with moral and/or political motivations to satisfy individual desires. This study considers the critique by feminists, who argue that patriarchal and medical control prevails in the surrogacy contracts. It also explores the exploitative dynamics amongst actors in the light of Baier’s conceptualization of trust and human relationship, within which both justice and exploitation thrive, and Foucault’s concept of bio-power. Drawing on these concepts, this paper aims to investigate the manifestations of exploitation in commercial surrogacy in the context of trust, power and experiences of actors, using a case study of one clinic in India. The actors’ experiences are evaluated at different stages of the surrogacy process: recruitment, medical procedures, living in the surrogate home, bonding with the child and amongst actors, financial dealings, relinquishment and post-relinquishment. This study applies ethnomethodology to identify phenomena as perceived by the actors in a situation, giving importance to their interpretations of the rules that make collective activity possible. The methods include semi-structured interviews, discussions, participant observation and explanation of the phenomena from the actors’ perspectives. Between August 2009 and April 2010, 13 surrogate mothers (SMs), 4 intended parents (IPs) and 2 medical practitioners (MPs) from one clinic in Western India were interviewed. This study reveals that asymmetries of capacity amongst the MPs, SMs, IPs and surrogate agents (SAs) lead to a network of trust and designation of powers through rules, bringing out the relevance of Baier’s conceptualization

  12. An ethnomethodological approach to examine exploitation in the context of capacity, trust and experience of commercial surrogacy in India.

    Science.gov (United States)

    Saravanan, Sheela

    2013-08-20

    The socio-ethical concerns regarding exploitation in commercial surrogacy are premised on asymmetric vulnerability and the commercialization of women's reproductive capacity to suit individualistic motives. In examining the exploitation argument, this article reviews the social contract theory that describes an individual as an 'economic man' with moral and/or political motivations to satisfy individual desires. This study considers the critique by feminists, who argue that patriarchal and medical control prevails in the surrogacy contracts. It also explores the exploitative dynamics amongst actors in the light of Baier's conceptualization of trust and human relationship, within which both justice and exploitation thrive, and Foucault's concept of bio-power. Drawing on these concepts, this paper aims to investigate the manifestations of exploitation in commercial surrogacy in the context of trust, power and experiences of actors, using a case study of one clinic in India. The actors' experiences are evaluated at different stages of the surrogacy process: recruitment, medical procedures, living in the surrogate home, bonding with the child and amongst actors, financial dealings, relinquishment and post-relinquishment. This study applies ethnomethodology to identify phenomena as perceived by the actors in a situation, giving importance to their interpretations of the rules that make collective activity possible. The methods include semi-structured interviews, discussions, participant observation and explanation of the phenomena from the actors' perspectives. Between August 2009 and April 2010, 13 surrogate mothers (SMs), 4 intended parents (IPs) and 2 medical practitioners (MPs) from one clinic in Western India were interviewed. This study reveals that asymmetries of capacity amongst the MPs, SMs, IPs and surrogate agents (SAs) lead to a network of trust and designation of powers through rules, bringing out the relevance of Baier's conceptualization of asymmetric

  13. Two questions about surrogacy and exploitation.

    Science.gov (United States)

    Wertheimer, Alan

    1992-01-01

    In this article I will consider two related questions about surrogacy and exploitation: (1) Is surrogacy exploitative? (2) If surrogacy is exploitative, what is the moral force of this exploitation? Briefly stated, I shall argue that whether surrogacy is exploitative depends on whether exploitation must be harmful to the exploited party or whether (as I think) there can be mutually advantageous exploitation. It also depends on some facts about surrogacy about which we have little reliable evidence and on our philosophical view on what counts as a harm to the surrogate. Our answer to the second question will turn in part on the account of exploitation we invoke in answering the first question and in part on the way in which we resolve some other questions about the justification of state interference. I shall suggest, however, that if surrogacy is a form of voluntary and mutually advantageous exploitation, then there is a strong presumption that surrogacy contracts should be permitted and even enforceable, although that presumption may be overridden on other grounds.

  14. Multi-GPU unsteady 2D flow simulation coupled with a state-to-state chemical kinetics

    Science.gov (United States)

    Tuttafesta, Michele; Pascazio, Giuseppe; Colonna, Gianpiero

    2016-10-01

    In this work we present a GPU version of a CFD code for high-enthalpy reacting flow, using the state-to-state approach. In supersonic and hypersonic flows, thermal and chemical non-equilibrium is one of the fundamental aspects that must be taken into account for the accurate characterization of the plasma, and state-to-state kinetics is the most accurate approach for this kind of problem. This model consists of writing a continuity equation for the population of each vibrational level of the molecules in the mixture, determining at the same time the species densities and the distribution of the population over internal levels. An explicit scheme is employed here to integrate the governing equations, so as to exploit the GPU structure and obtain an efficient algorithm. The best performances are obtained for reacting flows in the state-to-state approach, reaching speedups of the order of 100, thanks to the use of an operator splitting scheme for the kinetics equations.
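
    The operator splitting idea mentioned for the kinetics equations can be illustrated on a toy problem. The sketch below is not the paper's state-to-state solver: it applies Lie splitting to a made-up 1-D advection-relaxation equation, advancing transport with an explicit upwind step and then sub-stepping the stiff source term separately; all values are illustrative assumptions.

```python
import numpy as np

def advect(u, a, dx, dt):
    """First-order upwind update for u_t + a u_x = 0 (a > 0)."""
    un = u.copy()
    un[1:] -= a * dt / dx * (u[1:] - u[:-1])
    return un

def react(u, k, dt, substeps=20):
    """Integrate the stiff relaxation source u_t = -k u with smaller sub-steps."""
    h = dt / substeps
    for _ in range(substeps):
        u = u * (1.0 - k * h)
    return u

nx, a, k, dx, dt = 100, 1.0, 50.0, 0.01, 0.005   # CFL = a*dt/dx = 0.5
u = np.where(np.arange(nx) < 10, 1.0, 0.0)       # step initial profile
for _ in range(10):
    # Lie splitting: one transport step, then one kinetics step
    u = react(advect(u, a, dx, dt), k, dt)
```

The point of the splitting is that the kinetics step is local to each grid cell, which is what makes the per-cell work embarrassingly parallel on a GPU.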

  15. Online traffic flow model applying dynamic flow-density relation

    International Nuclear Information System (INIS)

    Kim, Y.

    2002-01-01

    This dissertation describes a new approach to online traffic flow modelling based on the hydrodynamic traffic flow model and an online process to adapt the flow-density relation dynamically. The new modelling approach was tested on real traffic situations in various homogeneous motorway sections and a motorway section with ramps, and gave encouraging simulation results. This work is composed of two parts: first, the analysis of traffic flow characteristics and second, the development of a new online traffic flow model applying these characteristics. For homogeneous motorway sections, traffic flow is classified into six different traffic states with different characteristics. Delimitation criteria were developed to separate these states. The hysteresis phenomena were analysed during the transitions between these traffic states. The traffic states and the transitions are represented on a states diagram with the flow axis and the density axis. For motorway sections with ramps, the complicated traffic flow is simplified and classified into three traffic states depending on the propagation of congestion. The traffic states are represented on a phase diagram with the upstream demand axis and the interaction strength axis, which was defined in this research. The states diagram and the phase diagram provide a basis for the development of the dynamic flow-density relation. The first-order hydrodynamic traffic flow model was programmed according to the cell-transmission scheme, extended by the modification of flow-dependent sending/receiving functions, the classification of cells and the determination strategy for the flow-density relation in the cells. The unreasonable results of macroscopic traffic flow models, which may occur in the first and last cells under certain conditions, are alleviated by applying buffer cells between the traffic data and the model. The sending/receiving functions of the cells are determined dynamically based on the classification of the
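
    The sending/receiving (demand/supply) functions at the heart of the cell-transmission scheme can be sketched as follows. This is a minimal Daganzo-style example with a fixed triangular flow-density relation and invented parameter values, whereas the dissertation adapts the relation dynamically online.

```python
V_FREE, W_CONG = 30.0, 6.0     # free-flow speed / congestion wave speed (m/s)
K_JAM, Q_MAX = 0.15, 0.6       # jam density (veh/m), capacity (veh/s)

def sending(k):
    """Demand of the upstream cell at density k (triangular diagram)."""
    return min(V_FREE * k, Q_MAX)

def receiving(k):
    """Supply of the downstream cell at density k."""
    return min(W_CONG * (K_JAM - k), Q_MAX)

def step(dens, dt, dx):
    """Advance cell densities one time step; closed boundaries."""
    flows = [min(sending(a), receiving(b)) for a, b in zip(dens, dens[1:])]
    new = dens[:]
    for i, q in enumerate(flows):
        new[i]     -= q * dt / dx   # vehicles leave cell i
        new[i + 1] += q * dt / dx   # and enter cell i+1
    return new

dens = [0.12, 0.02, 0.02, 0.02]    # a platoon in the first cell (veh/m)
for _ in range(5):
    dens = step(dens, dt=1.0, dx=100.0)
```

Because each interface flow is subtracted from one cell and added to its neighbour, total vehicle count is conserved, which is the basic sanity check for any such scheme.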

  16. Exploit Kit traffic analysis

    OpenAIRE

    Καπίρης, Σταμάτης; Kapiris, Stamatis

    2017-01-01

    Exploit kits have become one of the most widespread and destructive threats that Internet users face on a daily basis. Since the first actor to be categorized as an exploit kit, namely MPack, appeared in 2006, we have seen a new era of exploit kit variants compromising popular websites, infecting hosts and delivering destructive malware, in an exponential evolution to date. With the growing threat landscape, large enterprises to domestic networks have starte...

  17. A Galerkin least squares approach to viscoelastic flow.

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Rekha R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Schunk, Peter Randall [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-10-01

    A Galerkin/least-squares stabilization technique is applied to a discrete Elastic Viscous Stress Splitting formulation for viscoelastic flow. From this, a possible viscoelastic stabilization method is proposed. This method is tested with the flow of an Oldroyd-B fluid past a rigid cylinder, where it is found to produce inaccurate drag coefficients. Furthermore, it fails at relatively low Weissenberg numbers, indicating it is not suited for use as a general algorithm. In addition, a decoupled approach is used as a way of separating the constitutive equation from the rest of the system. A pressure Poisson equation is used when the velocity and pressure are sought to be decoupled, but this fails to produce a solution when inflow/outflow boundaries are considered. However, a coupled pressure-velocity equation with a decoupled constitutive equation is successful for the flow past a rigid cylinder and seems suitable as a general-use algorithm.

  18. MINIMUM QUANTITY LUBRICANT FLOW ANALYSIS IN END MILLING PROCESSES: A COMPUTATIONAL FLUID DYNAMICS APPROACH

    Directory of Open Access Journals (Sweden)

    M. S. Najiha

    2012-12-01

    This paper presents a two-dimensional steady-state incompressible analysis of minimum quantity lubricant flow in milling operations using a computational fluid dynamics (CFD) approach. The analysis of flow and heat transfer in a four-teeth milling cutter operation was undertaken. The domain of the rotating cutter along with the spray nozzle is defined. Operating cutting and boundary conditions are taken from the literature. A steady-state, pressure-based, planar analysis was performed with a viscous, realizable k-ε model. A mixture of oil and air was sprayed on the tool, which is considered to be rotating and at a temperature near the melting temperature of the workpiece. Flow fields are obtained from the study. The vector plot of the flow field shows that the flow is not evenly distributed over the cutter surface and that the lubricant is unevenly distributed in the direction of the cutter rotation. It can be seen that the cutting fluid has not completely penetrated the tool edges. The turbulence created by the cutter rotation in the proximity of the tool throws oil drops out of the cutting zone. The nozzle position in relation to the feed direction is very important in order to obtain the optimum effect of the MQL flow.

  19. Flow-through solid-phase based optical sensor for the multisyringe flow injection trace determination of orthophosphate in waters with chemiluminescence detection

    International Nuclear Information System (INIS)

    Morais, Ines P.A.; Miro, Manuel; Manera, Matias; Estela, Jose Manuel; Cerda, Victor; Souto, M. Renata S.; Rangel, Antonio O.S.S.

    2004-01-01

    In this work, a novel flow-through solid-phase based chemiluminescence (CL) optical sensor is described for the trace determination of orthophosphate in waters exploiting the multisyringe flow injection analysis (MSFIA) concept with multicommutation. The proposed time-based injection flow system relies upon the in-line derivatisation of the analyte with ammonium molybdate in the presence of vanadate, and the transient immobilisation of the resulting heteropolyacid in a N-vinylpyrrolidone/divinylbenzene copolymer packed spiral shape flow-through cell located in front of the window of a photomultiplier tube. The simultaneous injection of well-defined slugs of luminol in alkaline medium and methanol solution towards the packed reactor is afterwards performed by proper switching of the solenoid valves. Then, the light emission from the luminol oxidation by the oxidant species retained onto the sorbent material is readily detected. At the same time, the generated molybdenum-blue compound is eluted by the minute amount of injected methanol, rendering the system prepared for a new measuring cycle. Therefore, the devised sensor enables the integration of the solid-phase CL reaction with elution and detection of the emitted light without the typical drawbacks of the molybdenum-blue based spectrophotometric procedures regarding the excess of molybdate anion, which causes high background signals due to its self-reduction. The noteworthy features of the developed CL-MSFIA system are the feasibility to accommodate reactions with different pH requirements and the ability to determine trace levels of orthophosphate in high silicate content samples (Si/P ratios up to 500). Under the optimised conditions, a dynamic linear range from 5 to 50 μg P l⁻¹ for a 1.8 ml sample, repeatability better than 3.0% and a quantification limit of 4 μg P l⁻¹ were attained. The flowing stream system handles 11 analyses h⁻¹ and has been successfully applied to the determination of trace levels of

  20. Space shuttle booster multi-engine base flow analysis

    Science.gov (United States)

    Tang, H. H.; Gardiner, C. R.; Anderson, W. A.; Navickas, J.

    1972-01-01

    A comprehensive review of currently available techniques pertinent to several prominent aspects of the base thermal problem of the space shuttle booster is given along with a brief review of experimental results. A tractable engineering analysis, capable of predicting the power-on base pressure, base heating, and other base thermal environmental conditions, such as base gas temperature, is presented and used for an analysis of various space shuttle booster configurations. The analysis consists of a rational combination of theoretical treatments of the prominent flow interaction phenomena in the base region. These theories consider jet mixing, plume flow, axisymmetric flow effects, base injection, recirculating flow dynamics, and various modes of heat transfer. Such effects as initial boundary layer expansion at the nozzle lip, reattachment, recompression, choked vent flow, and nonisoenergetic mixing processes are included in the analysis. A unified method was developed and programmed to numerically obtain compatible solutions for the various flow field components in both flight and ground test conditions. Preliminary prediction for a 12-engine space shuttle booster base thermal environment was obtained for a typical trajectory history. Theoretical predictions were also obtained for some clustered-engine experimental conditions. Results indicate good agreement between the data and theoretical predictions.

  1. Explorative and exploitative learning strategies in technology-based alliance networks

    NARCIS (Netherlands)

    Vanhaverbeke, W.P.M.; Beerkens, B.E.; Duysters, G.M.

    2003-01-01

    This paper aims to improve our understanding of how exploitative and explorative learning of firms is enhanced through their social capital. Both types of learning differ considerably from each other and we argue that the distinction between them may be an important contingency factor in explaining

  2. Flow whitelisting in SCADA networks

    DEFF Research Database (Denmark)

    Barbosa, Rafael Ramos Regis; Sadre, Ramin; Pras, Aiko

    2013-01-01

    and the Internet. This paper describes an approach for improving the security of SCADA networks using flow whitelisting. A flow whitelist describes legitimate traffic based on four properties of network packets: client address, server address, server-side port and transport protocol. The proposed approach...
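
    The flow-whitelisting idea described above can be sketched directly from its definition: a whitelist entry is the 4-tuple (client address, server address, server-side port, transport protocol), learned from traffic assumed attack-free, and any later flow not matching an entry is flagged. The addresses and flows below are made-up examples.

```python
def learn(flows):
    """Build the whitelist from a training trace assumed to be attack-free."""
    return {(client, server, port, proto) for client, server, port, proto in flows}

def is_whitelisted(whitelist, flow):
    """True if the 4-tuple was seen during the learning phase."""
    return flow in whitelist

baseline = [
    ("10.0.0.5", "10.0.0.1", 502, "tcp"),    # e.g. HMI -> PLC, Modbus/TCP
    ("10.0.0.5", "10.0.0.2", 20000, "tcp"),  # e.g. HMI -> RTU, DNP3
]
wl = learn(baseline)

observed = [
    ("10.0.0.5", "10.0.0.1", 502, "tcp"),    # known flow: passes
    ("192.0.2.9", "10.0.0.1", 502, "tcp"),   # unknown client: alert
]
alerts = [f for f in observed if not is_whitelisted(wl, f)]
```

The approach is attractive for SCADA traffic precisely because the set of legitimate 4-tuples there is small and stable, so the learned whitelist changes rarely.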

  3. Drifting while stepping in place in old adults: Association of self-motion perception with reference frame reliance and ground optic flow sensitivity.

    Science.gov (United States)

    Agathos, Catherine P; Bernardin, Delphine; Baranton, Konogan; Assaiante, Christine; Isableu, Brice

    2017-04-07

    Optic flow provides visual self-motion information and is shown to modulate gait and provoke postural reactions. We have previously reported an increased reliance on the visual, as opposed to the somatosensory-based egocentric, frame of reference (FoR) for spatial orientation with age. In this study, we evaluated FoR reliance for self-motion perception with respect to the ground surface. We examined how effects of ground optic flow direction on posture may be enhanced by an intermittent podal contact with the ground, and reliance on the visual FoR and aging. Young, middle-aged and old adults stood quietly (QS) or stepped in place (SIP) for 30s under static stimulation, approaching and receding optic flow on the ground and a control condition. We calculated center of pressure (COP) translation and optic flow sensitivity was defined as the ratio of COP translation velocity over absolute optic flow velocity: the visual self-motion quotient (VSQ). COP translation was more influenced by receding flow during QS and by approaching flow during SIP. In addition, old adults drifted forward while SIP without any imposed visual stimulation. Approaching flow limited this natural drift and receding flow enhanced it, as indicated by the VSQ. The VSQ appears to be a motor index of reliance on the visual FoR during SIP and is associated with greater reliance on the visual and reduced reliance on the egocentric FoR. Exploitation of the egocentric FoR for self-motion perception with respect to the ground surface is compromised by age and associated with greater sensitivity to optic flow. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
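
    The visual self-motion quotient (VSQ) defined above is a simple ratio: COP translation velocity over absolute optic flow velocity. A toy computation, with invented numbers for a single trial, might look like this.

```python
def vsq(cop_positions, flow_velocity, dt):
    """Mean COP translation speed divided by |optic flow velocity|."""
    speeds = [abs(b - a) / dt
              for a, b in zip(cop_positions, cop_positions[1:])]
    mean_speed = sum(speeds) / len(speeds)
    return mean_speed / abs(flow_velocity)

# Hypothetical 30 s trial sampled every 0.5 s: the COP drifts forward
# by about 1 mm per sample while ground optic flow moves at 0.25 m/s.
cop = [0.001 * i for i in range(61)]          # positions in metres
quotient = vsq(cop, flow_velocity=0.25, dt=0.5)
```

A larger quotient then indicates that posture follows the imposed flow more strongly, i.e. greater reliance on the visual frame of reference.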

  4. Synthetic Aperture Flow Imaging Using a Dual Beamformer Approach

    DEFF Research Database (Denmark)

    Li, Ye

    Color flow mapping systems have become widely used in clinical applications. They provide an opportunity to visualize the velocity profile over a large region in the vessel, which makes it possible to diagnose, e.g., occlusion of veins, heart valve deficiencies, and other hemodynamic problems.... However, while conventional ultrasound color flow mapping provides useful information in many circumstances, the spatial velocity resolution and frame rate are limited. The entire velocity distribution consists of image lines from different directions, and each image line...... on the current commercial ultrasound scanner. The motivation for this project is to develop a method lowering the amount of calculations while still maintaining beamforming quality sufficient for flow estimation. Synthetic aperture using a dual beamformer approach is investigated using Field II simulations...

  5. A computational approach to modeling cellular-scale blood flow in complex geometry

    Science.gov (United States)

    Balogh, Peter; Bagchi, Prosenjit

    2017-04-01

    We present a computational methodology for modeling cellular-scale blood flow in arbitrary and highly complex geometry. Our approach is based on immersed-boundary methods, which allow modeling flows in arbitrary geometry while resolving the large deformation and dynamics of every blood cell with high fidelity. The present methodology seamlessly integrates different modeling components dealing with stationary rigid boundaries of complex shape, moving rigid bodies, and highly deformable interfaces governed by nonlinear elasticity. Thus it enables us to simulate 'whole' blood suspensions flowing through physiologically realistic microvascular networks that are characterized by multiple bifurcating and merging vessels, as well as geometrically complex lab-on-chip devices. The focus of the present work is on the development of a versatile numerical technique that is able to consider deformable cells and rigid bodies flowing in three-dimensional arbitrarily complex geometries over a diverse range of scenarios. After describing the methodology, a series of validation studies are presented against analytical theory, experimental data, and previous numerical results. Then, the capability of the methodology is demonstrated by simulating flows of deformable blood cells and heterogeneous cell suspensions in both physiologically realistic microvascular networks and geometrically intricate microfluidic devices. It is shown that the methodology can predict several complex microhemodynamic phenomena observed in vascular networks and microfluidic devices. The present methodology is robust and versatile, and has the potential to scale up to very large microvascular networks at organ levels.

  6. The exploitation of living resources in the Dutch Wadden Sea : a historical overview

    NARCIS (Netherlands)

    Wolff, W J

    An overview, based on written sources and personal observations, is presented of exploitation of living resources in and around the Dutch Wadden Sea during the past few centuries. It is concluded that before about 1900 exploitation was almost unrestricted. Exploitation of plants has been documented

  7. The ATLAS Event Service: A New Approach to Event Processing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00070566; De, Kaushik; Guan, Wen; Maeno, Tadashi; Nilsson, Paul; Oleynik, Danila; Panitkin, Sergey; Tsulaia, Vakhtang; van Gemmeren, Peter; Wenaus, Torre

    2015-01-01

    The ATLAS Event Service (ES) implements a new fine-grained approach to HEP event processing, designed to be agile and efficient in exploiting transient, short-lived resources such as HPC hole-filling, spot market commercial clouds, and volunteer computing. Input and output control and data flows, bookkeeping, monitoring, and data storage are all managed at the event level in an implementation capable of supporting ATLAS-scale distributed processing throughputs (about 4M CPU-hours/day). Input data flows utilize remote data repositories with no data locality or pre-staging requirements, minimizing the use of costly storage in favor of strongly leveraging powerful networks. Object stores provide a highly scalable means of remotely storing the quasi-continuous, fine-grained outputs that give ES based applications a very light data footprint on a processing resource, and ensure negligible losses should the resource suddenly vanish. We will describe the motivations for the ES system, its unique features and capabi...

  8. Stochastic Individual-Based Modeling of Bacterial Growth and Division Using Flow Cytometry

    Directory of Open Access Journals (Sweden)

    Míriam R. García

    2018-01-01

    A realistic description of the variability in bacterial growth and division is critical to produce reliable predictions of safety risks along the food chain. Individual-based modeling of bacteria provides the theoretical framework to deal with this variability, but it requires information about the individual behavior of bacteria inside populations. In this work, we overcome this problem by estimating the individual behavior of bacteria from population statistics obtained with flow cytometry. For this objective, a stochastic individual-based modeling framework is defined based on standard assumptions during division and exponential growth. The unknown single-cell parameters required for running the individual-based modeling simulations, such as the cell size growth rate, are estimated from the flow cytometry data. Instead of using the individual-based model directly, we make use of a modified Fokker-Planck equation. This single equation simulates the population statistics as a function of the unknown single-cell parameters. We test the validity of the approach by modeling the growth and division of Pediococcus acidilactici within the exponential phase. Estimations reveal the statistics of cell growth and division using only data from flow cytometry at a given time. From the relationship between the mother and daughter volumes, we also predict that P. acidilactici divides along two successive parallel planes.
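
    The individual-based side of such a framework can be sketched with a toy simulation: each cell grows exponentially at its own randomly drawn rate and divides roughly symmetrically once it doubles its birth size. This is an illustrative sketch under standard assumptions, not the authors' model or their Fokker-Planck reformulation, and every parameter value is invented.

```python
import random

def simulate(n0=100, t_end=3.0, dt=0.01, mu=0.7, sigma=0.1, seed=1):
    """Grow a population of cells; each cell is (size, birth size, rate)."""
    rng = random.Random(seed)
    cells = [[1.0, 1.0, rng.gauss(mu, sigma)] for _ in range(n0)]
    t = 0.0
    while t < t_end:
        nxt = []
        for size, born, rate in cells:
            size *= 1.0 + rate * dt            # exponential size growth
            if size >= 2.0 * born:             # division trigger: doubled size
                # noisy, roughly symmetric split into two daughters
                frac = min(max(rng.gauss(0.5, 0.05), 0.3), 0.7)
                for s in (size * frac, size * (1.0 - frac)):
                    nxt.append([s, s, rng.gauss(mu, sigma)])
            else:
                nxt.append([size, born, rate])
        cells = nxt
        t += dt
    return cells

pop = simulate()
```

The population-level histograms of size produced by many such runs are what a Fokker-Planck-type equation summarizes in one shot, which is why the estimation can work from cytometry statistics alone.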

  9. The Geohazards Exploitation Platform: an advanced cloud-based environment for the Earth Science community

    Science.gov (United States)

    Manunta, Michele; Casu, Francesco; Zinno, Ivana; De Luca, Claudio; Pacini, Fabrizio; Caumont, Hervé; Brito, Fabrice; Blanco, Pablo; Iglesias, Ruben; López, Álex; Briole, Pierre; Musacchio, Massimo; Buongiorno, Fabrizia; Stumpf, Andre; Malet, Jean-Philippe; Brcic, Ramon; Rodriguez Gonzalez, Fernando; Elias, Panagiotis

    2017-04-01

    The idea of creating advanced platforms for the Earth Observation community, where users can find not only data but also state-of-the-art algorithms, processing tools, computing facilities, and instruments for dissemination and sharing, was launched several years ago. The initiatives developed in this context were supported first by the Framework Programmes of the European Commission and the European Space Agency (ESA) and, progressively, by the Copernicus programme. In particular, ESA created and supported the Grid Processing on Demand (G-POD) environment, where users can access advanced processing tools implemented in a GRID environment, satellite data and computing facilities. All these components are located in the same datacentre to make the time needed to move satellite data from the archive negligible. The experience of G-POD gave rise to ESA's idea of an ecosystem of Thematic Exploitation Platforms (TEPs) focused on the integration of Ground Segment capabilities and ICT technologies to maximize the exploitation of EO data from past and future missions. A TEP refers to a computing platform that deals with a set of user scenarios involving scientists, data providers and ICT developers, aggregated around an Earth Science thematic area. Among the others, the Geohazards Exploitation Platform (GEP) aims at providing on-demand and systematic processing services to address the need of the geohazards community for common information layers and to integrate newly developed processors for scientists and other expert users. Within GEP, the community benefits from a cloud-based environment specifically designed for the advanced exploitation of EO data. A partner can bring its own tools and processing chains, but also has access in the same workspace to large satellite datasets and shared data processing tools. GEP is currently in the pre-operations phase under a consortium led by Terradue Srl and six pilot projects concerning

  10. Monitoring of Oil Exploitation Infrastructure by Combining Unsupervised Pixel-Based Classification of Polarimetric SAR and Object-Based Image Analysis

    Directory of Open Access Journals (Sweden)

    Simon Plank

    2014-12-01

    In developing countries, there is a high correlation between dependence on oil exports and violent conflict. Furthermore, even in countries which experienced a peaceful development of their oil industry, land use and environmental issues occur. Therefore, independent monitoring of oil field infrastructure may support problem solving. Earth observation data enable fast monitoring of large areas, which allows comparing the actual amount of land used by the oil exploitation with the companies' contractual obligations. The target feature of this monitoring is the infrastructure of the oil exploitation: oil well pads, rectangular features of bare land covering an area of approximately 50–60 m × 100 m. This article presents an automated feature extraction procedure based on the combination of a pixel-based unsupervised classification of polarimetric synthetic aperture radar (PolSAR) data and an object-based post-classification. The method is developed and tested using dual-polarimetric TerraSAR-X imagery acquired over the Doba basin in southern Chad. The advantages of PolSAR are independence of cloud coverage (vs. optical imagery) and the possibility of detailed land use classification (vs. single-pol SAR). The PolSAR classification uses the polarimetric Wishart probability density function based on the anisotropy/entropy/alpha decomposition. The object-based post-classification refinement, based on properties of the feature targets such as shape and area, increases the user's accuracy of the methodology by an order of magnitude. The final achieved user's and producer's accuracy is 59%–71% in each case (area-based accuracy assessment). Considering only the numbers of correctly/falsely detected oil well pads, the user's and producer's accuracies increase to 74%–89%. In an iterative training procedure the best suited polarimetric speckle filter and processing parameters of the developed feature extraction procedure are

  11. VALUE-BASED APPROACH TO MANAGING CURRENT ASSETS OF CORPORATE CONSTRUCTION COMPANIES

    Directory of Open Access Journals (Sweden)

    Galyna Shapoval

    2017-09-01

    In modern management conditions, the value of an enterprise becomes the main indicator, studied not only by scientists but also by enterprise owners and potential investors. Current assets are among the most important factors affecting the value of an enterprise, so current asset management becomes especially significant in terms of its impact on enterprise value. The purpose of the paper is to develop a system of value-based management of corporate construction companies' current assets. The main tasks are: the study of the impact of current assets on the value of corporate construction companies, the definition of a value-based approach to managing current assets of corporate enterprises, and the development of a value-based management system for corporate construction companies' current assets by elements. General scientific and special research methods were used in writing the work. Value-based management of current assets involves value-based management of the elements of current assets. Value-based inventory management includes the following stages: the assessment of reliability and choice of supplier according to the criterion of cash flow maximization, the classification of stocks in management accounting according to the rhythm of supply, and the establishment of the periodicity of supplies in accordance with the needs of the construction process. Value-based management of accounts receivable includes the following stages: assessment of the efficiency of investing working capital in accounts receivable, assessment of customers' loyalty, definition of credit conditions, and monitoring of receivables by construction and debt instruments. Value-based cash management involves determining the required level of cash to ensure the continuity of the construction process, assessing the effectiveness of cash use according to the criterion of maximizing cash flow, as well as budget

  12. Distributional and Knowledge-Based Approaches for Computing Portuguese Word Similarity

    Directory of Open Access Journals (Sweden)

    Hugo Gonçalo Oliveira

    2018-02-01

    Identifying similar and related words is not only key in natural language understanding but also a suitable task for assessing the quality of computational resources that organise words and meanings of a language, compiled by different means. This paper, which aims to be a reference for those interested in computing word similarity in Portuguese, presents several approaches for this task and is motivated by the recent availability of state-of-the-art distributional models of Portuguese words, which add to several lexical knowledge bases (LKBs) for this language, available for a longer time. These resources were exploited to answer word similarity tests, which also became available for Portuguese recently. We conclude that there are several valid approaches for this task, but not one that outperforms all the others in every single test. Distributional models seem to capture relatedness better, while LKBs are better suited for computing genuine similarity, but, in general, better results are obtained when knowledge from different sources is combined.
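
    The distributional side of the comparison above reduces, at its simplest, to cosine similarity between word vectors. The tiny co-occurrence vectors below are invented purely for illustration; real distributional models are dense embeddings trained on large corpora.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

# made-up co-occurrence counts over four contexts:
# ("water", "bank", "money", "tree")
vectors = {
    "rio":      [9, 4, 0, 2],   # "river"
    "ribeiro":  [8, 3, 0, 3],   # "stream"
    "dinheiro": [0, 5, 9, 0],   # "money"
}

sim_related = cosine(vectors["rio"], vectors["ribeiro"])
sim_unrelated = cosine(vectors["rio"], vectors["dinheiro"])
```

Words sharing contexts ("rio"/"ribeiro") score near 1, while words with disjoint contexts score low, which is exactly the signal the word similarity tests probe.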

  13. Daily river flow prediction based on Two-Phase Constructive Fuzzy Systems Modeling: A case of hydrological - meteorological measurements asymmetry

    Science.gov (United States)

    Bou-Fakhreddine, Bassam; Mougharbel, Imad; Faye, Alain; Abou Chakra, Sara; Pollet, Yann

    2018-03-01

    Accurate daily river flow forecasts are essential in many applications of water resources such as hydropower operation, agricultural planning and flood control. This paper presents a forecasting approach to deal with a newly addressed situation where hydrological data exist for a period longer than that of meteorological data (measurements asymmetry). In fact, one of the potential solutions to the measurements asymmetry issue is data re-sampling: either only the hydrological data, or only the balanced part of the hydro-meteorological data set, is considered during the forecasting process. However, the main disadvantage is that potentially relevant information from the left-out data may be lost. In this research, the key output is a Two-Phase Constructive Fuzzy inference hybrid model that is implemented over the non-re-sampled data. The introduced modeling approach must be capable of exploiting the available data efficiently, with higher prediction efficiency relative to a Constructive Fuzzy model trained over the re-sampled data set. The study was applied to the Litani River in the Bekaa Valley, Lebanon, using 4 years of rainfall and 24 years of river flow daily measurements. A Constructive Fuzzy System Model (C-FSM) and a Two-Phase Constructive Fuzzy System Model (TPC-FSM) are trained. Upon validation, the second model showed competitive performance and accuracy, with the ability to preserve a higher day-to-day variability for 1, 3 and 6 days ahead. In fact, for the longest lead period, the C-FSM and TPC-FSM were able to explain respectively 84.6% and 86.5% of the actual river flow variation. Overall, the results indicate that the TPC-FSM model provides a better tool to capture extreme flows in the process of streamflow prediction.

  14. Hardware architecture and associated programming flow for the design of digital fault-tolerant systems

    International Nuclear Information System (INIS)

    Peyret, Thomas

    2014-01-01

    Whether in automotive applications subject to heat stress or in aerospace and nuclear fields subjected to cosmic, neutron and gamma radiation, the environment can lead to the development of faults in electronic systems. These faults, which can be transient or permanent, lead to erroneous results that are unacceptable in some application contexts. The use of so-called rad-hard components is sometimes compromised by their high cost and by supply problems associated with export rules. This thesis proposes a joint hardware and software approach, independent of integration technology, for using digital programmable devices in fault-generating environments. Our approach includes the definition of a Coarse-Grained Reconfigurable Architecture (CGRA) able to execute entire application code, together with all the hardware and software mechanisms needed to make it tolerant to transient and permanent faults. This is achieved by combining redundancy with dynamic reconfiguration of the CGRA, based on a library of configurations generated by a complete design flow. This flow maps code represented as a Control and Data Flow Graph (CDFG) onto the CGRA, directly producing a large number of different configurations, and thus exploits the full potential of the architecture. This work, which has been validated through experiments with applications in the field of signal and image processing, has been the subject of two publications in international conferences and of two patents. (author)
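The redundancy half of such fault-tolerant schemes can be illustrated in miniature by a triple-modular-redundancy (TMR) majority vote over three copies of a computation; this is a generic sketch, not the thesis's CGRA mechanism.

```python
from collections import Counter

def tmr_vote(a, b, c):
    """Majority vote over three redundant results.

    Returns (voted_value, fault_detected): any disagreement among the three
    copies flags a (possibly transient) fault, which a reconfiguration layer
    could then react to."""
    counts = Counter([a, b, c])
    value, n = counts.most_common(1)[0]
    return value, n < 3
```

For example, `tmr_vote(1, 1, 2)` masks the faulty third result while reporting that a fault occurred.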

  15. Evaluation of near-wall solution approaches for large-eddy simulations of flow in a centrifugal pump impeller

    Directory of Open Access Journals (Sweden)

    Zhi-Feng Yao

    2016-01-01

    The turbulent flow in a centrifugal pump impeller is bounded by complex surfaces, including blades, a hub and a shroud. The primary challenge of the flow simulation arises from the generation of a boundary layer between the surface of the impeller and the moving fluid. The principal objective is to evaluate the near-wall solution approaches that are typically used to deal with the flow in the boundary layer for the large-eddy simulation (LES) of a centrifugal pump impeller. Three near-wall solution approaches are tested: the wall-function approach, the wall-resolved approach, and the hybrid Reynolds-averaged Navier-Stokes (RANS)/LES approach. The simulation results are compared with experimental results obtained through particle imaging velocimetry (PIV) and laser Doppler velocimetry (LDV). It is found that the wall-function approach is more sparing of computational resources, while the other two approaches have the important advantage of providing highly accurate boundary layer flow predictions. The hybrid RANS/LES approach is suitable for predicting steady-flow features, such as time-averaged velocities and hydraulic losses. Although the wall-resolved approach is expensive in terms of computing resources, it exhibits a strong ability to capture small-scale vortices and to predict instantaneous velocities in the near-wall region of the impeller. The wall-resolved approach is thus recommended for the transient simulation of flows in centrifugal pump impellers.
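A minimal sketch of the wall-function idea: given the velocity u at the first off-wall grid point y, solve the log law u/u_tau = (1/kappa) ln(E y u_tau / nu) for the friction velocity u_tau by Newton iteration. The constants kappa = 0.41 and E = 9.8 are typical textbook values, not necessarily those used in the paper.

```python
import math

def friction_velocity(u, y, nu, kappa=0.41, E=9.8, tol=1e-10):
    """Solve the log law u/u_tau = (1/kappa) ln(E y u_tau / nu) for u_tau.

    Newton iteration starting from a laminar-sublayer guess; standard
    wall-function practice in finite-volume CFD codes."""
    ut = max(1e-6, math.sqrt(nu * u / y))   # laminar guess u_tau = sqrt(nu u / y)
    for _ in range(100):
        f = u / ut - math.log(E * y * ut / nu) / kappa
        df = -u / ut**2 - 1.0 / (kappa * ut)
        step = f / df
        ut -= step
        if abs(step) < tol:
            break
    return ut

# First grid point at y = 1 mm, u = 1 m/s, water-like nu = 1e-6 m^2/s.
u_tau = friction_velocity(u=1.0, y=1e-3, nu=1e-6)
```

The returned u_tau then sets the wall shear stress tau_w = rho u_tau^2 used as the boundary condition for the outer LES.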

  16. Exploitation of commercial remote sensing images: reality ignored?

    Science.gov (United States)

    Allen, Paul C.

    1999-12-01

    The remote sensing market is on the verge of being awash in commercial high-resolution images. Market estimates are based on the growing numbers of planned commercial remote sensing electro-optical, radar, and hyperspectral satellites and aircraft. EarthWatch, Space Imaging, SPOT, and RDL, among others, are all working towards launch and service of one- to five-meter panchromatic or radar-imaging satellites. Additionally, new advances in digital air surveillance and reconnaissance systems, both manned and unmanned, are also expected to expand the geospatial customer base. Regardless of platform, image type, or location, each system promises images with some combination of increased resolution, greater spectral coverage, reduced turn-around time (request-to-delivery), and/or reduced image cost. For the most part, however, market estimates for these new sources focus on the raw digital images (from collection to the ground station) while ignoring the requirements for a processing and exploitation infrastructure comprising exploitation tools, exploitation training, library systems, and image management systems. From this it would appear that the commercial imaging community has failed to learn the hard lessons of national government experience, choosing instead to ignore reality and replicate the bias of collection over processing and exploitation. While this trend may not impact the small-quantity users of today, it will certainly adversely affect the mid- to large-sized users of the future.

  17. Advanced Approach of Multiagent Based Buoy Communication

    Directory of Open Access Journals (Sweden)

    Gediminas Gricius

    2015-01-01

    Usually, a hydrometeorological information system is faced with great data flows, but the data levels are often excessive, depending on the observed region of the water. The paper presents advanced buoy communication technologies based on multiagent interaction and data exchange between several monitoring system nodes. The proposed management of buoy communication is based on a clustering algorithm, which enables the performance of the hydrometeorological information system to be enhanced. The experiment is based on the design and analysis of an inexpensive but reliable Baltic Sea autonomous monitoring network of buoys, able to continuously monitor and collect temperature, waviness, and other required data. The proposed approach of multiagent-based buoy communication enables all the data to be monitored from the coastal base station, despite limited transmission speeds, by setting different tasks for the agent-based buoy system according to the clustering information.

  18. Population genetics of four heavily exploited shark species around the Arabian Peninsula

    KAUST Repository

    Spaet, Julia L.Y.

    2015-05-01

    The northwestern Indian Ocean harbors a number of larger marine vertebrate taxa that warrant the investigation of genetic population structure given remarkable spatial heterogeneity in biological characteristics such as distribution, behavior, and morphology. Here, we investigate the genetic population structure of four commercially exploited shark species with different biological characteristics (Carcharhinus limbatus, Carcharhinus sorrah, Rhizoprionodon acutus, and Sphyrna lewini) between the Red Sea and all other water bodies surrounding the Arabian Peninsula. To assess intraspecific patterns of connectivity, we constructed statistical parsimony networks among haplotypes and estimated (1) population structure; and (2) time of most recent population expansion, based on mitochondrial control region DNA and a total of 20 microsatellites. Our analysis indicates that, even in smaller, less vagile shark species, there are no contemporary barriers to gene flow across the study region, while historical events, for example, Pleistocene glacial cycles, may have affected connectivity in C. sorrah and R. acutus. A parsimony network analysis provided evidence that Arabian S. lewini may represent a population segment that is distinct from other known stocks in the Indian Ocean, raising a new layer of conservation concern. Our results call for urgent regional cooperation to ensure the sustainable exploitation of sharks in the Arabian region.

  19. Teotihuacan, Tepeapulco, and obsidian exploitation.

    Science.gov (United States)

    Charlton, T H

    1978-06-16

    Current cultural ecological models of the development of civilization in central Mexico emphasize the role of subsistence production techniques and organization. The recent use of established and productive archeological surface survey techniques along natural corridors of communication between favorable niches for cultural development within the Central Mexican symbiotic region resulted in the location of sites that indicate an early development of a decentralized resource exploitation, manufacturing, and exchange network. The association of the development of this system with Teotihuacán indicates the importance such nonsubsistence production and exchange had in the evolution of this first central Mexican civilization. The later expansion of Teotihuacán into more distant areas of Mesoamerica was based on this resource exploitation model. Later civilizations centered at Tula and Tenochtitlán also used such a model in their expansion.

  20. Approaches for cytogenetic and molecular analyses of small flow-sorted cell populations from childhood leukemia bone marrow samples

    DEFF Research Database (Denmark)

    Obro, Nina Friesgaard; Madsen, Hans O.; Ryder, Lars Peter

    2011-01-01

    defined cell populations with subsequent analyses of leukemia-associated cytogenetic and molecular marker. The approaches described here optimize the use of the same tube of unfixed, antibody-stained BM cells for flow-sorting of small cell populations and subsequent exploratory FISH and PCR-based analyses....

  1. Hermite-Padé approximation approach to hydromagnetic flows in convergent-divergent channels

    International Nuclear Information System (INIS)

    Makinde, O.D.

    2005-10-01

    The problem of two-dimensional, steady, nonlinear flow of an incompressible conducting viscous fluid in convergent-divergent channels under the influence of an externally applied homogeneous magnetic field is studied using a special type of Hermite-Padé approximation approach. This semi-numerical scheme offers some advantages over solutions obtained with traditional methods such as finite differences, spectral methods and shooting methods. It reveals the analytical structure of the solution function, and the important properties of the overall flow structure, including the velocity field, flow reversal control and bifurcations, are discussed. (author)
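For flavor, ordinary (non-Hermite) Padé approximation can be computed from Taylor coefficients by a small linear solve; the paper's special Hermite-Padé variant generalizes this idea. The sketch below reproduces the classical [2/2] approximant of exp(x).

```python
import numpy as np

def pade(c, L, M):
    """[L/M] Padé approximant from Taylor coefficients c[0..L+M].

    Returns (num, den): numerator and denominator polynomial coefficients
    in ascending order, normalized so den[0] = 1. Ordinary Padé only; the
    Hermite-Padé construction used in the paper is more general."""
    c = np.asarray(c, dtype=float)
    # Denominator: solve sum_{j=1..M} den_j c_{L+k-j} = -c_{L+k}, k = 1..M.
    A = np.array([[c[L + k - j] if L + k - j >= 0 else 0.0
                   for j in range(1, M + 1)] for k in range(1, M + 1)])
    den = np.concatenate(([1.0], np.linalg.solve(A, -c[L + 1:L + M + 1])))
    # Numerator follows by convolving the series with the denominator.
    num = np.array([sum(den[j] * c[i - j] for j in range(min(i, M) + 1))
                    for i in range(L + 1)])
    return num, den

# [2/2] Padé of exp(x) from its Taylor series 1, 1, 1/2, 1/6, 1/24:
num, den = pade([1, 1, 1 / 2, 1 / 6, 1 / 24], 2, 2)
```

The result is the well-known (1 + x/2 + x²/12) / (1 - x/2 + x²/12), which matches exp(x) to fourth order.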

  2. Analyzing Unsaturated Flow Patterns in Fractured Rock Using an Integrated Modeling Approach

    International Nuclear Information System (INIS)

    Y.S. Wu; G. Lu; K. Zhang; L. Pan; G.S. Bodvarsson

    2006-01-01

    Characterizing percolation patterns in unsaturated fractured rock has posed a greater challenge to modeling investigations than comparable saturated zone studies, because of the heterogeneous nature of unsaturated media and the great number of variables impacting unsaturated flow. This paper presents an integrated modeling methodology for quantitatively characterizing percolation patterns in the unsaturated zone of Yucca Mountain, Nevada, a proposed underground repository site for storing high-level radioactive waste. The modeling approach integrates a wide variety of moisture, pneumatic, thermal, and isotopic geochemical field data into a comprehensive three-dimensional numerical model for modeling analyses. It takes into account the coupled processes of fluid and heat flow and chemical isotopic transport in Yucca Mountain's highly heterogeneous, unsaturated fractured tuffs. Modeling results are examined against different types of field-measured data and then used to evaluate different hydrogeological conceptualizations and the flow patterns they imply in the unsaturated zone. In particular, this model provides a much clearer understanding of percolation patterns and flow behavior through the unsaturated zone, both crucial issues in assessing repository performance. The integrated approach for quantifying Yucca Mountain's flow system is demonstrated to provide a practical modeling tool for characterizing flow and transport processes in complex subsurface systems.

  3. Child trafficking and commercial sexual exploitation: a review of promising prevention policies and programs.

    Science.gov (United States)

    Rafferty, Yvonne

    2013-10-01

    Child trafficking, including commercial sexual exploitation (CSE), is one of the fastest growing and most lucrative criminal activities in the world. The global enslavement of children affects countless numbers of victims who are trafficked within their home countries or transported away from their homes and treated as commodities to be bought, sold, and resold for labor or sexual exploitation. All over the world, girls are particularly likely to be trafficked into the sex trade: Girls and women constitute 98% of those who are trafficked for CSE. Health and safety standards in exploitative settings are generally extremely low, and the degree of experienced violence has been linked with adverse physical, psychological, and social-emotional development. The human-rights-based approach to child trafficking provides a comprehensive conceptual framework whereby victim-focused and law enforcement responses can be developed, implemented, and evaluated. This article highlights promising policies and programs designed to prevent child trafficking and CSE by combating demand for sex with children, reducing supply, and strengthening communities. The literature reviewed includes academic publications as well as international and governmental and nongovernmental reports. Implications for social policy and future research are presented. © 2013 American Orthopsychiatric Association.

  4. A Framework for Exploiting Internet of Things for Context-Aware Trust-Based Personalized Services

    Directory of Open Access Journals (Sweden)

    Abayomi Otebolaku

    2018-01-01

    In recent years, we have witnessed the introduction of the Internet of Things (IoT) as an integral part of the Internet, with billions of interconnected and addressable everyday objects. On one hand, these objects generate a massive volume of data that can be exploited to gain useful insights into our day-to-day needs. On the other hand, context-aware recommender systems (CARSs) are intelligent systems that assist users in making service consumption choices that satisfy their preferences based on their contextual situations. However, one of the key challenges facing the development and deployment of CARSs is the lack of functionality for providing the dynamic and reliable context information required by the recommendation decision process. Thus, data obtained from IoT objects and other sources can be exploited to build CARSs that satisfy users' preferences, improve quality of experience, and boost recommendation accuracy. This article describes the various components of a conceptual IoT-based framework for context-aware personalized recommendations. The framework addresses the weakness whereby CARSs rely on static and limited contexts from the user's mobile phone, by providing additional components for reliable and dynamic context information using IoT context sources. The core of the framework consists of a context classification and reasoning component and a dynamic user profile model, incorporating trust to improve the accuracy of context-aware personalized recommendations. Experimental evaluations show that incorporating context and trust into the personalized recommendation process can improve accuracy.

  5. A new measure of interpersonal exploitativeness

    Directory of Open Access Journals (Sweden)

    Amy B. Brunell

    2013-05-01

    Existing measures of exploitativeness evidence problems with validity and reliability. The present set of studies assessed a new measure (the Interpersonal Exploitativeness Scale) that defines exploitativeness in terms of reciprocity. In Studies 1 and 2, 33 items were administered to participants. Exploratory and confirmatory factor analysis demonstrated that a single factor consisting of six items adequately assesses interpersonal exploitativeness. Study 3 results revealed that the Interpersonal Exploitativeness Scale was positively associated with normal narcissism, pathological narcissism, psychological entitlement, and negative reciprocity, and negatively correlated with positive reciprocity. In Study 4, participants competed in a commons dilemma. Those who scored higher on the Interpersonal Exploitativeness Scale were more likely to harvest a greater share of resources over time, even when controlling for other relevant variables, such as entitlement. Together, these studies show the Interpersonal Exploitativeness Scale to be a valid and reliable measure of interpersonal exploitativeness. The authors discuss the implications of these studies.

  6. Towards breaking the spatial resolution barriers: An optical flow and super-resolution approach for sea ice motion estimation

    Science.gov (United States)

    Petrou, Zisis I.; Xian, Yang; Tian, YingLi

    2018-04-01

    Estimation of sea ice motion at fine scales is important for a number of regional and local level applications, including modeling of sea ice distribution, ocean-atmosphere and climate dynamics, as well as safe navigation and sea operations. In this study, we propose an optical flow and super-resolution approach to accurately estimate motion from remote sensing images at a higher spatial resolution than the original data. First, an external example learning-based super-resolution method is applied on the original images to generate higher resolution versions. Then, an optical flow approach is applied on the higher resolution images, identifying sparse correspondences and interpolating them to extract a dense motion vector field with continuous values and subpixel accuracies. Our proposed approach is successfully evaluated on passive microwave, optical, and Synthetic Aperture Radar data, proving appropriate for multi-sensor applications and different spatial resolutions. The approach estimates motion with similar or higher accuracy than the original data, while increasing the spatial resolution by up to eight times. In addition, the adopted optical flow component outperforms a state-of-the-art pattern matching method. Overall, the proposed approach results in accurate motion vectors with unprecedented spatial resolutions of up to 1.5 km for passive microwave data covering the entire Arctic and 20 m for radar data, and proves promising for numerous scientific and operational applications.
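The two-stage pipeline, super-resolve first and then estimate motion, can be caricatured with a nearest-neighbour upsampler and an exhaustive block-matching search (the paper instead uses example-learning super-resolution and a sparse-to-dense optical flow). All data below are synthetic.

```python
import numpy as np

def upsample2(img):
    """Toy 2x super-resolution stand-in: nearest-neighbour upsampling."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def match_shift(a, b, max_shift=4):
    """Estimate the integer (dy, dx) shift of image b relative to a by
    exhaustive search minimising the sum of squared differences over the
    central region (a crude pattern-matching motion estimator)."""
    best, best_err = (0, 0), np.inf
    h, w = a.shape
    m = max_shift
    core = a[m:h - m, m:w - m]
    for dy in range(-m, m + 1):
        for dx in range(-m, m + 1):
            cand = b[m + dy:h - m + dy, m + dx:w - m + dx]
            err = np.sum((core - cand) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

rng = np.random.default_rng(1)
lo = rng.random((32, 32))
hi = upsample2(lo)                        # "super-resolved" frame 1
hi2 = np.roll(hi, (3, -2), axis=(0, 1))   # frame 2: known drift of (3, -2)
print(match_shift(hi, hi2))               # → (3, -2)
```

Working on the upsampled grid is what lets the estimated displacement be finer than one pixel of the original imagery; the paper's dense optical flow refines this further to subpixel accuracy.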

  7. Density-based global sensitivity analysis of sheet-flow travel time: Kinematic wave-based formulations

    Science.gov (United States)

    Hosseini, Seiyed Mossa; Ataie-Ashtiani, Behzad; Simmons, Craig T.

    2018-04-01

    Despite advancements in developing physics-based formulations to estimate the sheet-flow travel time (tSHF), the relative impacts of the influential parameters on tSHF have not previously been quantified. In this study, a brief review is provided of the physics-based formulations for estimating tSHF, including kinematic wave (K-W) theory combined with Manning's roughness (K-M) and with the Darcy-Weisbach friction formula (K-D), over single and multiple planes. The relative significance of the input parameters to the developed approaches is then quantified by a density-based global sensitivity analysis (GSA). The performance of the K-M formula, considering zero-upstream and uniform flow depth (the so-called K-M1 and K-M2 variants), and of the K-D formula in estimating tSHF over a single plane surface was assessed using several sets of experimental data collected from previous studies. The suitability of the developed models for estimating tSHF over multiple planes under the temporal rainfall distributions of the Natural Resources Conservation Service, NRCS (I, Ia, II, and III), is scrutinized through several real-world examples. The results demonstrate that the main controlling parameters of tSHF through the K-D and K-M formulae are the length of the surface plane (mean sensitivity index T̂i = 0.72) and the flow resistance (mean T̂i = 0.52), respectively. Conversely, the flow temperature and the initial abstraction ratio of rainfall have the lowest influence on tSHF (mean T̂i of 0.11 and 0.12, respectively). The significant role of the flow regime in the estimation of tSHF over a single plane and a cascade of planes is also demonstrated. The results reveal that the K-D formulation provides more precise tSHF estimates over a single plane surface, with an average percentage of error (APE) equal to 9.23% (the APEs for the K-M1 and K-M2 formulae were 13.8% and 36.33%, respectively). The superiority of Manning-jointed formulae in the estimation of tSHF is due to the incorporation of effects from different flow regimes as
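For reference, a commonly cited kinematic-wave form of the sheet-flow travel time (the Morgali-Linsley-type combination of K-W theory with Manning's roughness) can be written directly as a function. The SI coefficient 6.92 varies slightly between sources, and the paper's K-M1/K-M2/K-D variants differ in their friction treatment, so treat this as a generic textbook sketch, not the paper's formulation.

```python
def sheet_flow_travel_time(L, n, S, i):
    """Kinematic-wave sheet-flow travel time (common SI form):

        t = 6.92 (n L)^0.6 / (i^0.4 S^0.3)

    with L the plane length (m), n Manning's roughness, S the slope (m/m),
    i the effective rainfall intensity (mm/h), and t in minutes. The 6.92
    coefficient is the usual SI value; sources differ slightly."""
    return 6.92 * (n * L) ** 0.6 / (i ** 0.4 * S ** 0.3)
```

Consistent with the paper's sensitivity ranking, the exponents make t grow fastest with plane length and roughness, while rainfall intensity enters only through the weaker i^0.4 term.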

  8. A numerical approach to the simulation of one-phase and two phase reactor coolant flow around nuclear fuel spacers

    International Nuclear Information System (INIS)

    Stosic, Z.V.; Stevanovic, V.D.

    2001-01-01

    A methodology for the simulation and analysis of one-phase and two-phase coolant flows around one or a row of spacers is presented. It is based on the multidimensional two-fluid mass, momentum and energy balance equations and application of adequate turbulence models. Necessary closure laws for interfacial transfer processes are presented. The stated general approach enables simulation and analyses of reactor coolant flow around spacers on different scale levels of the rod bundle geometry: detailed modelling of coolant flow around spacers and investigation of the influence of spacer's geometry on the coolant thermal-hydraulics, as well as prediction of global thermal-hydraulic parameters within the whole rod bundle with the investigation of the influence of rows of spacers on the bulk thermal-hydraulic processes. Sample problems are included illustrating these different modelling approaches. (author)

  9. An unstructured finite volume solver for two phase water/vapour flows based on an elliptic oriented fractional step method

    International Nuclear Information System (INIS)

    Mechitoua, N.; Boucker, M.; Lavieville, J.; Pigny, S.; Serre, G.

    2003-01-01

    Based on experience gained at EDF and CEA, a more general and robust 3-dimensional (3D) multiphase flow solver has been under development for over three years. This solver, based on an elliptic-oriented fractional step approach, is able to simulate multicomponent/multiphase flows. Discretization follows a 3D fully unstructured finite volume approach, with a collocated arrangement of all variables. The nonlinear behaviour between pressure and volume fractions and a symmetric treatment of all fields are taken into account in the iterative procedure within the time step. This strongly enforces the realizability of volume fractions (i.e., 0 < α < 1) without artificial numerical fixes. Applications to widespread test cases such as static sedimentation, water hammer and phase separation are shown to assess the accuracy and robustness of the flow solver in the different flow conditions encountered in nuclear reactor pipes. (authors)

  10. Flow Logic for Process Calculi

    DEFF Research Database (Denmark)

    Nielson, Hanne Riis; Nielson, Flemming; Pilegaard, Henrik

    2012-01-01

    Flow Logic is an approach to statically determining the behavior of programs and processes. It borrows methods and techniques from Abstract Interpretation, Data Flow Analysis and Constraint Based Analysis while presenting the analysis in a style more reminiscent of Type Systems. Traditionally developed for programming languages, this article provides a tutorial development of the approach of Flow Logic for process calculi based on a decade of research. We first develop a simple analysis for the π-calculus; this consists of the specification, semantic soundness (in the form of subject reduction) …, and finally, we extend it to a relational analysis. A Flow Logic is a program logic, in the same sense that a Hoare logic is. We conclude with an executive summary presenting the highlights of the approach from this perspective, including a discussion of theoretical properties as well as implementation…

  11. Category Theory Approach to Solution Searching Based on Photoexcitation Transfer Dynamics

    Directory of Open Access Journals (Sweden)

    Makoto Naruse

    2017-07-01

    Solution searching that is accompanied by combinatorial explosion is one of the most important issues in the age of artificial intelligence. Natural intelligence, which exploits natural processes for intelligent functions, is expected to help resolve or alleviate the difficulties of conventional computing paradigms and technologies. In fact, we have shown that a single-celled organism such as an amoeba can solve constraint satisfaction problems and related optimization problems, and we have demonstrated experimental systems based on non-organic processes such as optical energy transfer involving near-field interactions. However, the fundamental mechanisms and limitations behind solution searching based on natural processes have not yet been understood. Herein, we present a theoretical background for solution searching based on optical excitation transfer from a category-theoretic standpoint. One important indication inspired by category theory is that the satisfaction of short exact sequences is critical for an adequate computational operation that determines the flow of time for the system, termed "short-exact-sequence-based time." In addition, the octahedral and braid structures known from triangulated categories provide a clear understanding of the underlying mechanisms, including a quantitative indication of the difficulty of obtaining solutions based on homology dimension. This study contributes a fundamental background for natural intelligence.

  12. Proper orthogonal decomposition-based estimations of the flow field from particle image velocimetry wall-gradient measurements in the backward-facing step flow

    International Nuclear Information System (INIS)

    Nguyen, Thien Duy; Wells, John Craig; Mokhasi, Paritosh; Rempfer, Dietmar

    2010-01-01

    In this paper, particle image velocimetry (PIV) results from the recirculation zone of a backward-facing step flow, for which the Reynolds number is 2800 based on bulk velocity upstream of the step and step height (h = 16.5 mm), are used to demonstrate the capability of proper orthogonal decomposition (POD)-based measurement models. Three-component PIV velocity fields are decomposed by POD into a set of spatial basis functions and a set of temporal coefficients. The measurement models are built to relate the low-order POD coefficients, determined from an ensemble of 1050 PIV fields by the 'snapshot' method, to the time-resolved wall gradients, measured by a near-wall measurement technique called stereo interfacial PIV. These models are evaluated in terms of reconstruction and prediction of the low-order temporal POD coefficients of the velocity fields. In order to determine the estimation coefficients of the measurement models, linear stochastic estimation (LSE), quadratic stochastic estimation (QSE), principal component regression (PCR) and kernel ridge regression (KRR) are applied. We denote such approaches as LSE-POD, QSE-POD, PCR-POD and KRR-POD. In addition to comparing the accuracy of measurement models, we introduce multi-time POD-based estimations in which past and future information of the wall-gradient events is used separately or combined. The results show that the multi-time estimation approaches can improve the prediction process. Among these approaches, the proposed multi-time KRR-POD estimation with an optimized window of past wall-gradient information yields the best prediction. Such a multi-time KRR-POD approach offers a useful tool for real-time flow estimation of the velocity field based on wall-gradient data.
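The snapshot-POD plus linear-stochastic-estimation (LSE-POD) chain can be sketched on synthetic data: decompose the snapshot matrix by SVD, keep the low-order temporal coefficients, and regress them on wall-gradient signals by least squares. Everything below (field sizes, sensor counts, noise levels) is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins: 200 velocity snapshots (500-point fields) driven by
# two hidden temporal modes, plus 4 wall-gradient "sensors" that are
# (noisy) linear functions of the same modes.
nt, nx = 200, 500
t = np.linspace(0, 20, nt)
modes = np.column_stack([np.sin(t), np.cos(2 * t)])
phi_true = rng.standard_normal((nx, 2))
U = modes @ phi_true.T + 0.01 * rng.standard_normal((nt, nx))
wall = modes @ rng.standard_normal((2, 4)) + 0.01 * rng.standard_normal((nt, 4))

# Snapshot POD: SVD of the mean-subtracted snapshot matrix.
Um = U - U.mean(axis=0)
_, _, Vt = np.linalg.svd(Um, full_matrices=False)
coeffs = Um @ Vt[:2].T                 # low-order temporal POD coefficients

# LSE-POD: estimate the POD coefficients linearly from the wall gradients.
B = np.linalg.lstsq(wall, coeffs, rcond=None)[0]
pred = wall @ B
err = np.linalg.norm(pred - coeffs) / np.linalg.norm(coeffs)
```

Because the wall signals here are nearly linear in the hidden modes, the relative estimation error `err` is small; the paper's QSE/PCR/KRR variants replace the plain least-squares regression with richer estimators.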

  13. Robust optimization-based DC optimal power flow for managing wind generation uncertainty

    Science.gov (United States)

    Boonchuay, Chanwit; Tomsovic, Kevin; Li, Fangxing; Ongsakul, Weerakorn

    2012-11-01

    Integrating wind generation into the wider grid causes a number of challenges to traditional power system operation. Given the relatively large wind forecast errors, congestion management tools based on optimal power flow (OPF) need to be improved. In this paper, a robust optimization (RO)-based DCOPF is proposed to determine the optimal generation dispatch and locational marginal prices (LMPs) for a day-ahead competitive electricity market considering the risk of dispatch cost variation. The basic concept is to use the dispatch to hedge against the possibility of reduced or increased wind generation. The proposed RO-based DCOPF is compared with a stochastic non-linear programming (SNP) approach on a modified PJM 5-bus system. Primary test results show that the proposed DCOPF model can provide lower dispatch cost than the SNP approach.
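The hedging idea, dispatching against the worst-case wind realization, can be reduced to a toy merit-order problem; the paper's RO-based DCOPF additionally enforces network constraints and produces LMPs, which this sketch deliberately omits.

```python
def robust_dispatch(demand, wind_lo, gens):
    """Merit-order dispatch hedged against the worst-case (lowest) wind.

    demand:  system load in MW
    wind_lo: lower bound of the wind-generation uncertainty interval (MW)
    gens:    list of (cost_per_MWh, capacity_MW) thermal units
    Returns ([(cost, output_MW), ...], total_cost). Toy illustration of the
    hedging idea only, with no network model."""
    residual = max(0.0, demand - wind_lo)   # load left after worst-case wind
    out, cost = [], 0.0
    for c, cap in sorted(gens):             # cheapest units first
        g = min(cap, residual)
        out.append((c, g))
        cost += c * g
        residual -= g
    if residual > 1e-9:
        raise ValueError("insufficient capacity under worst-case wind")
    return out, cost
```

For example, with 100 MW of load, at least 20 MW of wind, and units priced at 30 and 50 $/MWh, the robust schedule commits enough thermal capacity to cover the 80 MW worst-case residual; any wind above the lower bound then simply displaces the expensive unit.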

  14. Transformation of Commercial Flows into Physical Flows of Electricity – Flow Based Method

    Directory of Open Access Journals (Sweden)

    M. Adamec

    2009-01-01

    We are witnesses of large-scale electricity transport between European countries under the umbrella of the UCTE organization, due to the inability of generators to satisfy the growing consumption in some regions. In this context, we distinguish between two types of flow. The first type is physical flow, which causes costs in the transmission grid, whilst the second type is commercial flow, which provides revenues for the market participants. The old methods for allocating transfer capacity fail to take this duality into account. Old methods that allocate transmission border capacity to "virtual" commercial flows which, in fact, will not flow over this border do not lead to optimal allocation: some flows are uselessly rejected and, conversely, some accepted flows can cause congestion on another border. The Flow-Based Allocation (FBA) method aims to solve this problem. Another goal of FBA is to ensure the sustainable expansion of transmission capacity. Transmission capacity is important because it represents a way to establish better transmission system stability, and it provides a distribution channel for electricity to customers abroad. For optimal development, it is necessary to ensure the right division of revenue among the market participants. This paper contains a brief description of the FBA method. Problems of revenue maximization and optimal revenue distribution are mentioned.
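The distinction between commercial and physical flows is conveniently expressed with Power Transfer Distribution Factors (PTDFs): physical line flows are the PTDF matrix applied to the net commercial injections. The toy three-node network below uses the textbook values for three identical lines; real flow-based allocation uses PTDFs computed from the actual grid model.

```python
import numpy as np

# Three nodes A, B, C joined by equal-reactance lines A-B, B-C and A-C.
# Each PTDF row gives the MW flowing on a line per MW injected at a node
# (withdrawn at the slack node C). For three identical lines, a 1 MW
# A-to-C trade splits 2/3 over the direct line and 1/3 over the detour.
ptdf = np.array([
    [1 / 3, -1 / 3],   # line A-B   (columns: injection at A, at B)
    [1 / 3,  2 / 3],   # line B-C
    [2 / 3,  1 / 3],   # line A-C
])

# Commercial exchange: A sells 300 MW to C.
injections = np.array([300.0, 0.0])    # net injection at A and B; C is slack
flows = ptdf @ injections
print(flows)   # the single A->C trade physically loads all three lines
```

The point of the example is the loop flow: a purely A-to-C commercial transaction puts 100 MW on the A-B and B-C borders, which is exactly the effect that border-by-border "virtual flow" allocation ignores and flow-based allocation captures.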

  15. Error analysis of satellite attitude determination using a vision-based approach

    Science.gov (United States)

    Carozza, Ludovico; Bevilacqua, Alessandro

    2013-09-01

    Improvements in communication and processing technologies have opened the doors to exploiting on-board cameras to compute an object's spatial attitude using only the visual information from sequences of remote sensed images. The strategies and the algorithmic approach used to extract such information affect the estimation accuracy of the three-axis orientation of the object. This work presents a method for analyzing the most relevant error sources, including numerical ones, possible drift effects and their influence on the overall accuracy, referring to vision-based approaches. The method in particular focuses on the analysis of the image registration algorithm, carried out through purpose-built simulations. The overall accuracy has been assessed on a challenging case study, for which accuracy represents the fundamental requirement. In particular, attitude determination has been analyzed for small satellites, by comparing theoretical findings to metric results from simulations on realistic ground-truth data. Significant laboratory experiments, using a numerical control unit, have further confirmed the outcome. We believe that our analysis approach, as well as our findings in terms of error characterization, can be useful at proof-of-concept design and planning levels, since they emphasize the main sources of error for vision-based approaches employed for satellite attitude estimation. Nevertheless, the approach we present is also of general interest for all the affine applicative domains which require an accurate estimation of three-dimensional orientation parameters (i.e., robotics, airborne stabilization).

  16. EXPLOITATION OF GRANITE BOULDER

    Directory of Open Access Journals (Sweden)

    Ivan Cotman

    1994-12-01

    Full Text Available The processes of forming, petrography, features, properties and exploitation of granite boulders are described. Directional drilling and black powder blasting is a successful method for the exploitation of granite boulders (boulder technology). The paper is published in Croatian.

  17. A Fuzzy Logic-Based Approach for Estimation of Dwelling Times of Panama Metro Stations

    Directory of Open Access Journals (Sweden)

    Aranzazu Berbey Alvarez

    2015-04-01

    Full Text Available Passenger flow modeling and station dwelling time estimation are significant elements for railway mass transit planning, but system operators usually have limited information to model the passenger flow. In this paper, an artificial-intelligence technique known as fuzzy logic is applied to the estimation of the elements of the origin-destination matrix and the dwelling time of stations in a railway transport system. The fuzzy inference engine used in the algorithm is based on the principle of maximum entropy. The approach considers passengers’ preferences when assigning a level of congestion to each car of the train as a function of the properties of the station platforms. This approach is implemented to estimate the passenger flow and dwelling times of the recently opened Line 1 of the Panama Metro. The dwelling times obtained from the simulation are compared to real measurements to validate the approach.
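A minimal fuzzy-inference sketch of the dwell-time idea: passenger flow is fuzzified with triangular membership functions and the dwell time is obtained by weighted-average defuzzification over the fired rules. The membership functions and rule outputs below are invented, not the paper's calibrated rule base.

```python
# Illustrative fuzzy estimation of station dwell time from passenger flow.
# All fuzzy sets and rule consequents are assumptions for the sketch.

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Rules: (membership of passenger flow in pax/min, dwell-time consequent in s)
RULES = [
    (lambda f: tri(f, -1, 0, 40),    20.0),  # low flow    -> short dwell
    (lambda f: tri(f, 20, 50, 80),   35.0),  # medium flow -> medium dwell
    (lambda f: tri(f, 60, 100, 141), 55.0),  # high flow   -> long dwell
]

def dwell_time(flow):
    """Weighted average of rule consequents by their firing strengths."""
    fired = [(mu(flow), out) for mu, out in RULES]
    den = sum(m for m, _ in fired)
    return sum(m * out for m, out in fired) / den if den > 0 else 0.0
```

For example, a flow of 50 pax/min fires only the medium rule fully and yields a 35 s dwell; intermediate flows blend adjacent rules smoothly.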

  18. Rationalising predictors of child sexual exploitation and sex-trading.

    Science.gov (United States)

    Klatt, Thimna; Cavner, Della; Egan, Vincent

    2014-02-01

    Although there is evidence for specific risk factors leading to child sexual exploitation and prostitution, these influences overlap and have rarely been examined concurrently. The present study examined case files for 175 young persons who attended a voluntary organization in Leicester, United Kingdom, which supports people who are sexually exploited or at risk of sexual exploitation. Based on the case files, the presence or absence of known risk factors for becoming a sex worker was coded. Data were analyzed using t-tests, logistic regression, and smallest space analysis. Users of the voluntary organization's services who had been sexually exploited exhibited a significantly greater number of risk factors than service users who had not been victims of sexual exploitation. The logistic regression produced a significant model fit. However, of the 14 potential predictors (many of which were associated with each other), only four variables significantly predicted actual sexual exploitation: running away, poverty, drug and/or alcohol use, and having friends or family members in prostitution. Surprisingly, running away was found to significantly decrease the odds of becoming involved in sexual exploitation. Smallest space analysis of the data revealed five clusters of risk factors. Two of the clusters, which reflected a desperation-and-need construct and immature or out-of-control lifestyles, were significantly associated with sexual exploitation. Our research suggests that some risk factors (e.g. physical and emotional abuse, early delinquency, and homelessness) for becoming involved in sexual exploitation are common but are part of the problematic milieu of the individuals affected and not directly associated with sex trading itself. Our results also indicate that it is important to engage with the families and associates of young persons at risk of becoming (or remaining) a sex worker if one wants to reduce the number of persons who engage in this activity.

  19. Improving Simulations of Extreme Flows by Coupling a Physically-based Hydrologic Model with a Machine Learning Model

    Science.gov (United States)

    Mohammed, K.; Islam, A. S.; Khan, M. J. U.; Das, M. K.

    2017-12-01

    With the large number of hydrologic models presently available, along with global weather and geographic datasets, the streamflow of almost any river in the world can be readily modeled. If a reasonable amount of observed data from that river is available, simulations of high accuracy can often be achieved by calibrating the model parameters against those observations through inverse modeling. Although such calibrated models can succeed in simulating the general trend or mean of the observed flows very well, more often than not they fail to adequately simulate the extreme flows. This causes difficulty in tasks such as generating reliable projections of future changes in extreme flows due to climate change, an important task given how closely floods and droughts are connected to people's lives and livelihoods. We propose an approach where the outputs of a physically-based hydrologic model are used as input to a machine learning model to better simulate the extreme flows. To demonstrate this offline-coupling approach, the Soil and Water Assessment Tool (SWAT) was selected as the physically-based hydrologic model, the Artificial Neural Network (ANN) as the machine learning model, and the Ganges-Brahmaputra-Meghna (GBM) river system as the study area. The GBM river system, located in South Asia, is the third largest in the world in terms of freshwater generated and forms the largest delta in the world. The flows of the GBM rivers were simulated separately in order to test the performance of the proposed approach in accurately simulating the extreme flows generated by basins that vary in size, climate, hydrology and anthropogenic intervention on stream networks. Results show that by post-processing the simulated flows of the SWAT models with ANN models, simulations of extreme flows can be significantly improved. The mean absolute errors in simulating annual maximum/minimum daily flows were minimized from 4967
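The offline-coupling idea above can be sketched in a few lines: the process model's simulated flows are post-processed by a learned correction fitted against observations. For brevity a closed-form least-squares fit stands in for the ANN, and all flow values are synthetic, not SWAT output.

```python
# Offline post-processing sketch: map (synthetic) process-model flows to
# observed flows with a learned correction. A linear least-squares fit is
# used here purely as a stand-in for the ANN in the abstract.

def fit_linear(sim, obs):
    """Closed-form least squares for y = a*x + b."""
    n = len(sim)
    mx, my = sum(sim) / n, sum(obs) / n
    sxx = sum((x - mx) ** 2 for x in sim)
    sxy = sum((x - mx) * (y - my) for x, y in zip(sim, obs))
    a = sxy / sxx
    return a, my - a * mx

# Synthetic example: the process model systematically underestimates peaks.
simulated = [100.0, 500.0, 1200.0, 2500.0]   # "model" annual-max flows
observed  = [110.0, 560.0, 1380.0, 2900.0]   # "gauge" annual-max flows
a, b = fit_linear(simulated, observed)
corrected = [a * x + b for x in simulated]   # post-processed flows
```

With a slope a > 1, the correction stretches the simulated peaks toward the observed extremes, which is exactly the behavior the coupling targets.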

  20. Study of the parabolic and elliptic approaches validities for a turbulent co-flowing jet

    Directory of Open Access Journals (Sweden)

    Mahmoud Houda

    2012-01-01

    Full Text Available An axisymmetric turbulent jet discharged into a co-flowing stream was studied with the aid of parabolic and elliptic approaches. The simulations were performed with two in-house codes. Detailed comparisons show good agreement with the corresponding experiments, and different behaviors of jet dilution were found in the initial region at different ranges of velocity ratios. It has been found that the two approaches give practically the same results for velocity ratios Ru ≤ 1.5. Beyond this value, the elliptic approach highlights the appearance of a fall-velocity zone, due to the presence of a trough of low pressure. This velocity fall is not detected by the parabolic approach, because of the jet entrainment by the ambient flow. The intensity of this entrainment is directly related to the difference between the primary (jet) and the secondary (co-flow) velocities. In fact, by increasing the velocity ratio Ru, the flux sucked in by the outer stream becomes more important; the velocity fall intensifies and changes into a recirculation zone for Ru ≥ 5.

  1. Quantification of ozone uptake at the stand level in a Pinus canariensis forest in Tenerife, Canary Islands: An approach based on sap flow measurements

    Energy Technology Data Exchange (ETDEWEB)

    Wieser, Gerhard [Division of Alpine Timberline Ecophysiology, Federal Research and Training Centre for Forests, Natural Hazards and Landscape, Rennweg 1, A-6020 Innsbruck (Austria)]. E-mail: gerhard.wieser@uibk.ac.at; Luis, Vanessa C. [Department of Plant Biology, Plant Physiology, University of La Laguna, Avda. Astrofisico Francisco Sanchez s/n, E-38207 La Laguna, Tenerife (Spain); Cuevas, Emilio [Izana Atmospheric Observatory, National Institute of Meteorology, La Marina, E-38071 Santa Cruz de Tenerife (Spain)

    2006-04-15

    Ozone uptake was studied in a pine forest in Tenerife, Canary Islands, an ecotone with strong seasonal changes in climate. The ambient ozone concentration showed a pronounced seasonal course, with high concentrations during the dry and warm period and low concentrations during the wet and cold season. Ozone uptake, by contrast, showed no clear seasonal trend, because canopy conductance decreased significantly with decreasing soil water availability and increasing vapour pressure deficit. Mean daily ozone uptake averaged 1.9 nmol m{sup -2} s{sup -1} during the wet and cold season, and 1.5 nmol m{sup -2} s{sup -1} during the warm and dry period. The corresponding daily mean ambient ozone concentrations were 42 and 51 nl l{sup -1}, respectively. We therefore conclude that in Mediterranean-type forest ecosystems the flux-based approach is better suited to risk assessment than an external, concentration-based approach. - Sap flow measurements can be used for estimating ozone uptake at the stand level and for parameterisation of O{sub 3} uptake models.

  2. The trafficking of women and girls in Taiwan: characteristics of victims, perpetrators, and forms of exploitation.

    Science.gov (United States)

    Huang, Lanying

    2017-11-09

    Prior to the passing of the 2009 Human Trafficking Prevention Act (HTPA), human trafficking was underestimated in Taiwan. In the past, domestic trafficking in women and girls often targeted vulnerable groups such as young girls from poor families or minority groups. Since the 1990s, an increasing flow of immigrant women into Taiwan, mainly from Vietnam and Indonesia and some from China, has created a new group of human trafficking victims. The current study intends to identify, describe, and categorize human trafficking cases involving women and girls reported and prosecuted under the HTPA in Taiwan. Using the court proceedings of prosecuted cases of trafficking in women and girls under Taiwan's HTPA from all 21 districts in Taiwan from 2009 to 2012, retrieved under the title keyword 'Human Trafficking', this study aims to categorize the different patterns of existing trafficking in women and girls in Taiwan. The analysis is based on 37 court cases, involving 195 victimized women and girls and 118 perpetrators. This study identifies six forms of human trafficking victimization according to the victims' country of origin, vulnerability status, and means of transport. The study found that women and girls suffer from both labor and sexual exploitation, mainly at the hands of domestic male perpetrators. While sexual exploitation is more evenly distributed among citizens and immigrants and affects both adults and minors, labor exploitation appears to occur exclusively among immigrant women workers in the data. Human trafficking cases in Taiwan share many similarities with human trafficking in other regions, being highly associated with gender inequality and gender-based vulnerability.

  3. Microbial Eco-Physiology of the human intestinal tract: a flow cytometric approach

    NARCIS (Netherlands)

    Amor, Ben K.

    2004-01-01

    This thesis describes a multifaceted approach to further enhance our view of the complex human intestinal microbial ecosystem. This approach combines the advantages of flow cytometry (FCM), a single-cell and high-throughput technology, and molecular techniques that have proven themselves to be

  4. The catchment based approach using catchment system engineering

    Science.gov (United States)

    Jonczyk, Jennine; Quinn, Paul; Barber, Nicholas; Wilkinson, Mark

    2015-04-01

    The catchment based approach (CaBA) has been championed as a potential mechanism for delivery of environmental directives such as the Water Framework Directive in the UK. However, since its launch in 2013, there has been only limited progress towards achieving sustainable, holistic management, with only a few examples of good practice (e.g. from the Tyne Rivers Trust). Common issues with developing catchment plans at a national scale include limited data and resources to identify issues and their sources, how to systematically identify suitable locations for measures, or suites of measures, that will have the biggest downstream impact, and how to overcome barriers to implementing solutions. Catchment System Engineering (CSE) is an interventionist approach to altering the catchment-scale runoff regime through the manipulation of hydrological flow pathways throughout the catchment. A significant component of runoff generation can be managed by targeting hydrological flow pathways at source, such as overland flow, field drain and ditch function, greatly reducing erosive soil losses. Coupled with management of farm nutrients at source, many runoff attenuation features or measures can be co-located to achieve benefits for water quality and biodiversity. A community-led catchment mitigation plan using the CSE approach will be presented for a catchment in Northumberland, Northern England, demonstrating a generic framework for the identification of multi-purpose features that slow, store and filter runoff at strategic locations in the landscape. Measures include within-field barriers, edge-of-field traps and within-ditch measures. Progress on the implementation of measures will be reported alongside potential impacts on the runoff regime at both local and catchment scale, and costs.

  5. The Combination of Micro Diaphragm Pumps and Flow Sensors for Single Stroke Based Liquid Flow Control.

    Science.gov (United States)

    Jenke, Christoph; Pallejà Rubio, Jaume; Kibler, Sebastian; Häfner, Johannes; Richter, Martin; Kutter, Christoph

    2017-04-03

    With the combination of micropumps and flow sensors, highly accurate and secure closed-loop-controlled micro dosing systems for liquids are possible. Implementing a single-stroke-based control mode with piezoelectrically driven micro diaphragm pumps can provide a solution for dosing volumes down to nanoliters, or variable average flow rates in the range of nL/min to μL/min. However, sensor technologies feature a yet undetermined accuracy for measuring highly pulsatile micropump flow. Two miniaturizable in-line sensor types providing electrical readout, differential-pressure-based flow sensors and thermal calorimetric flow sensors, are evaluated for their suitability for combination with micropumps. Single-stroke-based calibration of the sensors was carried out with a new method comparing displacement volumes and sensor flow volumes. Limitations of accuracy and performance for single-stroke-based flow control are described. Results showed that, besides particle robustness of the sensors, controlling resistive and capacitive damping are key aspects for setting up reproducible and reliable liquid dosing systems. Depending on the required average flow or defined volume, dosing systems with an accuracy better than 5% for the differential-pressure-based sensor and better than 6.5% for the thermal calorimeter were achieved.
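The single-stroke calibration idea, comparing the time-integral of the sensed flow over one stroke against the pump's known displacement volume, can be sketched as follows. The flow trace and displacement value are synthetic, not the paper's measurements.

```python
# Single-stroke volume check: integrate a sampled (pulsatile) flow-rate trace
# over one pump stroke and compare it to the displacement volume.

def stroke_volume(flow_rates_nl_s, dt_s):
    """Trapezoidal integration of a sampled flow-rate trace (nL/s) over one stroke."""
    volume = 0.0
    for q0, q1 in zip(flow_rates_nl_s, flow_rates_nl_s[1:]):
        volume += 0.5 * (q0 + q1) * dt_s
    return volume  # nL

# Synthetic pulsatile trace for one stroke, sampled every 1 ms
trace = [0.0, 40.0, 80.0, 40.0, 0.0]       # nL/s
measured = stroke_volume(trace, 0.001)      # sensor-side stroke volume, ≈ 0.16 nL
displacement = 0.17                          # nL, from pump geometry (assumed)
error_pct = abs(measured - displacement) / displacement * 100
```

The per-stroke discrepancy (here a few percent) is exactly the quantity the paper's accuracy figures for the two sensor types refer to.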

  6. Numerical simulations and mathematical models of flows in complex geometries

    DEFF Research Database (Denmark)

    Hernandez Garcia, Anier

    The research work of the present thesis was mainly aimed at exploiting one of the strengths of the Lattice Boltzmann methods, namely, the ability to handle complicated geometries, to accurately simulate flows in complex geometries. In this thesis, we perform a very detailed theoretical analysis...... and through the Chapman-Enskog multi-scale expansion technique the dependence of the kinetic viscosity on each scheme is investigated. Seeking optimal numerical schemes to efficiently simulate a wide range of complex flows, a variant of the finite element, off-lattice Boltzmann method [5], which uses...... the characteristic based integration, is also implemented. Using the latter scheme, numerical simulations are conducted for flows of different complexities: flow in a (real) porous network and turbulent flows in ducts with wall irregularities. From the simulations of flows in porous media driven by pressure gradients...

  7. Simulation model for centrifugal pump in flow networks based on internal characteristics

    International Nuclear Information System (INIS)

    Sun, Ji-Lin; Xue, Ruo-Jun; Peng, Min-Jun

    2018-01-01

    For the simulation of centrifugal pumps in flow network systems, three approaches can in general be used: the fitting model, the numerical method and the internal characteristics model. The fitting model is simple and fast and is thus widely used. The numerical method can provide more detailed information in comparison with the fitting model, but increases implementation complexity and computational cost. For real-time simulations of flow networks, to simulate conditions away from the rated condition, especially in volume flow rate, where the accuracy of the fitting model is not credible, a new method for simulating centrifugal pumps was proposed in this research. The method is based on the theoretical head and hydraulic losses in centrifugal pumps, and cavitation is also considered. The simulation results are verified against experimental benchmark data from an actual pump. The comparison confirms that the proposed method fits the flow-head curves well, and the responses of the main parameters in dynamic-state operations are consistent with theoretical analyses.

  8. Accelerated Simplified Swarm Optimization with Exploitation Search Scheme for Data Clustering.

    Directory of Open Access Journals (Sweden)

    Wei-Chang Yeh

    Full Text Available Data clustering is commonly employed in many disciplines. The aim of clustering is to partition a set of data into clusters, in which objects within the same cluster are similar to one another and dissimilar to objects that belong to different clusters. Over the past decade, evolutionary algorithms have been commonly used to solve clustering problems. This study presents a novel algorithm based on simplified swarm optimization, an emerging population-based stochastic optimization approach with the advantages of simplicity, efficiency, and flexibility. The approach combines variable vibrating search (VVS) and rapid centralized strategy (RCS) in dealing with the clustering problem. VVS is an exploitation search scheme that can refine the quality of solutions by searching the extreme points near the global best position. RCS is developed to accelerate the convergence rate of the algorithm by using the arithmetic average. To empirically evaluate the performance of the proposed algorithm, experiments are conducted on 12 benchmark datasets, and the corresponding results are compared with recent works. Results of statistical analysis indicate that the proposed algorithm is competitive in terms of the quality of solutions.
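The arithmetic-average idea behind RCS can be illustrated in one dimension: moving a candidate cluster centre to the mean of its assigned points never increases the within-cluster sum of squared errors. The points and candidate position below are toy values, not the benchmark data.

```python
# One-dimensional illustration of the arithmetic-average (RCS-style) centre
# refinement used to accelerate convergence in clustering.

def sse(points, centre):
    """Within-cluster sum of squared errors for a single centre."""
    return sum((p - centre) ** 2 for p in points)

points = [1.0, 2.0, 3.0, 10.0]               # points assigned to one cluster
candidate_centre = 5.0                        # e.g. a particle's current position
mean_centre = sum(points) / len(points)       # arithmetic-average refinement

improved = sse(points, mean_centre) <= sse(points, candidate_centre)
```

The mean is the unique minimizer of the squared-error objective for a fixed assignment, which is why the averaging step can only help the swarm's candidate solutions.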

  9. Spectrally-balanced chromatic approach-lighting system

    Science.gov (United States)

    Chase, W. D.

    1977-01-01

    Approach lighting system employing combinations of red and blue lights reduces the problem of color-based optical illusions. The system exploits the inherent chromatic aberration of the eye to create a three-dimensional effect, giving the pilot visual cues of position.

  10. Internet-Based Approaches to Building Stakeholder Networks for Conservation and Natural Resource Management.

    Science.gov (United States)

    Social network analysis (SNA) is based on a conceptual network representation of social interactions and is an invaluable tool for conservation professionals to increase collaboration, improve information flow, and increase efficiency. We present two approaches to constructing in...

  11. Comparison of differential pressure model based on flow regime for gas/liquid two-phase flow

    International Nuclear Information System (INIS)

    Dong, F; Zhang, F S; Li, W; Tan, C

    2009-01-01

    Gas/liquid two-phase flow in horizontal pipes is very common in many industrial processes. Because of its complexity and variability, real-time parameter measurement of two-phase flow, such as measurement of flow regime and flow rate, is a difficult issue in engineering and science. Flow regime recognition plays a fundamental role in gas/liquid two-phase flow measurement: other parameters of two-phase flow can be measured more easily and correctly once the flow regime has been correctly recognized. A multi-sensor system is introduced for flow regime recognition and mass flow rate measurement. The fusion system consists of a temperature sensor, a pressure sensor, a cross-section information system and a V-cone flow meter. After flow regime recognition by the cross-section information system, a comparison of four typical differential pressure (DP) models is discussed based on the DP signal of the V-cone flow meter. Eventually, an optimum DP model is chosen for each flow regime. The experimental results of mass flow rate measurement show that it is efficient to classify the DP models by flow regime.
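The regime-based model selection can be sketched as a simple dispatch: once the regime is recognized, the mass flow rate is computed with the DP model chosen for that regime. The model forms and coefficients below are placeholders for the sketch, not the paper's calibrated DP models.

```python
# Dispatch of a differential-pressure (DP) model by recognized flow regime.
# The sqrt(DP) form and the per-regime coefficients are assumed placeholders.

import math

MODELS = {
    "bubble":     lambda dp: 1.00 * math.sqrt(dp),
    "stratified": lambda dp: 0.85 * math.sqrt(dp),
    "annular":    lambda dp: 0.70 * math.sqrt(dp),
}

def mass_flow(regime, dp):
    """Compute mass flow rate with the optimum DP model for the regime."""
    return MODELS[regime](dp)
```

The point of the paper's fusion system is precisely that the same DP signal yields different mass flow estimates under different regimes, so the recognition step must come first.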

  12. File-based data flow in the CMS Filter Farm

    Science.gov (United States)

    Andre, J.-M.; Andronidis, A.; Bawej, T.; Behrens, U.; Branson, J.; Chaze, O.; Cittolin, S.; Darlea, G.-L.; Deldicque, C.; Dobson, M.; Dupont, A.; Erhan, S.; Gigi, D.; Glege, F.; Gomez-Ceballos, G.; Hegeman, J.; Holzner, A.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; Morovic, S.; Nunez-Barranco-Fernandez, C.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Racz, A.; Roberts, P.; Sakulin, H.; Schwick, C.; Stieger, B.; Sumorok, K.; Veverka, J.; Zaza, S.; Zejdl, P.

    2015-12-01

    During the LHC Long Shutdown 1, the CMS Data Acquisition system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and prepare the ground for future upgrades of the detector front-ends. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. This approach provides additional decoupling between the HLT algorithms and the input and output data flow. All the metadata needed for bookkeeping of the data flow and the HLT process lifetimes are also generated in the form of small “documents” using the JSON encoding, by either services in the flow of the HLT execution (for rates etc.) or watchdog processes. These “files” can remain memory-resident or be written to disk if they are to be used in another part of the system (e.g. for aggregation of output data). We discuss how this redesign improves the robustness and flexibility of the CMS DAQ and the performance of the system currently being commissioned for the LHC Run 2.

  13. File-Based Data Flow in the CMS Filter Farm

    Energy Technology Data Exchange (ETDEWEB)

    Andre, J.M.; et al.

    2015-12-23

    During the LHC Long Shutdown 1, the CMS Data Acquisition system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and prepare the ground for future upgrades of the detector front-ends. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. This approach provides additional decoupling between the HLT algorithms and the input and output data flow. All the metadata needed for bookkeeping of the data flow and the HLT process lifetimes are also generated in the form of small “documents” using the JSON encoding, by either services in the flow of the HLT execution (for rates etc.) or watchdog processes. These “files” can remain memory-resident or be written to disk if they are to be used in another part of the system (e.g. for aggregation of output data). We discuss how this redesign improves the robustness and flexibility of the CMS DAQ and the performance of the system currently being commissioned for the LHC Run 2.

  14. Adaptive probabilistic collocation based Kalman filter for unsaturated flow problem

    Science.gov (United States)

    Man, J.; Li, W.; Zeng, L.; Wu, L.

    2015-12-01

    The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, a relatively large ensemble size is usually required to guarantee its accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs polynomial chaos expansion to approximate the original system. In this way, the sampling error can be reduced. However, PCKF suffers from the so-called "curse of dimensionality": when the system nonlinearity is strong and the number of parameters is large, PCKF is even more computationally expensive than EnKF. Motivated by recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and the active PCE basis functions are adaptively selected. The "restart" technique is used to alleviate the inconsistency between model parameters and states. The performance of RAPCKF is tested on numerical cases of unsaturated flow. It is shown that RAPCKF is more efficient than EnKF at the same computational cost. Compared with the traditional PCKF, RAPCKF is more applicable to strongly nonlinear and high-dimensional problems.
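For reference, the baseline EnKF analysis step that the collocation-based filters are compared against can be sketched for a scalar state. All values (prior spread, observation, error variance) are synthetic, chosen only to show the ensemble pulled toward the observation.

```python
# Minimal ensemble Kalman filter analysis step for a scalar state.
# Prior ensemble, observation, and error variance are invented toy values.

import random

def enkf_update(ensemble, obs, obs_err_var, rng):
    """Shift each member toward a perturbed observation using the
    ensemble-estimated Kalman gain var/(var + R)."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var / (var + obs_err_var)
    return [x + gain * (obs + rng.gauss(0.0, obs_err_var ** 0.5) - x)
            for x in ensemble]

rng = random.Random(0)
prior = [rng.gauss(1.0, 0.5) for _ in range(200)]   # prior state guesses
posterior = enkf_update(prior, 2.0, 0.05, rng)      # assimilate obs = 2.0
```

The relatively large ensemble (200 members here) needed for a stable variance estimate is exactly the Monte Carlo cost that PCKF-type methods aim to reduce.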

  15. Reduced order model of draft tube flow

    International Nuclear Information System (INIS)

    Rudolf, P; Štefan, D

    2014-01-01

    Swirling flow with compact coherent structures is a very good candidate for proper orthogonal decomposition (POD), i.e. for decomposition into eigenmodes, which are the cornerstones of the flow field. The present paper focuses on POD of steady flows corresponding to different operating points of Francis turbine draft tube flow. A set of eigenmodes is built using a limited number of snapshots from computational simulations. The resulting reduced order model (ROM) describes the whole operating range of the draft tube. The ROM enables interpolation between the operating points, exploiting the knowledge about the significance of particular eigenmodes, and thus reconstruction of the velocity field at any operating point within the given range. A practical example, which employs axisymmetric simulations of the draft tube flow, illustrates the accuracy of the ROM in regions without vortex breakdown, together with the need for higher resolution of the snapshot database close to locations of sudden flow changes (e.g. vortex breakdown). A ROM based on POD interpolation is a very suitable tool for insight into the flow physics of draft tube flows (especially energy transfers between different operating points), for supplying data for subsequent stability analysis, or as an initialization database for advanced flow simulations.
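The coefficient-interpolation idea behind such a ROM can be sketched with toy data: each snapshot is written as mean field plus coefficients times POD modes, and a field at an intermediate operating point is reconstructed by interpolating the coefficients. The mode, coefficients, and operating-point values below are invented.

```python
# ROM interpolation sketch: field(op) = mean + a(op) * mode, with the modal
# coefficient a interpolated linearly between two known operating points.
# All numbers are toy values, not draft-tube data.

mean_field = [1.0, 1.0, 1.0, 1.0]        # time/ensemble-mean velocity samples
mode1 = [0.5, 0.5, -0.5, -0.5]           # a single toy POD mode
coeffs = {0.0: 2.0, 2.0: 4.0}            # operating point -> mode-1 coefficient

def reconstruct(op_point):
    """Linearly interpolate the POD coefficient, then rebuild the field."""
    (p0, a0), (p1, a1) = sorted(coeffs.items())
    a = a0 + (a1 - a0) * (op_point - p0) / (p1 - p0)
    return [m + a * phi for m, phi in zip(mean_field, mode1)]

field_mid = reconstruct(1.0)   # field halfway between the two operating points
```

With more modes the reconstruction is a sum over coefficients, and the accuracy depends on how densely the snapshot database samples regions of sudden flow change, as the abstract notes for vortex breakdown.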

  16. Economic value of ecological information in ecosystem-based natural resource management depends on exploitation history.

    Science.gov (United States)

    Essington, Timothy E; Sanchirico, James N; Baskett, Marissa L

    2018-02-13

    Ecosystem approaches to natural resource management are seen as a way to provide better outcomes for ecosystems and for people, yet the nature and strength of interactions among ecosystem components is usually unknown. Here we characterize the economic benefits of ecological knowledge through a simple model of fisheries that target a predator (piscivore) and its prey. We solve for the management (harvest) trajectory that maximizes net present value (NPV) for different ecological interactions and initial conditions that represent different levels of exploitation history. Optimal management trajectories generally approached similar harvest levels, but the pathways toward those levels varied considerably by ecological scenario. Application of the wrong harvest trajectory, which would happen if one type of ecological interaction were assumed but in fact another were occurring, generally led to only modest reductions in NPV. However, the risks were not equal across fleets: risks of incurring large losses of NPV and missing management targets were much higher in the fishery targeting piscivores, especially when piscivores were heavily depleted. Our findings suggest that the ecosystem approach might provide the greatest benefits when used to identify system states where management performs poorly with imperfect knowledge of system linkages so that management strategies can be adopted to avoid those states. Copyright © 2018 the Author(s). Published by PNAS.

  17. Acceleration of Gas Flow Simulations in Dual-Continuum Porous Media Based on the Mass-Conservation POD Method

    KAUST Repository

    Wang, Yi

    2017-09-12

    Reduced-order modeling approaches for gas flow in dual-porosity dual-permeability porous media are studied based on the proper orthogonal decomposition (POD) method combined with Galerkin projection. The typical modeling approach for non-porous-medium liquid flow problems is not appropriate for this compressible gas flow in a dual-continuum porous medium. The reason is that non-zero mass transfer for the dual-continuum system can be generated artificially via the typical POD projection, violating the mass-conservation nature and causing the failure of the POD modeling. A new POD modeling approach is proposed that considers the mass conservation of the whole matrix-fracture system. Computation can be accelerated by as much as 720 times with high precision (reconstruction errors as low as 7.69 × 10−4%~3.87% for the matrix and 8.27 × 10−4%~2.84% for the fracture).

  18. Acceleration of Gas Flow Simulations in Dual-Continuum Porous Media Based on the Mass-Conservation POD Method

    KAUST Repository

    Wang, Yi; Sun, Shuyu; Yu, Bo

    2017-01-01

    Reduced-order modeling approaches for gas flow in dual-porosity dual-permeability porous media are studied based on the proper orthogonal decomposition (POD) method combined with Galerkin projection. The typical modeling approach for non-porous-medium liquid flow problems is not appropriate for this compressible gas flow in a dual-continuum porous medium. The reason is that non-zero mass transfer for the dual-continuum system can be generated artificially via the typical POD projection, violating the mass-conservation nature and causing the failure of the POD modeling. A new POD modeling approach is proposed that considers the mass conservation of the whole matrix-fracture system. Computation can be accelerated by as much as 720 times with high precision (reconstruction errors as low as 7.69 × 10−4%~3.87% for the matrix and 8.27 × 10−4%~2.84% for the fracture).

  19. Silicon microfluidic flow focusing devices for the production of size-controlled PLGA based drug loaded microparticles.

    Science.gov (United States)

    Keohane, Kieran; Brennan, Des; Galvin, Paul; Griffin, Brendan T

    2014-06-05

    The increasing realisation of the impact of size and surface properties on the bio-distribution of drug loaded colloidal particles has driven the application of microfabrication technologies for the precise engineering of drug loaded microparticles. This paper demonstrates an alternative approach for producing size-controlled drug loaded PLGA based microparticles using silicon Microfluidic Flow Focusing Devices (MFFDs). Based on the precise geometry and dimensions of the flow focusing channel, microparticle size was successfully optimised by modifying the polymer type, disperse phase (Qd) flow rate, and continuous phase (Qc) flow rate. The microparticles produced ranged in size from 5 to 50 μm and were highly monodisperse (coefficient of variation <5%). A comparison of Ciclosporin (CsA) loaded PLGA microparticles produced by MFFDs vs conventional production techniques was also performed. MFFDs produced microparticles with a narrower size distribution profile relative to the conventional approaches. In-vitro release kinetics of CsA were found to be influenced by the production technique, with the MFFD approach demonstrating the slowest rate of release over 7 days (4.99 ± 0.26%). Finally, MFFDs were utilised to produce pegylated microparticles using the block co-polymer PEG-PLGA. In contrast to the smooth microparticles produced using PLGA, PEG-PLGA microparticles displayed a highly porous surface morphology and rapid CsA release, with 85 ± 6.68% CsA released after 24 h. The findings from this study demonstrate the utility of silicon MFFDs for the precise control of size and surface morphology of PLGA based microparticles with potential drug delivery applications. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Pressure from particle image velocimetry for convective flows: a Taylor’s hypothesis approach

    International Nuclear Information System (INIS)

    De Kat, R; Ganapathisubramani, B

    2013-01-01

    Taylor’s hypothesis is often applied in turbulent flow analysis to map temporal information into spatial information. Recent efforts in deriving pressure from particle image velocimetry (PIV) have proposed multiple approaches, each with its own weaknesses and strengths. Application of Taylor’s hypothesis allows us to counter the weaknesses of the Eulerian approach described by de Kat and van Oudheusden (2012 Exp. Fluids 52 1089–106). Two different approaches to using Taylor’s hypothesis in determining planar pressure are investigated: one where pressure is determined from volumetric PIV data and one where pressure is determined from time-resolved stereoscopic PIV data. A performance assessment on synthetic data shows that application of Taylor’s hypothesis can improve the determination of pressure from PIV data significantly compared with a time-resolved volumetric approach. The technique is then applied to time-resolved PIV data taken in a cross-flow plane of a turbulent jet (Ganapathisubramani et al 2007 Exp. Fluids 42 923–39). Results appear to indicate that pressure can indeed be obtained from PIV data in turbulent convective flows using the Taylor’s hypothesis approach, in situations where no other methods to determine pressure are available. The role of the convection velocity in the determination of pressure is also discussed. (paper)
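
    The substitution at the heart of this approach, estimating the temporal derivative from spatial data via a convection velocity, can be illustrated on a synthetic frozen field; the wave parameters and convection velocity below are invented for illustration:

```python
import numpy as np

# Taylor's-hypothesis sketch: for a field convecting at velocity Uc,
# du/dt ≈ -Uc * du/dx, so the material acceleration (and hence the
# pressure gradient) can be estimated from a single spatial snapshot.

Uc, a, k = 10.0, 0.5, 2.0 * np.pi
x = np.linspace(0.0, 1.0, 1000)
dt = 1e-4

u_now = Uc + a * np.sin(k * x)                   # snapshot at t
u_next = Uc + a * np.sin(k * (x - Uc * dt))      # snapshot at t + dt

dudx = np.gradient(u_now, x)
dudt_true = (u_next - u_now) / dt                # needs time-resolved data
dudt_taylor = -Uc * dudx                         # needs only one snapshot

err = np.max(np.abs(dudt_taylor - dudt_true)) / np.max(np.abs(dudt_true))
print(f"max relative error of the Taylor estimate: {err:.1e}")
```

    For a perfectly frozen field the substitution is exact; in real turbulence its accuracy depends on how well a single convection velocity describes the motion, which is the trade-off the record discusses.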

  1. River food webs: an integrative approach to bottom-up flow webs, top-down impact webs, and trophic position.

    Science.gov (United States)

    Benke, Arthur C

    2018-03-31

    The majority of food web studies are based on connectivity, top-down impacts, bottom-up flows, or trophic position (TP), and ecologists have argued for decades over which is best. Rarely have any two been considered simultaneously. The present study uses a procedure that integrates the last three approaches based on taxon-specific secondary production and gut analyses. Ingestion flows are quantified to create a flow web, and the same data are used to quantify TP for all taxa. An individual predator's impacts also are estimated using the ratio of its ingestion (I) of each prey to prey production (P) to create an I/P web. This procedure was applied to 41 invertebrate taxa inhabiting submerged woody habitat in a southeastern U.S. river. A complex flow web starting with five basal food resources had 462 flows >1 mg·m⁻²·yr⁻¹, providing far more information than a connectivity web. Total flows from basal resources to primary consumers/omnivores were dominated by allochthonous amorphous detritus and ranged from 1 to >50,000 mg·m⁻²·yr⁻¹. Most predator-prey flows were much lower (<1,000 mg·m⁻²·yr⁻¹). The I/P web showed that 83% of individual predator impacts were weak (<10%), and few were strong (>90%). Quantitative estimates of TP ranged from 2 to 3.7, contrasting sharply with seven integer-based trophic levels based on the longest feeding chain. Traditional omnivores (TP = 2.4-2.9) played an important role by consuming more prey and exerting higher impacts on primary consumers than strict predators (TP ≥ 3). This study illustrates how simultaneous quantification of flow pathways, predator impacts, and TP together provides an integrated characterization of natural food webs. © 2018 by the Ecological Society of America.

  2. Exploiting VM/XA

    International Nuclear Information System (INIS)

    Boeheim, C.

    1990-03-01

    The Stanford Linear Accelerator Center has recently completed a conversion to IBM's VM/XA SP Release 2 operating system. The primary physics application had been constrained by the previous 16 megabyte memory limit. Work is underway to enable this application to exploit the new features of VM/XA. This paper presents a brief tutorial on how to convert an application to exploit VM/XA and discusses some of the SLAC experiences in doing so. 13 figs

  3. Internet-Based Asthma Education -- A Novel Approach to Compliance: A case Report

    Directory of Open Access Journals (Sweden)

    Cindy O'hara

    2006-01-01

    Asthma costs Canadians over $1.2 billion per annum and, despite advances, many asthmatic patients still have poor control. An action plan, a symptom diary and measurement of peak expiratory flow have been shown to improve clinical outcomes. Effective educational interventions are an important component of good care. However, many rural sites lack not only access to education but physician care as well. An Internet-based asthma management program is therefore a reasonable approach. In the present case report, a novel approach that may increase access in these poorly serviced areas is presented. In an Internet-based asthma management program, patients are reviewed by a physician, receive education and are given a unique password that provides program access. Patients record symptoms and peak expiratory flow rates. The present case report shows that a patient can be assisted through an exacerbation, thus averting emergency intervention and stabilizing control, even while travelling on another continent.

  4. Effective deep learning training for single-image super-resolution in endomicroscopy exploiting video-registration-based reconstruction.

    Science.gov (United States)

    Ravì, Daniele; Szczotka, Agnieszka Barbara; Shakir, Dzhoshkun Ismail; Pereira, Stephen P; Vercauteren, Tom

    2018-06-01

    Probe-based confocal laser endomicroscopy (pCLE) is a recent imaging modality that allows in vivo optical biopsies to be performed. The design of pCLE hardware, and its reliance on an optical fibre bundle, fundamentally limits the image quality: a few tens of thousands of fibres, each acting as the equivalent of a single-pixel detector, are assembled into a single fibre bundle. Video registration techniques can be used to estimate high-resolution (HR) images by exploiting the temporal information contained in a sequence of low-resolution (LR) images. However, the alignment of LR frames, required for the fusion, is computationally demanding and prone to artefacts. In this work, we propose a novel synthetic data generation approach to train exemplar-based Deep Neural Networks (DNNs). HR pCLE images with enhanced quality are recovered by models trained on pairs of estimated HR images (generated by the video registration algorithm) and realistic synthetic LR images. The performance of three different state-of-the-art DNN techniques was analysed on a Smart Atlas database of 8806 images from 238 pCLE video sequences. The results were validated through an extensive image quality assessment that takes into account different quality scores, including a Mean Opinion Score (MOS). Results indicate that the proposed solution produces an effective improvement in the quality of the reconstructed images. The proposed training strategy and associated DNNs allow convincing super-resolution of pCLE images.
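
    The pairing idea, degrading an estimated HR image into a realistic synthetic LR counterpart to form (LR, HR) training pairs, can be sketched as below; the box-downsampling-plus-noise degradation is a hypothetical stand-in for the paper's fibre-bundle simulation, and all names are illustrative:

```python
import numpy as np

# Hypothetical sketch of the data-generation idea: degrade an HR image into
# a synthetic LR counterpart, yielding an (LR, HR) training pair for an
# exemplar-based super-resolution network.

def make_lr(hr, factor=4, noise_sigma=0.01, rng=None):
    """Box-downsample by `factor` and add Gaussian noise (toy degradation)."""
    rng = rng or np.random.default_rng(0)
    h, w = hr.shape
    lr = hr[:h - h % factor, :w - w % factor]
    lr = lr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return lr + rng.normal(0.0, noise_sigma, lr.shape)

hr = np.random.default_rng(1).random((64, 64))   # stand-in HR estimate
lr = make_lr(hr)
print(hr.shape, lr.shape)   # (64, 64) (16, 16)
```

    A real pipeline would replace the toy degradation with a physically motivated fibre-bundle model so that the network sees LR statistics matching the instrument.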

  5. Boys are not exempt: Sexual exploitation of adolescents in sub-Saharan Africa.

    Science.gov (United States)

    Adjei, Jones K; Saewyc, Elizabeth M

    2017-03-01

    Research on youth sexual exploitation in Africa has largely neglected the experiences of exploited boys. To date, much of the research in sub-Saharan Africa continues to consider boys mainly as exploiters but not as exploited. Using the only publicly available population-based surveys from the National Survey of Adolescents, conducted in four sub-Saharan African countries (Burkina Faso, Ghana, Malawi, and Uganda), we assessed factors associated with transactional sexual behaviour among never-married adolescent boys and girls. We also examined whether boys' reported sexual exploitation was linked to similar risky sexual behaviours as has been noted among girls in sub-Saharan Africa. Results from our analyses indicated that even though adolescent girls have a somewhat higher likelihood of reporting sexual abuse and exploitation, the odds of trading sex were significantly elevated for previously traumatized boys (that is, those with a history of sexual and physical abuse) but not for their female counterparts. Just as for adolescent girls, transactional sexual behaviour was associated with the risk of having concurrent multiple sexual partners for boys. These findings support the reality of boys' sexual exploitation within the African context, and further highlight the importance of including males in general, and boys in particular, in population-based studies on sexual health, risk, and protective factors in the sub-Saharan African region. Understanding the factors linked to sexual exploitation for both boys and girls will help in developing policies and programs that could improve the overall sexual and reproductive health outcomes among adolescents and youth in sub-Saharan Africa. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Side Information and Noise Learning for Distributed Video Coding using Optical Flow and Clustering

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Rakêt, Lars Lau; Huang, Xin

    2012-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the source statistics at the decoder side to reduce the complexity at the encoder. The coding efficiency of DVC critically depends on the quality of side information generation and the accuracy of noise modeling. This paper considers Transform Domain Wyner-Ziv (TDWZ) coding and proposes using optical flow to improve side information generation and clustering to improve noise modeling. The optical flow technique is exploited at the decoder side to compensate for the weaknesses of block based methods when using motion-compensation to generate side information frames. Clustering is introduced to capture cross band correlation and increase local adaptivity in the noise modeling. This paper also proposes techniques to learn from previously decoded (WZ) frames. Different techniques are combined by calculating a number of candidate soft side...

  7. Environmental flow assessments for transformed estuaries

    Science.gov (United States)

    Sun, Tao; Zhang, Heyue; Yang, Zhifeng; Yang, Wei

    2015-01-01

    Here, we propose an approach to environmental flow assessment that considers spatial pattern variations in potential habitats affected by river discharges and tidal currents in estuaries. The approach comprises four steps: identifying and simulating the distributions of critical environmental factors for habitats of typical species in an estuary; mapping of suitable habitats based on spatial distributions of the Habitat Suitability Index (HSI) and adopting the habitat aggregation index to understand fragmentation of potential suitable habitats; defining variations in water requirements for a certain species using trade-off analysis for different protection objectives; and recommending environmental flows in the estuary considering the compatibility and conflict of freshwater requirements for different species. This approach was tested using a case study in the Yellow River Estuary. Recommended environmental flows were determined by incorporating the requirements of four types of species into the assessments. Greater variability in freshwater inflows could be incorporated into the recommended environmental flows considering the adaptation of potential suitable habitats with variations in the flow regime. Environmental flow allocations should be conducted in conjunction with land use conflict management in estuaries. Based on the results presented here, the proposed approach offers flexible assessment of environmental flow for aquatic ecosystems that may be subject to future change.
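
    The habitat-mapping step described above (per-factor suitability curves combined into a Habitat Suitability Index surface, then thresholded into potential suitable habitat) can be sketched as follows; the triangular curves, grid, factor ranges, and threshold are all invented for illustration:

```python
import numpy as np

# HSI-mapping sketch: combine hypothetical salinity and water-depth
# suitability curves into a composite index on a grid, then threshold it
# to delineate potential suitable habitat.

def suitability(value, lo, hi):
    """Triangular suitability: 1 at the midpoint of [lo, hi], 0 outside."""
    mid = 0.5 * (lo + hi)
    s = np.where(value <= mid,
                 (value - lo) / (mid - lo),
                 (hi - value) / (hi - mid))
    return np.clip(s, 0.0, 1.0)

# Synthetic estuary grid: salinity rises seaward, depth varies across-shore.
ny, nx = 50, 80
salinity = np.tile(np.linspace(0.0, 30.0, nx), (ny, 1))
depth = np.tile(np.linspace(0.2, 5.0, ny)[:, None], (1, nx))

# Geometric-mean composite HSI, then a suitability threshold.
hsi = np.sqrt(suitability(salinity, 5.0, 20.0) * suitability(depth, 0.5, 3.0))
suitable = hsi >= 0.5
print(f"suitable fraction of grid: {suitable.mean():.2f}")
```

    Re-running such a map under different simulated river discharges is what lets the approach trade off freshwater allocations against the area and aggregation of suitable habitat.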

  8. The Combination of Micro Diaphragm Pumps and Flow Sensors for Single Stroke Based Liquid Flow Control

    Directory of Open Access Journals (Sweden)

    Christoph Jenke

    2017-04-01

    With the combination of micropumps and flow sensors, highly accurate and secure closed-loop controlled micro dosing systems for liquids are possible. Implementing a single stroke based control mode with piezoelectrically driven micro diaphragm pumps can provide a solution for dosing of volumes down to nanoliters or variable average flow rates in the range of nL/min to μL/min. However, sensor technologies feature a yet undetermined accuracy for measuring highly pulsatile micropump flow. Two miniaturizable in-line sensor types providing electrical readout (differential pressure based flow sensors and thermal calorimetric flow sensors) are evaluated for their suitability for combination with micropumps. Single stroke based calibration of the sensors was carried out with a new method comparing displacement volumes and sensor flow volumes. Limitations of accuracy and performance for single stroke based flow control are described. Results showed that, besides particle robustness of the sensors, controlling resistive and capacitive damping are key aspects for setting up reproducible and reliable liquid dosing systems. Depending on the required average flow or defined volume, dosing systems with an accuracy of better than 5% for the differential pressure based sensor and better than 6.5% for the thermal calorimeter were achieved.
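
    The single-stroke calibration idea, integrating the sensor's flow signal over one stroke and comparing it with the pump's displacement volume, can be sketched as follows; the stroke waveform, sample rate, and nominal volume are invented for illustration:

```python
import numpy as np

# Single-stroke calibration sketch: integrate a pulsatile flow signal over
# one pump stroke window and compare the result with the known displacement
# volume per stroke.

fs = 10_000.0                        # sensor sample rate, Hz
t = np.arange(0.0, 0.02, 1.0 / fs)   # one 20 ms stroke window
stroke_volume_nl = 250.0             # nominal displacement volume per stroke

integrate = lambda y: float(np.sum(y) / fs)   # rectangle-rule time integral

# Hypothetical pulsatile stroke: a short ejection pulse, then no flow,
# scaled so its true time-integral equals the displacement volume.
pulse = np.where(t < 0.005, np.sin(np.pi * t / 0.005) ** 2, 0.0)
flow_nl_per_s = pulse * stroke_volume_nl / integrate(pulse)

measured_nl = integrate(flow_nl_per_s)        # sensor-side stroke volume
error_pct = 100.0 * abs(measured_nl - stroke_volume_nl) / stroke_volume_nl
print(f"integrated stroke volume: {measured_nl:.1f} nL ({error_pct:.3f}% off)")
```

    In practice the discrepancy between the two volumes, rather than being zero as in this idealized sketch, is exactly what the record's calibration quantifies (sensor bandwidth, damping, and particle effects all enter there).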

  9. Directional synthetic aperture flow imaging using a dual stage beamformer approach

    DEFF Research Database (Denmark)

    Li, Ye; Jensen, Jørgen Arendt

    2011-01-01

    A new method for directional synthetic aperture flow imaging using a dual stage beamformer approach is presented. The velocity estimation is angle independent, and the amount of calculation is reduced compared to full synthetic aperture while all of its advantages are maintained. The method has been studied using Field II simulations and experimental flow rig measurements. A linear array transducer with a 7 MHz center frequency is used, with 64 elements active to transmit and receive signals. The data is processed in two stages. The first stage has a fixed focus point. In the second stage, focal points are considered as virtual sources and data is beamformed along the flow direction. The velocities are then estimated by finding the spatial shift between two signals. In the experimental measurements, the angle between the transmit beam and the flow vessel was 70° and a laminar flow...

  10. Approaches to large scale unsaturated flow in heterogeneous, stratified, and fractured geologic media

    International Nuclear Information System (INIS)

    Ababou, R.

    1991-08-01

    This report develops a broad review and assessment of quantitative modeling approaches and data requirements for large-scale subsurface flow in radioactive waste geologic repositories. The data review includes discussions of controlled field experiments, existing contamination sites, and site-specific hydrogeologic conditions at Yucca Mountain. Local-scale constitutive models for the unsaturated hydrodynamic properties of geologic media are analyzed, with particular emphasis on the effect of structural characteristics of the medium. The report further reviews and analyzes large-scale hydrogeologic spatial variability from aquifer data, unsaturated soil data, and fracture network data gathered from the literature. Finally, various modeling strategies for large-scale flow simulations are assessed, including direct high-resolution simulation and coarse-scale simulation based on auxiliary hydrodynamic models such as the single equivalent continuum and the dual-porosity continuum. The roles of anisotropy, fracturing, and broad-band spatial variability are emphasized. 252 refs

  11. Simulating groundwater flow in karst aquifers with distributed parameter models—Comparison of porous-equivalent media and hybrid flow approaches

    Science.gov (United States)

    Kuniansky, Eve L.

    2016-09-22

    Understanding karst aquifers, for purposes of their management and protection, poses unique challenges. Karst aquifers are characterized by groundwater flow through conduits (tertiary porosity), and (or) layers with interconnected pores (secondary porosity) and through intergranular porosity (primary or matrix porosity). Since the late 1960s, advances have been made in the development of numerical computer codes and the use of mathematical model applications towards the understanding of dual (primary [matrix] and secondary [fractures and conduits]) porosity groundwater flow processes, as well as characterization and management of karst aquifers. The Floridan aquifer system (FAS) in Florida and parts of Alabama, Georgia, and South Carolina is composed of a thick sequence of predominantly carbonate rocks. Karst features are present over much of its area, especially in Florida where more than 30 first-magnitude springs occur, numerous sinkholes and submerged conduits have been mapped, and numerous circular lakes within sinkhole depressions are present. Different types of mathematical models have been applied for simulation of the FAS. Most of these models are distributed parameter models based on the assumption that, like a sponge, water flows through connected pores within the aquifer system and can be simulated with the same mathematical methods applied to flow through sand and gravel aquifers; these models are usually referred to as porous-equivalent media models. The partial differential equation solved for groundwater flow is the potential flow equation of fluid mechanics, which is used when flow is dominated by potential energy and has been applied for many fluid problems in which kinetic energy terms are dropped from the differential equation solved. In many groundwater model codes (basic MODFLOW), it is assumed that the water has a constant temperature and density and that flow is laminar, such that kinetic energy has minimal impact on flow. 
Some models have

  12. A stochastic programming approach to manufacturing flow control

    OpenAIRE

    Haurie, Alain; Moresino, Francesco

    2012-01-01

    This paper proposes and tests an approximation of the solution of a class of piecewise deterministic control problems, typically used in the modeling of manufacturing flow processes. This approximation uses a stochastic programming approach on a suitably discretized and sampled system. The method proceeds through two stages: (i) the Hamilton-Jacobi-Bellman (HJB) dynamic programming equations for the finite horizon continuous time stochastic control problem are discretized over a set of sample...

  13. FY 1974 Report on results of Sunshine Project. Research and development of methods for wide-area thermal structure exploitation; 1974 nendo koiki netsu kozo chosaho no kenkyu kaihatsu seika hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1975-05-28

    The project "Research and development of methods for wide-area thermal structure exploitation" pursues: (1) development of heat flow rate analyzers, and research on methods for the measurement and analysis of heat flow rates; (2) research on geological thermometers using cores collected from test bores as samples; (3) observation and analysis of microearthquakes, and application of the results to geothermal structure exploitation; and (4) trial development of small, light air drilling machines. No method for clearly elucidating thermal structures has yet been well established. Therefore, a test bore was drilled in the wide-area geothermal district at Kita Yahata-daira, as a model district, to measure temperature and collect core samples, for measurement of heat flows and analysis of the cores to estimate the temperature at which they were formed. Microearthquakes, which frequently occur in geothermal areas, are also observed over extended periods. Based on the above efforts, the project aims to estimate and elucidate the depth of thermal sources through geological, geophysical and geochemical studies, along with the relationships between geological structures and high-temperature zones, geothermal propagation structures, and geothermal reserves. This report summarizes the R and D results of the initial year of the 6-year plan. (NEDO)

  14. A strategic flight conflict avoidance approach based on a memetic algorithm

    Directory of Open Access Journals (Sweden)

    Guan Xiangmin

    2014-02-01

    Conflict avoidance (CA) plays a crucial role in guaranteeing airspace safety. Current approaches, mostly focusing on short-term situations in which conflicts are eliminated via local adjustment, cannot provide a global solution. Recently, long-term conflict avoidance approaches, which provide solutions by strategically planning traffic flow from a global view, have attracted more attention. In China, there are thousands of flights per day and the air route network is large and complex, which makes the long-term problem a large-scale combinatorial optimization problem with complex constraints. To minimize the risk of premature convergence faced by current approaches and to obtain higher quality solutions, in this work we present an effective strategic framework based on a memetic algorithm (MA), which can markedly improve search capability via a combination of population-based global search and local improvements made by individuals. In addition, a specially designed local search operator and an adaptive local search frequency strategy are proposed to improve the solution quality. Furthermore, a fast genetic algorithm (GA) is presented as the global optimization method. Empirical studies using real traffic data from the Chinese air route network and daily flight plans show that our approach outperformed existing approaches, including the GA based approach and the cooperative coevolution based approach, as well as some well-known memetic algorithm based approaches.
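
    The hybrid structure described (a population-based GA with per-individual local search applied at some frequency) can be sketched on a toy slot-assignment problem; the instance, operators, rates, and sizes below are all invented for illustration and are not the paper's algorithm:

```python
import random

# Generic memetic-algorithm skeleton: assign each of N flights one of S
# time slots so that conflicting pairs avoid sharing a slot. Elitist GA
# plus occasional hill-climbing local search on offspring.

random.seed(0)
N, S = 30, 5
conflicts = [(i, j) for i in range(N) for j in range(i + 1, N)
             if random.random() < 0.2]

def cost(sol):
    """Number of conflicting flight pairs sharing a slot."""
    return sum(sol[i] == sol[j] for i, j in conflicts)

def local_search(sol):
    """Greedy improvement: re-slot each flight to its best slot in turn."""
    sol = sol[:]
    for i in random.sample(range(N), N):
        sol[i] = min(range(S), key=lambda s: cost(sol[:i] + [s] + sol[i + 1:]))
    return sol

pop = [[random.randrange(S) for _ in range(N)] for _ in range(20)]
init_best = min(cost(p) for p in pop)
for gen in range(30):
    pop.sort(key=cost)
    parents = pop[:10]                     # elitist truncation selection
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, N)
        child = a[:cut] + b[cut:]          # one-point crossover
        if random.random() < 0.3:          # mutation
            child[random.randrange(N)] = random.randrange(S)
        if random.random() < 0.5:          # local-search frequency
            child = local_search(child)
        children.append(child)
    pop = parents + children

best = min(pop, key=cost)
print(f"initial best: {init_best} conflicts, final best: {cost(best)}")
```

    The fixed 0.5 local-search probability stands in for the paper's adaptive frequency strategy; tuning that probability is what balances local refinement against the cost of extra evaluations.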

  15. Differences in the metabolic rates of exploited and unexploited fish populations: a signature of recreational fisheries induced evolution?

    Directory of Open Access Journals (Sweden)

    Jan-Michael Hessenauer

    Non-random mortality associated with commercial and recreational fisheries has the potential to cause evolutionary changes in fish populations. Inland recreational fisheries offer unique opportunities for the study of fisheries induced evolution due to the ability to replicate study systems, limited gene flow among populations, and the existence of unexploited reference populations. Experimental research has demonstrated that angling vulnerability is heritable in Largemouth Bass Micropterus salmoides, and is correlated with elevated resting metabolic rates (RMR) and higher fitness. However, whether such differences are present in wild populations is unclear. This study sought to quantify differences in RMR among replicated exploited and unexploited populations of Largemouth Bass. We collected age-0 Largemouth Bass from two Connecticut drinking water reservoirs unexploited by anglers for almost a century and from two exploited lakes, then transported and reared them in the same pond. The field RMR of individuals from each population was quantified using intermittent-flow respirometry. Individuals from the unexploited reservoirs had a significantly (6%) higher mean RMR than individuals from the exploited populations. These findings are consistent with expectations derived from artificial selection by angling on Largemouth Bass, suggesting that recreational angling may act as an evolutionary force influencing the metabolic rates of fishes in the wild. Reduced RMR as a result of fisheries induced evolution may have ecosystem level effects on energy demand, and may be common in exploited recreational populations globally.

  16. The economics of exploiting gas hydrates

    International Nuclear Information System (INIS)

    Döpke, Lena-Katharina; Requate, Till

    2014-01-01

    We investigate the optimal exploitation of methane hydrates, a recently discovered methane resource under the sea floor, mainly located along the continental margins. Combustion of methane (releasing CO2) and leakage through blow-outs (releasing CH4) contribute to the accumulation of greenhouse gases. A second externality arises since removing solid gas hydrates from the sea bottom destabilizes continental margins and thus increases the risk of marine earthquakes. We show that in such a model three regimes can occur: i) resource exploitation will be stopped in finite time, and some of the resource will stay in situ; ii) the resource will be used up completely in finite time; and iii) the resource will be exhausted in infinite time. We also show how to internalize the externalities by policy instruments. - Highlights: • We set up a model of optimal gas hydrate exploitation. • We incorporate two types of damage: contribution to global warming and geo-hazards. • We characterize optimal exploitation paths and study decentralization with an exploitation tax. • Three regimes can occur: i) exploitation in finite time with some of the stock remaining in situ; ii) exploitation in finite time with the resource exhausted; iii) exploitation and exhaustion in infinite time.

  17. A Model-Based Prognostics Approach Applied to Pneumatic Valves

    Data.gov (United States)

    National Aeronautics and Space Administration — Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain...

  19. Free surface profiles in river flows: Can standard energy-based gradually-varied flow computations be pursued?

    Science.gov (United States)

    Cantero, Francisco; Castro-Orgaz, Oscar; Garcia-Marín, Amanda; Ayuso, José Luis; Dey, Subhasish

    2015-10-01

    Is the energy equation for gradually-varied flow the best approximation for free surface profile computations in river flows? Determination of flood inundation in rivers and natural waterways is based on the hydraulic computation of flow profiles. This is usually done using energy-based gradually-varied flow models, like HEC-RAS, that adopt a vertical division method for discharge prediction in compound channel sections. However, this discharge prediction method is no longer the most accurate available, given advancements over the last three decades. This paper first presents a study of the impact of discharge prediction on gradually-varied flow computations by comparing thirteen different methods for compound channels, in which both energy and momentum equations are applied. The discharge, velocity distribution coefficients, specific energy, momentum and flow profiles are determined. After the study of gradually-varied flow predictions, a new theory is developed to produce higher-order energy and momentum equations for rapidly-varied flow in compound channels. These generalized equations make it possible to describe flow profiles with more generality than gradually-varied flow computations. As an outcome, the results for gradually-varied flow provide realistic conclusions for computations of flow in compound channels, showing that momentum-based models are in general more accurate, whereas the new theory developed for rapidly-varied flow opens a new research direction, so far not investigated in flows through compound channels.
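
    For reference, the classical energy-based gradually-varied-flow computation the passage discusses reduces, for a wide rectangular channel, to integrating dy/dx = (S0 - Sf)/(1 - Fr^2). The sketch below marches an M1 backwater profile upstream from a downstream control; the channel data and step size are illustrative:

```python
import math

# Gradually-varied-flow sketch for a wide rectangular channel, integrated
# upstream from a downstream control with explicit Euler steps.

g = 9.81
q = 2.0       # unit discharge, m^2/s
n = 0.025     # Manning roughness
S0 = 0.001    # bed slope

yn = (n * q / math.sqrt(S0)) ** 0.6      # normal depth: Sf(yn) = S0
yc = (q ** 2 / g) ** (1.0 / 3.0)         # critical depth: Fr(yc) = 1

def dydx(y):
    Sf = n ** 2 * q ** 2 / y ** (10.0 / 3.0)   # Manning friction slope
    Fr2 = q ** 2 / (g * y ** 3)                # Froude number squared
    return (S0 - Sf) / (1.0 - Fr2)

# Start above normal depth at the control and march upstream (dx < 0).
y, dx = 1.5 * yn, -10.0
profile = [y]
for _ in range(2000):
    y += dydx(y) * dx
    profile.append(y)

print(f"yn = {yn:.3f} m, yc = {yc:.3f} m, "
      f"depth 20 km upstream = {profile[-1]:.3f} m")
```

    The depth relaxes from the control back to normal depth upstream, the classical M1 behaviour; a momentum-based or compound-channel formulation would replace the simple friction-slope and Froude terms inside dydx.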

  20. In-vivo brain blood flow imaging based on laser speckle contrast imaging and synchrotron radiation microangiography

    International Nuclear Information System (INIS)

    Miao, Peng; Feng, Shihan; Zhang, Qi; Lin, Xiaojie; Xie, Bohua; Liu, Chenwei; Yang, Guo-Yuan

    2014-01-01

    In-vivo imaging of blood flow in the cortex and sub-cortex is still a challenge in biological and pathological studies of cerebral vascular diseases. Laser speckle contrast imaging (LSCI) provides only cortical blood flow information. Traditional synchrotron radiation microangiography (SRA) provides sub-cortical vasculature information with high resolution. In this study, a bolus front-tracking method was developed to extract blood flow information based on SRA. Combining LSCI and SRA, arterial blood flow in the ipsilateral cortex and sub-cortex was monitored after experimental intracerebral hemorrhage in mice. At 72 h after injury, a significant blood flow increase was observed in the lenticulostriate artery, along with a blood flow decrease in cortical branches of the middle cerebral artery. This combined strategy provides a new approach for the investigation of brain vasculature and blood flow changes in preclinical studies. (paper)

  1. Automatic SIMD vectorization of SSA-based control flow graphs

    CERN Document Server

    Karrenberg, Ralf

    2015-01-01

    Ralf Karrenberg presents Whole-Function Vectorization (WFV), an approach that allows a compiler to automatically create code that exploits data-parallelism using SIMD instructions. Data-parallel applications such as particle simulations, stock option price estimation or video decoding require the same computations to be performed on huge amounts of data. Without WFV, one processor core executes a single instance of a data-parallel function. WFV transforms the function to execute multiple instances at once using SIMD instructions. The author describes an advanced WFV algorithm that includes a v
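
    The transformation WFV performs at the compiler level (running multiple instances of a scalar, branching function at once) can be illustrated outside a compiler with explicit masking; here numpy arrays stand in for SIMD lanes, and the toy pricing function is invented for illustration:

```python
import numpy as np

# Masking illustration of whole-function vectorization: a scalar function
# with a branch becomes a per-lane condition plus a blend, so all lanes
# execute in lockstep.

def scalar_payoff(s):
    """One data-parallel instance, with control flow."""
    if s > 100.0:
        return (s - 100.0) * 0.9
    return 0.0

def vector_payoff(s):
    """Vectorized version: the branch becomes a mask and a blend."""
    mask = s > 100.0                       # per-lane condition
    return np.where(mask, (s - 100.0) * 0.9, 0.0)

spots = np.array([80.0, 120.0, 100.0, 140.0])
assert np.allclose(vector_payoff(spots),
                   [scalar_payoff(x) for x in spots])
print(vector_payoff(spots))
```

    WFV applies the analogous mask-and-blend rewriting automatically on SSA-form IR, so both sides of a branch are computed and the results blended per lane.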

  2. Modeling of isothermal bubbly flow with interfacial area transport equation and bubble number density approach

    Energy Technology Data Exchange (ETDEWEB)

    Sari, Salih [Hacettepe University, Department of Nuclear Engineering, Beytepe, 06800 Ankara (Turkey); Erguen, Sule [Hacettepe University, Department of Nuclear Engineering, Beytepe, 06800 Ankara (Turkey); Barik, Muhammet; Kocar, Cemil; Soekmen, Cemal Niyazi [Hacettepe University, Department of Nuclear Engineering, Beytepe, 06800 Ankara (Turkey)

    2009-03-15

    In this study, isothermal turbulent bubbly flow is mechanistically modeled. For the modeling, Fluent version 6.3.26 is used as the computational fluid dynamics solver. First, the mechanistic models that simulate the interphase momentum transfer between the gas (bubbles) and liquid (continuous) phases are investigated, and proper models for the known flow conditions are selected. Second, an interfacial area transport equation (IATE) solution is added to Fluent's solution scheme in order to model the interphase momentum transfer mechanisms. In addition to solving IATE, the bubble number density (BND) approach is also added to Fluent and used in the simulations. Different source/sink models derived for the IATE and BND models are also investigated. The simulations of experiments based on the available data in the literature are performed by using IATE and BND models in two and three dimensions. The results show that the simulations performed by using IATE and BND models agree with each other and with the experimental data. The simulations performed in three dimensions give better agreement with the experimental data.

  3. Modeling of isothermal bubbly flow with interfacial area transport equation and bubble number density approach

    International Nuclear Information System (INIS)

    Sari, Salih; Erguen, Sule; Barik, Muhammet; Kocar, Cemil; Soekmen, Cemal Niyazi

    2009-01-01

    In this study, isothermal turbulent bubbly flow is mechanistically modeled. For the modeling, Fluent version 6.3.26 is used as the computational fluid dynamics solver. First, the mechanistic models that simulate the interphase momentum transfer between the gas (bubbles) and liquid (continuous) phases are investigated, and proper models for the known flow conditions are selected. Second, an interfacial area transport equation (IATE) solution is added to Fluent's solution scheme in order to model the interphase momentum transfer mechanisms. In addition to solving IATE, the bubble number density (BND) approach is also added to Fluent and used in the simulations. Different source/sink models derived for the IATE and BND models are also investigated. The simulations of experiments based on the available data in the literature are performed by using IATE and BND models in two and three dimensions. The results show that the simulations performed by using IATE and BND models agree with each other and with the experimental data. The simulations performed in three dimensions give better agreement with the experimental data.

  4. RF cavity design exploiting a new derivative-free trust region optimization approach

    Directory of Open Access Journals (Sweden)

    Abdel-Karim S.O. Hassan

    2015-11-01

    Full Text Available In this article, a novel derivative-free (DF) surrogate-based trust region optimization approach is proposed. In the proposed approach, quadratic surrogate models are constructed and successively updated. The generated surrogate model is then optimized instead of the underlying objective function over trust regions. Truncated conjugate gradients are employed to find the optimal point within each trust region. The approach constructs the initial quadratic surrogate model using few data points of order O(n), where n is the number of design variables. The proposed approach adopts weighted least squares fitting for updating the surrogate model instead of the interpolation that is commonly used in DF optimization. This makes the approach more suitable for stochastic optimization and for functions subject to numerical error. The weights are assigned to give more emphasis to points close to the current center point. The accuracy and efficiency of the proposed approach are demonstrated by applying it to a set of classical benchmark test problems. It is also employed to find the optimal design of an RF cavity linear accelerator, together with a comparative analysis against a recent optimization technique.
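
    The surrogate-update loop described above can be illustrated in miniature. The following is a hedged, one-dimensional sketch under stated assumptions: it fits a quadratic surrogate by weighted least squares (Gaussian-style weights emphasising points near the current centre, an invented weighting choice) and steps to the surrogate minimiser clipped to the trust region, with the standard ratio test. It is not the authors' implementation, which works in n dimensions and uses truncated conjugate gradients; all names and constants here are hypothetical.

```python
import math

def wls_quadratic(points, center, radius):
    # Weighted least-squares fit of q(t) = a + b*t + c*t^2; Gaussian-style
    # weights emphasise samples near `center` (an invented weighting choice).
    sums = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0] * 3
    for x, y in points:
        w = math.exp(-((x - center) / radius) ** 2)
        basis = [1.0, x, x * x]
        for i in range(3):
            rhs[i] += w * basis[i] * y
            for j in range(3):
                sums[i][j] += w * basis[i] * basis[j]
    # Solve the 3x3 normal equations by Gauss-Jordan elimination with pivoting.
    A = [row[:] + [r] for row, r in zip(sums, rhs)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(3):
            if r != col:
                fac = A[r][col] / A[col][col]
                A[r] = [u - fac * v for u, v in zip(A[r], A[col])]
    return [A[i][3] / A[i][i] for i in range(3)]

def df_trust_region(f, x0, radius=1.0, iters=15):
    # Derivative-free trust-region loop: sample, fit the surrogate, step to its
    # minimiser inside the region, then apply the usual ratio test.
    x = x0
    for _ in range(iters):
        pts = [(x + d, f(x + d)) for d in (-radius, -radius / 2, 0.0, radius / 2, radius)]
        a, b, c = wls_quadratic(pts, x, radius)
        cand = -b / (2 * c) if c > 1e-12 else x - radius * (1.0 if b > 0 else -1.0)
        cand = min(max(cand, x - radius), x + radius)   # clip to trust region
        pred = (a + b * x + c * x * x) - (a + b * cand + c * cand * cand)
        actual = f(x) - f(cand)
        if pred > 0 and actual / pred > 0.25:   # good model agreement: accept
            x, radius = cand, min(radius * 1.5, 10.0)
        else:                                   # poor agreement: shrink region
            radius *= 0.5
    return x

xmin = df_trust_region(lambda t: (t - 2.0) ** 2, x0=-3.0)
```

    Because weighted least squares, rather than interpolation, determines the model, noisy samples are averaged rather than reproduced exactly, which is the robustness property the abstract emphasises.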

  5. Coupling on-line preconcentration by ion-exchange with ETAAS. A novel flow injection approach based on the use of a renewable microcolumn as demonstrated for the determination of nickel in environmental and biological samples

    DEFF Research Database (Denmark)

    Wang, Jianhua; Hansen, Elo Harald

    2000-01-01

    A novel way of exploiting flow injection/sequential injection (FIA/SIA) on-line ion-exchange preconcentration with detection by electrothermal atomic absorption spectrometry (ETAAS) is described and demonstrated for the determination of trace-levels of nickel. Based on the use of a renewable microcolumn incorporated within an integrated micro FI-system, the column is loaded with a defined volume of small beads of an SP Sephadex C-25 cation-exchange resin and subsequently exposed to a metered amount of sample solution. However, instead of eluting the retained analyte from the organic ion-exchange resin, the beads are, along with 30 μl of carrier (buffer) solution, transported via air segmentation directly into the graphite tube, where they are ashed during the pyrolysis and atomization process. The ETAAS determination is performed in parallel with the preconcentration process of the ensuing...

  6. Imouraren mining exploitation : Complementary studies Synthetic report Volum B - Mines

    International Nuclear Information System (INIS)

    1980-01-01

    The object of the current study is to determine the main technical characteristics of the reference project for a mine able to supply the necessary ore quantity for a production of 3000 tonnes of uranium per year over 10 years. The project is one of the possible solutions for exploiting the mine. The study establishes: investment and operating cost estimates, an overall program for the mining exploitation, estimates of the required workforce, evaluation of average ore grades and of the variations of these grades, utility needs, a production phasing program, the main exploitation methods and the necessary equipment. The reference project study of the mine serves as a basis for the economic studies and optimization studies [fr

  7. Exploitation in International Paid Surrogacy Arrangements

    OpenAIRE

    Wilkinson, Stephen

    2015-01-01

    Abstract Many critics have suggested that international paid surrogacy is exploitative. Taking such concerns as its starting point, this article asks: (1) how defensible is the claim that international paid surrogacy is exploitative and what could be done to make it less exploitative? (2) In the light of the answer to (1), how strong is the case for prohibiting it? Exploitation could in principle be dealt with by improving surrogates' pay and conditions. However, doing so may exacerbate probl...

  8. Exploiting Proximity-Based Mobile Apps for Large-Scale Location Privacy Probing

    Directory of Open Access Journals (Sweden)

    Shuang Zhao

    2018-01-01

    Full Text Available Proximity-based apps have been changing the way people interact with each other in the physical world. To help people extend their social networks, proximity-based nearby-stranger (NS apps that encourage people to make friends with nearby strangers have gained popularity recently. As another typical type of proximity-based apps, some ridesharing (RS apps allowing drivers to search nearby passengers and get their ridesharing requests also become popular due to their contribution to economy and emission reduction. In this paper, we concentrate on the location privacy of proximity-based mobile apps. By analyzing the communication mechanism, we find that many apps of this type are vulnerable to large-scale location spoofing attack (LLSA. We accordingly propose three approaches to performing LLSA. To evaluate the threat of LLSA posed to proximity-based mobile apps, we perform real-world case studies against an NS app named Weibo and an RS app called Didi. The results show that our approaches can effectively and automatically collect a huge volume of users’ locations or travel records, thereby demonstrating the severity of LLSA. We apply the LLSA approaches against nine popular proximity-based apps with millions of installations to evaluate the defense strength. We finally suggest possible countermeasures for the proposed attacks.

  9. Lagrangian structure of flows in the Chesapeake Bay: challenges and perspectives on the analysis of estuarine flows

    Directory of Open Access Journals (Sweden)

    M. Branicki

    2010-03-01

    Full Text Available In this work we discuss applications of Lagrangian techniques to study transport properties of flows generated by shallow water models of estuarine flows. We focus on the flow in the Chesapeake Bay generated by Quoddy (see Lynch and Werner, 1991), a finite-element shallow water model adapted to the bay by Gross et al. (2001). The main goal of this analysis is to outline the potential benefits of using Lagrangian tools both for understanding transport properties of such flows and for validating the model output and identifying model deficiencies. We argue that the currently available 2-D Lagrangian tools, including the stable and unstable manifolds of hyperbolic trajectories and techniques exploiting 2-D finite-time Lyapunov exponent fields, are of limited use in the case of partially mixed estuarine flows. A further development and efficient implementation of three-dimensional Lagrangian techniques, as well as improvements in the shallow-water modelling of 3-D velocity fields, are required for reliable transport analysis in such flows. Some aspects of the 3-D trajectory structure in the Chesapeake Bay, based on the Quoddy output, are also discussed.

  10. A scalable approach to modeling groundwater flow on massively parallel computers

    International Nuclear Information System (INIS)

    Ashby, S.F.; Falgout, R.D.; Tompson, A.F.B.

    1995-12-01

    We describe a fully scalable approach to the simulation of groundwater flow on a hierarchy of computing platforms, ranging from workstations to massively parallel computers. Specifically, we advocate the use of scalable conceptual models in which the subsurface model is defined independently of the computational grid on which the simulation takes place. We also describe a scalable multigrid algorithm for computing the groundwater flow velocities. We are thus able to leverage both the engineer's time spent developing the conceptual model and the computing resources used in the numerical simulation. We have successfully employed this approach at the LLNL site, where we have run simulations ranging in size from just a few thousand spatial zones (on workstations) to more than eight million spatial zones (on the CRAY T3D), all using the same conceptual model.

  11. Profits and Exploitation: A Reappraisal

    OpenAIRE

    Yoshihara, Naoki; Veneziani, Roberto

    2011-01-01

    This paper provides a mathematical analysis of the Marxian theory of the exploitation of labour in general equilibrium models. The two main definitions of Marxian exploitation in the literature, proposed by Morishima (1974) and Roemer (1982), respectively, are analysed in the context of general convex economies. It is shown that, contrary to the received view, in general these definitions do not preserve the so-called Fundamental Marxian Theorem (FMT), which states that the exploitation of la...

  12. Numerical optimization using flow equations

    Science.gov (United States)

    Punk, Matthias

    2014-12-01

    We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.
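
    The homotopy idea in the abstract can be illustrated without the maximum-entropy prior update (which is the paper's actual contribution): blend an easy convex function into a rugged target and follow the minimiser as the blend parameter flows from 0 to 1. Everything below, including the target function, step sizes and schedule, is an invented toy, not the authors' method.

```python
def target(x):
    # Invented rugged target: a tilted double well; global minimum near x = -2.
    return (x * x - 4.0) ** 2 + 0.5 * x

def d_target(x):
    return 4.0 * x * (x * x - 4.0) + 0.5

def descend(x, steps=2000, lr=0.01):
    # Plain gradient descent, for comparison with the homotopy flow.
    for _ in range(steps):
        x -= lr * d_target(x)
    return x

def homotopy_minimise(x=0.1, steps=200, inner=50, lr=0.01):
    # Blend the easy convex g(x) = x^2 into the target as s flows 0 -> 1,
    # relaxing toward the minimiser of the blended function at each s. The
    # extremum of the blend is a fixed point of the flow at each stage.
    for i in range(steps + 1):
        s = i / steps
        for _ in range(inner):
            grad = (1.0 - s) * 2.0 * x + s * d_target(x)
            x -= lr * grad
    return x

x_naive = descend(0.1)        # trapped in the shallower well near x = +2
x_star = homotopy_minimise()  # the flow tracks into the deeper well near x = -2
```

    The comparison shows the point of the construction: plain descent commits to the nearest basin, while the continuation starts from a trivially solvable problem and deforms it, carrying the solution along.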

  13. Exploitation and Utilization of Oilfield Geothermal Resources in China

    Directory of Open Access Journals (Sweden)

    Shejiao Wang

    2016-09-01

    Full Text Available Geothermal energy is a clean, green, renewable resource which can be utilized for power generation, heating, and cooling, and could effectively replace oil, gas, and coal. In recent years, oil companies have put more effort into exploiting and utilizing geothermal energy with advanced technologies for heat-tracing oil gathering and transportation, central heating, etc., which has not only reduced resource waste, but also improved large-scale and industrial resource utilization levels, and has achieved remarkable economic and social benefits. Based on an analysis of oilfield geothermal energy development status, resource potential, and exploitation and utilization modes, the advantages and disadvantages of harnessing oilfield geothermal resources are discussed. Oilfield geothermal energy exploitation and utilization have advantages in resources, technical personnel, technology, and a large number of abandoned wells that could be reconstructed and utilized. Due to the high heat demand in oilfields, geothermal energy exploitation and utilization can effectively replace oil, gas, coal, and other fossil fuels, and has bright prospects. The key factors limiting oilfield geothermal energy exploitation and utilization are also pointed out in this paper, including immature technologies, lack of overall planning, lack of standards for resource and economic assessment, lack of incentive policies, etc.

  14. Multi Scale Multi Temporal Near Real Time Approach for Volcanic Eruptions monitoring, Test Case: Mt Etna eruption 2017

    Science.gov (United States)

    Buongiorno, M. F.; Silvestri, M.; Musacchio, M.

    2017-12-01

    In this work a complete processing chain for active volcanoes, from the detection of the beginning of an eruption to the estimation of lava flow temperature using remote sensing data, is presented, showing the results for the Mt. Etna eruption of March 2017. The early detection of a new eruption is based on the capability of geostationary, very low spatial resolution satellites (3x3 km at nadir); the hot spot/lava flow evolution is derived from S2 polar medium/high spatial resolution data (20x20 m), while the surface temperature is estimated from polar medium/low spatial resolution sensors such as L8, ASTER and S3 (from 90 m up to 1 km). This approach merges two outcomes: those derived from activities performed for monitoring purposes within INGV R&D, and the results obtained by the ESA-funded Geohazards Exploitation Platform (GEP) project, aimed at the development of a shared platform providing services based on EO data. Because of the variety of phenomena to be analyzed, a multi-temporal, multi-scale approach has been used to implement suitable and robust algorithms for the different sensors. With the exception of Sentinel 2 (MSI) data, for which the algorithm used is based on the NIR-SWIR bands, we exploit the MIR-TIR channels of L8, ASTER, S3 and SEVIRI to generate the surface thermal state analysis automatically. The developed procedure produces time series data and allows information to be extracted from each co-registered pixel, to highlight temperature variations within specific areas. The final goal is to implement an easy tool which enables scientists and users to extract valuable information from satellite time series at different scales produced by ESA and EUMETSAT in the frame of Europe's Copernicus program and other Earth observation satellite programs such as LANDSAT (USGS) and GOES (NOAA).

  15. Flow-based determination of methionine in pharmaceutical formulations exploiting TGA-capped CdTe quantum dots for enhancing the luminol-KIO{sub 4} chemiluminescence

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Min, E-mail: mzhou8367@sina.com [Key Laboratory of Eco-Environment-Related Polymer Materials, Ministry of Education, Key Laboratory of Polymer Materials of Gansu Province, Key Laboratory of Bioelectrochemistry & Environmental Analysis of Gansu Province, College of Chemistry and Chemical Engineering, Northwest Normal University, Lanzhou 730070 (China); Wang, Ailian [Key Laboratory of Eco-Environment-Related Polymer Materials, Ministry of Education, Key Laboratory of Polymer Materials of Gansu Province, Key Laboratory of Bioelectrochemistry & Environmental Analysis of Gansu Province, College of Chemistry and Chemical Engineering, Northwest Normal University, Lanzhou 730070 (China); Jiuquan Enviromental Protection Bureau, Jiuquan 735000 (China); Li, Cong; Luo, Xiaowei; Ma, Yongjun [Key Laboratory of Eco-Environment-Related Polymer Materials, Ministry of Education, Key Laboratory of Polymer Materials of Gansu Province, Key Laboratory of Bioelectrochemistry & Environmental Analysis of Gansu Province, College of Chemistry and Chemical Engineering, Northwest Normal University, Lanzhou 730070 (China)

    2017-03-15

    In this paper, a novel flow-injection chemiluminescence (FI-CL) method was established for the determination of methionine, based on its strong enhancement of the CL intensity of the luminol-KIO{sub 4} system catalyzed by thioglycolic acid-capped CdTe quantum dots in alkaline media. Under the optimized conditions, the relative CL intensity was proportional to the methionine concentration in the range from 1.0×10{sup −8} to 1.0×10{sup −5} g mL{sup −1}, with a detection limit of 6.6×10{sup −9} g mL{sup −1} (3σ). The relative standard deviation (RSD) of the CL intensity for a 1.0×10{sup −6} g mL{sup −1} standard methionine solution was 0.97% (n=11). The proposed method was successfully applied to determine methionine in commercial pharmaceutical formulations, with recoveries between 98.0% and 101.9%. The possible CL mechanism is discussed as well. - Graphical abstract: Methionine in commercial pharmaceutical formulations was determined by flow-injection chemiluminescence and the possible chemiluminescence mechanism was discussed as well.
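
    The figures of merit quoted above (linear range, 3σ detection limit, RSD, recovery) follow from ordinary linear-calibration arithmetic. The sketch below runs the calculations on invented readings; none of the numbers are from the paper, and the 3σ LOD convention is the only assumption carried over from the abstract.

```python
import math

# Hypothetical calibration: relative CL intensity assumed linear in methionine
# concentration (g/mL) over 1.0e-8 .. 1.0e-5, as reported in the abstract.
conc = [1e-8, 1e-7, 1e-6, 1e-5]
signal = [0.52, 5.1, 50.3, 498.0]          # made-up enhanced CL readings

# Ordinary least-squares slope and intercept of the calibration line.
n = len(conc)
sx, sy = sum(conc), sum(signal)
sxx = sum(c * c for c in conc)
sxy = sum(c * s for c, s in zip(conc, signal))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

# 3-sigma detection limit from replicate blank readings (invented values).
blanks = [0.030, 0.034, 0.028, 0.033, 0.031]
mean_b = sum(blanks) / len(blanks)
sd_b = math.sqrt(sum((b - mean_b) ** 2 for b in blanks) / (len(blanks) - 1))
lod = 3 * sd_b / slope

# Spike recovery for a pharmaceutical sample, analogous to the reported
# 98.0-101.9% range (reading 50.9 is invented).
spiked = 1.0e-6
found = (50.9 - intercept) / slope
recovery = 100.0 * found / spiked
```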

  16. A porous flow approach to model thermal non-equilibrium applicable to melt migration

    Science.gov (United States)

    Schmeling, Harro; Marquart, Gabriele; Grebe, Michael

    2018-01-01

    We develop an approach for heat exchange between a fluid and a solid phase of a porous medium where the temperatures of the fluid and matrix are not in thermal equilibrium. The formulation considers moving of the fluid within a resting or deforming porous matrix in an Eulerian coordinate system. The approach can be applied, for example, to partially molten systems or to brine transport in porous rocks. We start from an existing theory for heat exchange where the energy conservation equations for the fluid and the solid phases are separated and coupled by a heat exchange term. This term is extended to account for the full history of heat exchange. It depends on the microscopic geometry of the fluid phase. For the case of solid containing hot, fluid-filled channels, we derive an expression based on a time-dependent Fourier approach for periodic half-waves. On the macroscopic scale, the temporal evolution of the heat exchange leads to a convolution integral along the flow path of the solid, which simplifies considerably in case of a resting matrix. The evolution of the temperature in both phases with time is derived by inserting the heat exchange term into the energy equations. We explore the effects of thermal non-equilibrium between fluid and solid by considering simple cases with sudden temperature differences between fluid and solid as initial or boundary conditions, and by varying the fluid velocity with respect to the resting porous solid. Our results agree well with an analytical solution for non-moving fluid and solid. The temperature difference between solid and fluid depends on the Peclet number based on the Darcy velocity. For Peclet numbers larger than 1, the temperature difference after one diffusion time reaches 5 per cent of \tilde{T} or more (\tilde{T} is a scaling temperature, e.g. the initial temperature difference). Thus, our results imply that thermal non-equilibrium can play an important role for melt migration through partially molten systems.
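
    The core of the approach, two energy equations coupled by a heat exchange term, can be caricatured with a lumped two-temperature model. This is a hedged sketch only: the coupling coefficient, heat capacities and time stepping below are illustrative, and the paper's convolution-integral history term is not reproduced.

```python
def relax(Tf, Ts, h=5.0, cf=1.0, cs=2.0, dt=1e-3, steps=2000):
    # Explicit time stepping of two coupled energy balances; the phases
    # exchange heat through the coupling term h * (Tf - Ts).
    for _ in range(steps):
        q = h * (Tf - Ts)      # heat flux from fluid to solid
        Tf -= dt * q / cf
        Ts += dt * q / cs
    return Tf, Ts

Tf0, Ts0 = 1200.0, 800.0       # sudden initial temperature difference
Tf, Ts = relax(Tf0, Ts0)       # both phases relax toward a common temperature

def peclet(darcy_velocity, length, diffusivity):
    # Darcy-velocity-based Peclet number, the control parameter the abstract
    # identifies for the residual disequilibrium (symbols assumed here).
    return darcy_velocity * length / diffusivity
```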

  17. Immaterial Boys? A Large-Scale Exploration of Gender-Based Differences in Child Sexual Exploitation Service Users.

    Science.gov (United States)

    Cockbain, Ella; Ashby, Matthew; Brayley, Helen

    2017-10-01

    Child sexual exploitation is increasingly recognized nationally and internationally as a pressing child protection, crime prevention, and public health issue. In the United Kingdom, for example, a recent series of high-profile cases has fueled pressure on policy makers and practitioners to improve responses. Yet, prevailing discourse, research, and interventions around child sexual exploitation have focused overwhelmingly on female victims. This study was designed to help redress fundamental knowledge gaps around boys affected by sexual exploitation. This was achieved through rigorous quantitative analysis of individual-level data for 9,042 users of child sexual exploitation services in the United Kingdom. One third of the sample were boys, and gender was associated with statistically significant differences on many variables. The results of this exploratory study highlight the need for further targeted research and more nuanced and inclusive counter-strategies.

  18. Using Free Flow Energy Cumulation in Wind and Hydro Power Production

    Directory of Open Access Journals (Sweden)

    Lev Ktitorov

    2016-09-01

    Full Text Available When approaching a conventional wind turbine, the air flow is slowed down and widened. This results in a loss of turbine efficiency. In order to exploit wind or water flow power as effectively as possible, it was suggested that the turbine should be placed inside a shroud, which consists of 4 wing-shaped surfaces. Two internal airfoils improve the turbine performance by speeding up the flow acting on the turbine blades; two external wings create a field of low pressure behind the turbine, thus helping to draw more mass flow to the turbine and avoid the loss of efficiency due to flow deceleration. The system accumulates kinetic energy of the flow in a small volume where a smaller (and, therefore, cheaper) turbine can be installed. A smaller system can be installed inside the bigger one, which would help to accumulate even more kinetic energy on the turbine. We call this method the kinetic energy summation with local flow redistribution. Both experiments and CFD simulations demonstrate a significant increase in velocity and generated mechanical power in comparison with those for a bare turbine.
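
    The efficiency argument rests on the cubic dependence of kinetic power on flow speed, so even a modest speed-up at the rotor plane pays off disproportionately. A minimal sketch with invented numbers and a generic power coefficient Cp (none of these are measured values from the study):

```python
def turbine_power(rho, area, v, cp):
    # Kinetic power extracted from a flow: P = 0.5 * rho * A * v^3 * Cp.
    return 0.5 * rho * area * v ** 3 * cp

# Illustrative inputs: air density, a 3 m^2 rotor, a 6 m/s free stream.
bare = turbine_power(rho=1.225, area=3.0, v=6.0, cp=0.4)
# If the shroud accelerates the flow at the rotor plane by 30%:
shrouded = turbine_power(rho=1.225, area=3.0, v=6.0 * 1.3, cp=0.4)
gain = shrouded / bare   # 1.3**3, roughly a 2.2x power increase
```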

  19. Second Order Cone Programming (SOCP) Relaxation Based Optimal Power Flow with Hybrid VSC-HVDC Transmission and Active Distribution Networks

    DEFF Research Database (Denmark)

    Ding, Tao; Li, Cheng; Yang, Yongheng

    2017-01-01

    The detailed topology of renewable resource bases may have an impact on the optimal power flow of the VSC-HVDC transmission network. To address this issue, this paper develops an optimal power flow with the hybrid VSC-HVDC transmission and active distribution networks to optimally schedule the generation output and voltage regulation of both networks, which leads to a non-convex programming model. Furthermore, the non-convex power flow equations are relaxed based on the Second Order Cone Programming (SOCP) relaxation approach. Thus, the proposed model can be relaxed to a SOCP that can be tractably solved...
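
    In the standard branch-flow formulation that such SOCP relaxations typically use (assumed here; the abstract does not spell out its model), the non-convex equality l·v = P² + Q² (l: squared branch current, v: squared voltage) is relaxed to the rotated second-order cone l·v ≥ P² + Q². A hedged sketch of checking cone membership and the relaxation gap, with invented per-unit values:

```python
import math

def in_soc(P, Q, l, v):
    # l*v >= P^2 + Q^2 is equivalent to ||(2P, 2Q, l - v)|| <= l + v.
    return math.hypot(math.hypot(2 * P, 2 * Q), l - v) <= l + v + 1e-9

def relaxation_gap(P, Q, l, v):
    # Zero gap means the relaxed point satisfies the original equality,
    # i.e. the relaxation is exact at that point.
    return l * v - (P * P + Q * Q)

P, Q, v = 0.8, 0.3, 1.02           # invented per-unit branch flow and voltage
l_exact = (P * P + Q * Q) / v      # point on the cone boundary: gap == 0
l_slack = l_exact + 0.1            # strictly inside the cone: positive gap
```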

  20. BIO-EXPLOITATION STATUS OF BOMBAY DUCK (Harpadon nehereus HAMILTON, 1822 ON TRAWL FISHERY IN TARAKAN WATERS

    Directory of Open Access Journals (Sweden)

    Duto Nugroho

    2015-06-01

    Full Text Available North Kalimantan Province, notably the Tarakan City marine waters, is one of the important fishing grounds in the boundary area of the Sulu-Sulawesi Marine Ecoregion. It produces approximately 100 mt/annum of Bombay duck (Harpadon nehereus), valued at US$ 750,000. The sustainability of this fishery is a crucial concern given its substantial economic contribution, the significant dependence of small-scale fishers on this species for their livelihoods, the considerable and growing fishing intensities, and the threats to its habitats. To evaluate the vulnerability of individual species to over-exploitation, the spawning potential ratio (SPR) approach was applied to describe the status of the existing fisheries. This approach makes it possible to determine fishing mortality reference points to enhance sustainability. The objective of this study is to understand the resilience of this fish's biomass to harvesting. The SPR calculated from the estimated length at first capture (Lc) of 208 mm is 28%. Against the baseline that stocks are generally thought to risk recruitment decline when SPR < 20%, this finding indicates that the existing fishery can be described as nearly fully exploited. In recognition of this sector's ecological importance and socio-economic significance, the sustainable development of the Bombay duck fishery should be initiated by developing a local fishery committee to provide a local fishery management plan.
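
    A spawning potential ratio compares spawning biomass per recruit under fishing with its unfished value, with fishing mortality applied only above the length at first capture Lc. The sketch below is a generic length-structured illustration; the growth, maturity and mortality parameters are hypothetical and are not the Bombay duck values behind the reported 28%.

```python
import math

def spr(F, M=1.2, Linf=340.0, k=0.9, t_max=6.0, Lc=208.0, Lm=180.0, dt=0.05):
    # Spawning biomass per recruit with fishing (mortality F above Lc)
    # divided by the unfished value; weight taken proportional to L^3.
    def length(t):                       # von Bertalanffy growth curve
        return Linf * (1.0 - math.exp(-k * t))
    fished = unfished = 0.0
    n_f = n_u = 1.0
    t = 0.0
    while t < t_max:
        L = length(t)
        Z = M + (F if L >= Lc else 0.0)  # fishing acts only above Lc
        n_f *= math.exp(-Z * dt)
        n_u *= math.exp(-M * dt)
        if L >= Lm:                      # only mature fish spawn
            fished += n_f * L ** 3 * dt
            unfished += n_u * L ** 3 * dt
        t += dt
    return fished / unfished

spr_unfished = spr(F=0.0)   # by construction equals 1.0
spr_fished = spr(F=1.5)     # heavier fishing pushes the SPR down
```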

  1. Self-optimisation and model-based design of experiments for developing a C–H activation flow process

    Directory of Open Access Journals (Sweden)

    Alexander Echtermeyer

    2017-01-01

    Full Text Available A recently described C(sp3)–H activation reaction to synthesise aziridines was used as a model reaction to demonstrate the methodology of developing a process model using model-based design of experiments (MBDoE) and self-optimisation approaches in flow. The two approaches are compared in terms of experimental efficiency. The self-optimisation approach required the least number of experiments to reach the specified objectives of cost and product yield, whereas the MBDoE approach enabled a rapid generation of a process model.

  2. Comparisons of complex network based models and real train flow model to analyze Chinese railway vulnerability

    International Nuclear Information System (INIS)

    Ouyang, Min; Zhao, Lijing; Hong, Liu; Pan, Zhezhe

    2014-01-01

    Recently, numerous studies have applied complex network based models to study the performance and vulnerability of infrastructure systems under various types of attacks and hazards. But how effectively these models capture the real performance response is still a question worthy of research. Taking the Chinese railway system as an example, this paper selects three typical complex network based models, including the purely topological model (PTM), the purely shortest path model (PSPM), and the weight (link length) based shortest path model (WBSPM), to analyze railway accessibility and flow-based vulnerability and compare their results with those from the real train flow model (RTFM). The results show that the WBSPM can produce train routes with 83% of stations and 77% of railway links identical to the real routes, and approaches the RTFM best for railway vulnerability under both single and multiple component failures. The correlation coefficient for accessibility vulnerability from the WBSPM and RTFM under single station failures is 0.96, while it is 0.92 for flow-based vulnerability; under multiple station failures, where each station has the same failure probability fp, the WBSPM produces almost identical vulnerability results to those from the RTFM under almost all failure scenarios when fp is larger than 0.62 for accessibility vulnerability and 0.86 for flow-based vulnerability.
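
    The 0.96 and 0.92 figures above are correlation coefficients between vulnerability values from the surrogate network model and the real train flow model. A plain Pearson correlation reproduces the comparison metric; the per-station vulnerability vectors below are invented stand-ins, not data from the study.

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

wbspm = [0.12, 0.35, 0.50, 0.81, 0.93]   # surrogate-model vulnerabilities
rtfm  = [0.10, 0.30, 0.55, 0.78, 0.97]   # "real train flow model" values

r = pearson(wbspm, rtfm)                 # close to 1 when the models agree
```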

  3. Duality based optical flow algorithms with applications

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau

    We consider the popular TV-L1 optical flow formulation, and the so-called duality based algorithm for minimizing the TV-L1 energy. The original formulation is extended to allow for vector valued images, and minimization results are given. In addition we consider different definitions of total variation regularization, and related formulations of the optical flow problem that may be used with a duality based algorithm. We present a highly optimized algorithmic setup to estimate optical flows, and give five novel applications. The first application is registration of medical images, where X-ray images of different hands, taken using different imaging devices, are registered using a TV-L1 optical flow algorithm. We propose to regularize the input images, using sparsity enhancing regularization of the image gradient to improve registration results. The second application is registration of 2D...

  4. Exploitation of cloud computing in management of construction projects in Slovakia

    Directory of Open Access Journals (Sweden)

    Mandičák Tomáš

    2016-12-01

    Full Text Available Cloud computing is a highly topical issue. It represents a new model for information technology (IT) services based on the exploitation of the Web (the "cloud") and other application platforms, as well as software as a service. In general, the exploitation of cloud computing in construction project management has several advantages, as demonstrated by several research reports. Currently, research quantifying the exploitation of cloud computing in the Slovak construction industry has not yet been carried out. The article discusses the exploitation of cloud computing in construction project management in Slovakia. The main objective of the research is to confirm whether factors such as the size of the construction enterprise, its ownership, and the participant's role in the construction project have any impact on the level of exploitation of cloud computing in construction project management. This includes confirmation of differences in use between different participants in a construction project, and between construction enterprises broken down by size and shareholders.

  5. Flowbca : A flow-based cluster algorithm in Stata

    NARCIS (Netherlands)

    Meekes, J.; Hassink, W.H.J.

    In this article, we introduce the Stata implementation of a flow-based cluster algorithm written in Mata. The main purpose of the flowbca command is to identify clusters based on relational data of flows. We illustrate the command by providing multiple applications, from the research fields of

  6. Exploiting sparsity of interconnections in spatio-temporal wind speed forecasting using Wavelet Transform

    International Nuclear Information System (INIS)

    Tascikaraoglu, Akin; Sanandaji, Borhan M.; Poolla, Kameshwar; Varaiya, Pravin

    2016-01-01

    Highlights: • We propose a spatio-temporal approach for wind speed forecasting. • The method is based on a combination of Wavelet decomposition and structured-sparse recovery. • Our analyses confirm that low-dimensional structures govern the interactions between stations. • Our method particularly shows improvements for profiles with high ramps. • We examine our approach on real data and illustrate its superiority over a set of benchmark models. - Abstract: Integration of renewable energy resources into the power grid is essential in achieving the envisioned sustainable energy future. Stochasticity and intermittency characteristics of renewable energies, however, present challenges for integrating these resources into the existing grid in a large scale. Reliable renewable energy integration is facilitated by accurate wind forecasts. In this paper, we propose a novel wind speed forecasting method which first utilizes Wavelet Transform (WT) for decomposition of the wind speed data into more stationary components and then uses a spatio-temporal model on each sub-series for incorporating both temporal and spatial information. The proposed spatio-temporal forecasting approach on each sub-series is based on the assumption that there usually exists an intrinsic low-dimensional structure between time series data in a collection of meteorological stations. Our approach is inspired by Compressive Sensing (CS) and structured-sparse recovery algorithms. Based on detailed case studies, we show that the proposed approach, which exploits the sparsity of correlations between a large set of meteorological stations and decomposes the time series for higher-accuracy forecasts, considerably improves the short-term forecasts compared to the temporal and spatio-temporal benchmark methods.

  7. Fishing elevates variability in the abundance of exploited species.

    Science.gov (United States)

    Hsieh, Chih-Hao; Reiss, Christian S; Hunter, John R; Beddington, John R; May, Robert M; Sugihara, George

    2006-10-19

    The separation of the effects of environmental variability from the impacts of fishing has been elusive, but is essential for sound fisheries management. We distinguish environmental effects from fishing effects by comparing the temporal variability of exploited versus unexploited fish stocks living in the same environments. Using the unique suite of 50-year-long larval fish surveys from the California Cooperative Oceanic Fisheries Investigations we analyse fishing as a treatment effect in a long-term ecological experiment. Here we present evidence from the marine environment that exploited species exhibit higher temporal variability in abundance than unexploited species. This remains true after accounting for life-history effects, abundance, ecological traits and phylogeny. The increased variability of exploited populations is probably caused by fishery-induced truncation of the age structure, which reduces the capacity of populations to buffer environmental events. Therefore, to avoid collapse, fisheries must be managed not only to sustain the total viable biomass but also to prevent the significant truncation of age structure. The double jeopardy of fishing to potentially deplete stock sizes and, more immediately, to amplify the peaks and valleys of population variability, calls for a precautionary management approach.

  8. Practical application of game theory based production flow planning method in virtual manufacturing networks

    Science.gov (United States)

    Olender, M.; Krenczyk, D.

    2016-08-01

    Modern enterprises have to react quickly to dynamic changes in the market due to changing customer requirements and expectations. One of the key areas of production management that must continuously evolve, by searching for new methods and tools for increasing the efficiency of manufacturing systems, is production flow planning and control. These aspects are closely connected with the ability to implement the concepts of the Virtual Enterprise (VE) and the Virtual Manufacturing Network (VMN), in which an integrated infrastructure of flexible resources is created. In the proposed approach, the roles of the players are performed by objects associated with the objective functions, allowing multiobjective production flow planning problems to be solved using game theory, which rests on the theory of strategic situations. For defined production system and production order models, ways of solving the production route planning problem in a VMN are presented through computational examples for different variants of production flow. Possible decision strategies, together with an analysis of the calculation results, are shown.
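
The abstract does not give the exact game formulation, but the idea of objective functions acting as players choosing production routes can be sketched as a two-player bimatrix game whose pure-strategy Nash equilibria are found by best-response enumeration. The payoff matrices below are hypothetical illustrations, not the paper's model:

```python
import numpy as np

def pure_nash(payoff_a, payoff_b):
    """Enumerate pure-strategy Nash equilibria of a two-player game.
    payoff_a[i, j] / payoff_b[i, j] are the players' payoffs when the row
    player picks route i and the column player picks route j."""
    eq = []
    for i in range(payoff_a.shape[0]):
        for j in range(payoff_a.shape[1]):
            best_row = payoff_a[i, j] >= payoff_a[:, j].max()  # no better row reply
            best_col = payoff_b[i, j] >= payoff_b[i, :].max()  # no better column reply
            if best_row and best_col:
                eq.append((i, j))
    return eq

# Hypothetical payoffs: player A = a throughput objective, player B = a cost
# objective; strategies = three candidate production routes each.
A = np.array([[3, 1, 2], [2, 4, 1], [1, 2, 3]])
B = np.array([[2, 1, 1], [1, 3, 2], [3, 1, 4]])
print(pure_nash(A, B))  # → [(0, 0), (1, 1), (2, 2)]
```

Each equilibrium is a pair of routes from which neither objective can unilaterally improve.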

  9. Redefining Exploitation

    DEFF Research Database (Denmark)

    Agarwala, Rina

    2016-01-01

    This article examines how self-employed workers are organizing in the garments and waste collection industries in India. Although the question of who is profiting from self-employed workers’ labor is complex, the cases outlined in this paper highlight telling instances of how some self-employed workers are organizing as workers. They are fighting labor exploitation by redefining the concept to include additional exploitation axes (from the state and middle class) and forms (including sexual). In doing so, they are redefining potential solutions, including identities and material benefits, to fit their unique needs. By expanding the category of “workers” beyond those defined by a narrow focus on a standard employer-employee relationship, these movements are also fighting exclusion from earlier labor protections by increasing the number of entitled beneficiaries. These struggles provide an important...

  10. A combined data mining approach using rough set theory and case-based reasoning in medical datasets

    Directory of Open Access Journals (Sweden)

    Mohammad Taghi Rezvan

    2014-06-01

    Full Text Available Case-based reasoning (CBR) is the process of solving new cases by retrieving the most relevant ones from an existing knowledge base. Since irrelevant or redundant features not only remarkably increase memory requirements but also increase the time complexity of case retrieval, reducing the number of dimensions is an issue worth considering. This paper uses rough set theory (RST) to reduce the number of dimensions in a CBR classifier with the aim of increasing accuracy and efficiency. The CBR component exploits a distance based on the co-occurrence of categorical data to measure the similarity of cases. This distance is based on the proportional distribution of the different categorical values of features, and the weight used for a feature is the average of the co-occurrence values of that feature. The combination of RST and CBR has been applied to the real categorical datasets Wisconsin Breast Cancer, Lymphography, and Primary cancer. The 5-fold cross-validation method is used to evaluate the performance of the proposed approach. The results show that this combined approach lowers computational costs and improves performance metrics, including accuracy and interpretability, compared to other approaches developed in the literature.
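
The co-occurrence distance described here (proportional distributions of categorical values with respect to the class) resembles the value difference metric. The following is a sketch of that formulation on toy data, not necessarily the paper's exact measure:

```python
import numpy as np
from collections import Counter

def vdm_column(values, labels):
    """Value-difference-style distances between the categories of one feature:
    d(a, b) = sum_c |P(class=c | a) - P(class=c | b)|.
    Categories are close when they co-occur with the classes in similar
    proportions, which is the co-occurrence idea described in the abstract."""
    classes = sorted(set(labels))
    cats = sorted(set(values))
    cond = {}
    for cat in cats:
        cat_labels = [l for v, l in zip(values, labels) if v == cat]
        counts = Counter(cat_labels)
        cond[cat] = np.array([counts[c] / len(cat_labels) for c in classes])
    return {(a, b): float(np.abs(cond[a] - cond[b]).sum())
            for a in cats for b in cats}

# Toy data: categories 'x' and 'y' behave identically w.r.t. the class,
# while 'z' behaves differently, so d(x, y) < d(x, z).
feature = ['x', 'x', 'y', 'y', 'z', 'z']
label   = [ 0,   1,   0,   1,   1,   1 ]
d = vdm_column(feature, label)
print(d[('x', 'y')], d[('x', 'z')])
```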

  11. An objective and parsimonious approach for classifying natural flow regimes at a continental scale

    Science.gov (United States)

    Archfield, S. A.; Kennen, J.; Carlisle, D.; Wolock, D.

    2013-12-01

    Hydroecological stream classification--the process of grouping streams by similar hydrologic responses and, thereby, similar aquatic habitat--has been widely accepted and is often one of the first steps towards developing ecological flow targets. Despite its importance, the last national classification of streamgauges was completed about 20 years ago. A new classification of 1,534 streamgauges in the contiguous United States is presented using a novel and parsimonious approach to understand similarity in ecological streamflow response. This new classification approach uses seven fundamental daily streamflow statistics (FDSS) rather than winnowing down an uncorrelated subset from 200 or more ecologically relevant streamflow statistics (ERSS) commonly used in hydroecological classification studies. The results of this investigation demonstrate that the distributions of 33 tested ERSS are consistently different among the classes derived from the seven FDSS. It is further shown that classification based solely on the 33 ERSS generally does a poorer job in grouping similar streamgauges than the classification based on the seven FDSS. This new classification approach has the additional advantages of overcoming some of the subjectivity associated with the selection of the classification variables and provides a set of robust continental-scale classes of US streamgauges.
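
As an illustration of grouping gauges by daily streamflow statistics: the abstract does not enumerate the seven FDSS, so the statistics below are plausible stand-ins, and the minimal k-means is only one possible clustering choice, not necessarily the study's method.

```python
import numpy as np

def flow_statistics(q):
    """Illustrative daily-streamflow statistics for one gauge (stand-ins for
    the paper's seven FDSS): mean, CV, skewness, lag-1 autocorrelation, and
    mean-normalized low/high-flow quantiles."""
    q = np.asarray(q, float)
    mean, std = q.mean(), q.std()
    skew = ((q - mean) ** 3).mean() / std ** 3
    ac1 = np.corrcoef(q[:-1], q[1:])[0, 1]
    q10, q90 = np.quantile(q, [0.1, 0.9]) / mean
    return np.array([mean, std / mean, skew, ac1, q10, q90])

def kmeans(X, k, n_iter=50, seed=0):
    """Minimal k-means to group gauges with similar statistics."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = ((X[:, None] - centers) ** 2).sum(-1).argmin(1)
        centers = np.array([X[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels

rng = np.random.default_rng(2)
# Synthetic gauges: five stable and five flashy one-year hydrographs.
stable = [rng.lognormal(3.0, 0.1, 365) for _ in range(5)]
flashy = [rng.lognormal(3.0, 1.5, 365) for _ in range(5)]
X = np.array([flow_statistics(q) for q in stable + flashy])
X = (X - X.mean(0)) / X.std(0)        # standardize before clustering
labels = kmeans(X, k=2)
```

With such strongly contrasting regimes, the two synthetic groups land in separate classes.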

  12. Design of a mesoscale continuous flow route towards lithiated methoxyallene.

    Science.gov (United States)

    Seghers, Sofie; Heugebaert, Thomas S A; Moens, Matthias; Sonck, Jolien; Thybaut, Joris; Stevens, Chris Victor

    2018-05-11

    The unique nucleophilic properties of lithiated methoxyallene allow for C-C bond formation with a wide variety of electrophiles, thus introducing an allenic group for further functionalization. This approach has yielded a tremendously broad range of (hetero)cyclic scaffolds, including API precursors. To date, however, its valorization at scale is hampered by the batch synthesis protocol which suffers from serious safety issues. Hence, the attractive heat and mass transfer properties of flow technology were exploited to establish a mesoscale continuous flow route towards lithiated methoxyallene. An excellent conversion of 94% was obtained, corresponding to a methoxyallene throughput of 8.2 g/h. The process is characterized by short reaction times, mild reaction conditions and a stoichiometric use of reagents. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Distributed amplifier using Josephson vortex flow transistors

    International Nuclear Information System (INIS)

    McGinnis, D.P.; Beyer, J.B.; Nordman, J.E.

    1986-01-01

    A wide-band traveling wave amplifier using vortex flow transistors is proposed. A vortex flow transistor is a long Josephson junction used as a current controlled voltage source. The dual nature of this device to the field effect transistor is exploited. A circuit model of this device is proposed and a distributed amplifier utilizing 50 vortex flow transistors is predicted to have useful gain to 100 GHz

  14. Estimation of flow stress of radiation induced F/M steels using molecular dynamics and discrete dislocation dynamics approach

    International Nuclear Information System (INIS)

    More, Ameya; Dutta, B.K.; Durgaprasad, P.V.; Arya, A.K.

    2012-01-01

    Fe-Cr based Ferritic/Martensitic (F/M) steels are the candidate structural materials for future fusion reactors. In this work, a multi-scale approach comprising atomistic Molecular Dynamics (MD) simulations and Discrete Dislocation Dynamics (DDD) simulations is used to model the effect of irradiation dose on the flow stress of F/M steels. At the atomic scale, molecular dynamics simulations are used to study the dislocation interaction with irradiation-induced defects, i.e. voids and He bubbles, whereas the DDD simulations are used to estimate the change in flow stress of the material as a result of irradiation hardening. (author)

  15. Multiphase reacting flows modelling and simulation

    CERN Document Server

    Marchisio, Daniele L

    2007-01-01

    The papers in this book describe the most widely applicable modeling approaches and are organized in six groups covering topics from fundamentals to relevant applications. In the first part, some fundamentals of multiphase turbulent reacting flows are covered. In particular, the introduction focuses on basic notions of turbulence theory in single-phase and multi-phase systems as well as on the interaction between turbulence and chemistry. In the second part, models for the physical and chemical processes involved are discussed. Among other things, particular emphasis is given to turbulence modeling strategies for multiphase flows based on the kinetic theory for granular flows. Next, the different numerical methods based on Lagrangian and/or Eulerian schemes are presented. In particular, the most popular numerical approaches of computational fluid dynamics codes are described (i.e., Direct Numerical Simulation, Large Eddy Simulation, and the Reynolds-Averaged Navier-Stokes approach). The book will cover particle-based meth...

  16. Comparison of algebraic and analytical approaches to the formulation of the statistical model-based reconstruction problem for X-ray computed tomography.

    Science.gov (United States)

    Cierniak, Robert; Lorent, Anna

    2016-09-01

    The main aim of this paper is to investigate the conditioning-related properties of our originally formulated statistical model-based iterative approach to the image reconstruction from projections problem and, in this manner, to demonstrate the superiority of this approach over those recently used by other authors. The reconstruction algorithm based on this conception uses maximum likelihood estimation with an objective adjusted to the probability distribution of measured signals obtained from an X-ray computed tomography system with parallel beam geometry. The analysis and experimental results presented here show that our analytical approach outperforms the referential algebraic methodology, which is explored widely in the literature and exploited in various commercial implementations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Exploiting conformational ensembles in modeling protein-protein interactions on the proteome scale

    Science.gov (United States)

    Kuzu, Guray; Gursoy, Attila; Nussinov, Ruth; Keskin, Ozlem

    2013-01-01

    Cellular functions are performed through protein-protein interactions; therefore, identification of these interactions is crucial for understanding biological processes. Recent studies suggest that knowledge-based approaches are more useful than ‘blind’ docking for modeling at large scales. However, a caveat of knowledge-based approaches is that they treat molecules as rigid structures. The Protein Data Bank (PDB) offers a wealth of conformations. Here, we exploited ensembles of conformations in predictions by a knowledge-based method, PRISM. We tested ‘difficult’ cases in a docking-benchmark dataset, where the unbound and bound protein forms are structurally different. Considering alternative conformations for each protein, the percentage of successfully predicted interactions increased from ~26% to 66%, and 57% of the interactions were successfully predicted in an ‘unbiased’ scenario, in which data related to the bound forms were not utilized. If the appropriate conformation, or relevant template interface, is unavailable in the PDB, PRISM could not predict the interaction successfully. The pace of the growth of the PDB promises a rapid increase of ensemble conformations, emphasizing the merit of such knowledge-based ensemble strategies for higher success rates in protein-protein interaction predictions on an interactome scale. We constructed the structural network of ERK-interacting proteins as a case study. PMID:23590674

  18. Relativistic quantum metrology: exploiting relativity to improve quantum measurement technologies.

    Science.gov (United States)

    Ahmadi, Mehdi; Bruschi, David Edward; Sabín, Carlos; Adesso, Gerardo; Fuentes, Ivette

    2014-05-22

    We present a framework for relativistic quantum metrology that is useful for both Earth-based and space-based technologies. Quantum metrology has been so far successfully applied to design precision instruments such as clocks and sensors which outperform classical devices by exploiting quantum properties. There are advanced plans to implement these and other quantum technologies in space, for instance Space-QUEST and Space Optical Clock projects intend to implement quantum communications and quantum clocks at regimes where relativity starts to kick in. However, typical setups do not take into account the effects of relativity on quantum properties. To include and exploit these effects, we introduce techniques for the application of metrology to quantum field theory. Quantum field theory properly incorporates quantum theory and relativity, in particular, at regimes where space-based experiments take place. This framework allows for high precision estimation of parameters that appear in quantum field theory including proper times and accelerations. Indeed, the techniques can be applied to develop a novel generation of relativistic quantum technologies for gravimeters, clocks and sensors. As an example, we present a high precision device which in principle improves the state-of-the-art in quantum accelerometers by exploiting relativistic effects.

  19. Reliable fault detection and diagnosis of photovoltaic systems based on statistical monitoring approaches

    KAUST Repository

    Harrou, Fouzi; Sun, Ying; Taghezouit, Bilal; Saidi, Ahmed; Hamlati, Mohamed-Elkarim

    2017-01-01

    This study reports the development of an innovative fault detection and diagnosis scheme to monitor the direct current (DC) side of photovoltaic (PV) systems. Towards this end, we propose a statistical approach that exploits the advantages of one

  20. Atomic orbital-based SOS-MP2 with tensor hypercontraction. I. GPU-based tensor construction and exploiting sparsity

    Energy Technology Data Exchange (ETDEWEB)

    Song, Chenchen; Martínez, Todd J. [Department of Chemistry and the PULSE Institute, Stanford University, Stanford, California 94305 (United States); SLAC National Accelerator Laboratory, Menlo Park, California 94025 (United States)

    2016-05-07

    We present a tensor hypercontracted (THC) scaled opposite spin second order Møller-Plesset perturbation theory (SOS-MP2) method. By using THC, we reduce the formal scaling of SOS-MP2 with respect to molecular size from quartic to cubic. We achieve further efficiency by exploiting sparsity in the atomic orbitals and using graphical processing units (GPUs) to accelerate integral construction and matrix multiplication. The practical scaling of GPU-accelerated atomic orbital-based THC-SOS-MP2 calculations is found to be N^2.6 for reference data sets of water clusters and alanine polypeptides containing up to 1600 basis functions. The errors in correlation energy with respect to density-fitting-SOS-MP2 are less than 0.5 kcal/mol for all systems tested (up to 162 atoms).

  1. Atomic orbital-based SOS-MP2 with tensor hypercontraction. I. GPU-based tensor construction and exploiting sparsity.

    Science.gov (United States)

    Song, Chenchen; Martínez, Todd J

    2016-05-07

    We present a tensor hypercontracted (THC) scaled opposite spin second order Møller-Plesset perturbation theory (SOS-MP2) method. By using THC, we reduce the formal scaling of SOS-MP2 with respect to molecular size from quartic to cubic. We achieve further efficiency by exploiting sparsity in the atomic orbitals and using graphical processing units (GPUs) to accelerate integral construction and matrix multiplication. The practical scaling of GPU-accelerated atomic orbital-based THC-SOS-MP2 calculations is found to be N^2.6 for reference data sets of water clusters and alanine polypeptides containing up to 1600 basis functions. The errors in correlation energy with respect to density-fitting-SOS-MP2 are less than 0.5 kcal/mol for all systems tested (up to 162 atoms).
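
A practical scaling exponent such as N^2.6 is typically estimated by a straight-line fit in log-log space, since a power law t = c·N^p appears there with slope p. A minimal sketch on synthetic timings (not the paper's data):

```python
import numpy as np

# Hypothetical (basis-set size, wall-time) pairs lying exactly on t = c * N**2.6.
N = np.array([200, 400, 800, 1600])
t = 1e-6 * N ** 2.6

# Fit log t = p * log N + log c; the slope is the scaling exponent.
p, _ = np.polyfit(np.log(N), np.log(t), 1)
print(round(p, 2))  # prints 2.6
```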

  2. A Novel Approach to Detect Network Attacks Using G-HMM-Based Temporal Relations between Internet Protocol Packets

    Directory of Open Access Journals (Sweden)

    Han Kyusuk

    2011-01-01

    Full Text Available This paper introduces novel attack detection approaches for mobile and wireless device and network security which consider temporal relations between Internet Protocol packets. We first present a field selection technique using a Genetic Algorithm (GA) and generate a Packet-based Mining Association Rule (PMAR) from an original Mining Association Rule for a Support Vector Machine (SVM) in the mobile and wireless network environment. Through preprocessing with PMAR, the SVM inputs can account for time variation between packets in a mobile and wireless network. We then present a Gaussian observation Hidden Markov Model (G-HMM) to exploit the hidden relationships between packets based on probabilistic estimation. In our G-HMM approach, we also apply G-HMM feature reduction for better initialization. We demonstrate the usefulness of our SVM and G-HMM approaches with GA on the MIT Lincoln Lab datasets and a live dataset that we captured on a real mobile and wireless network. Moreover, the experimental results are verified by k-fold cross-validation tests.
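
The Gaussian-observation HMM at the core of this approach scores packet sequences with the forward algorithm. Below is a minimal sketch using hypothetical two-state parameters (e.g. "normal" vs "burst" inter-arrival regimes), not the paper's trained model:

```python
import numpy as np

def gaussian_hmm_loglik(x, start, trans, means, stds):
    """Log-likelihood of observation sequence x under a Gaussian-observation
    HMM via the scaled forward algorithm. Here the hidden states might encode
    traffic regimes and x packet inter-arrival times (hypothetical framing)."""
    def emit(o):
        # Per-state Gaussian densities of observation o.
        return np.exp(-0.5 * ((o - means) / stds) ** 2) / (stds * np.sqrt(2 * np.pi))
    alpha = start * emit(x[0])
    loglik = 0.0
    for o in x[1:]:
        c = alpha.sum()                       # scaling to avoid underflow
        loglik += np.log(c)
        alpha = (alpha / c) @ trans * emit(o)  # predict, then weight by emission
    return loglik + np.log(alpha.sum())

# Hypothetical 2-state model: long "normal" gaps vs short "burst" gaps.
start = np.array([0.8, 0.2])
trans = np.array([[0.9, 0.1], [0.3, 0.7]])
means = np.array([1.0, 0.1])
stds = np.array([0.3, 0.05])
normal_seq = np.array([1.1, 0.9, 1.0, 1.2, 0.8])
burst_seq = np.array([0.1, 0.12, 0.08, 0.11, 0.09])
print(gaussian_hmm_loglik(normal_seq, start, trans, means, stds))
print(gaussian_hmm_loglik(burst_seq, start, trans, means, stds))
```

Sequences that fit neither regime receive a much lower log-likelihood, which is the basis for flagging anomalies.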

  3. Exploration, Exploitation, and Organizational Coordination Mechanisms

    Directory of Open Access Journals (Sweden)

    Silvio Popadiuk

    2016-03-01

    Full Text Available This paper presents an empirical relationship among exploration, exploitation, and organizational coordination mechanisms, classified as the centralization of decision-making, formalization, and connectedness. In order to analyze the findings of this survey, we used two techniques: Principal Component Analysis (PCA) and Partial Least Squares Path Modeling (PLS-PM). Our analysis was supported by 249 answers from managers of companies located in Brazil (convenience sampling). Contrary to expectations, centralization and exploitation were negatively associated. Our data support the research hypothesis that formalization is positively associated with exploitation. Although the relationship between formalization and exploration was significant, the result is contrary to the research hypothesis that we made. The relationships between connectedness and exploitation, and between connectedness and exploration, were both positive and significant. This means that the more connectedness increases, the higher the likelihood of both exploitation and exploration.

  4. Exploring, exploiting and evolving diversity of aquatic ecosystem models

    DEFF Research Database (Denmark)

    Janssen, Annette B G; Arhonditsis, George B.; Beusen, Arthur

    2015-01-01

    Here, we present a community perspective on how to explore, exploit and evolve the diversity in aquatic ecosystem models. These models play an important role in understanding the functioning of aquatic ecosystems, filling in observation gaps and developing effective strategies for water quality management. In this spirit, numerous models have been developed since the 1970s. We set off to explore model diversity by making an inventory among 42 aquatic ecosystem modellers, by categorizing the resulting set of models and by analysing them for diversity. We then focus on how to exploit model diversity ... available through open-source policies, to standardize documentation and technical implementation of models, and to compare models through ensemble modelling and interdisciplinary approaches. We end with our perspective on how the field of aquatic ecosystem modelling might develop in the next 5–10 years...

  5. Real-Time Model-Based Leak-Through Detection within Cryogenic Flow Systems

    Science.gov (United States)

    Walker, M.; Figueroa, F.

    2015-01-01

    The timely detection of leaks within cryogenic fuel replenishment systems is of significant importance to operators on account of the safety and economic impacts associated with material loss and operational inefficiencies. The associated loss of pressure control also affects the stability and the ability to control the phase of cryogenic fluids during replenishment operations. Current research dedicated to providing Prognostics and Health Management (PHM) coverage of such cryogenic replenishment systems has focused on the detection of leaks to atmosphere involving relatively simple model-based diagnostic approaches that, while effective, are unable to isolate the fault to specific piping system components. The authors have extended this research to focus on the detection of leaks through closed valves that are intended to isolate sections of the piping system from the flow and pressurization of cryogenic fluids. The described approach employs model-based detection of leak-through conditions based on correlations of pressure changes across isolation valves and attempts to isolate the faults to specific valves. Implementation of this capability is enabled by knowledge and information embedded in the domain model of the system. The approach has been used effectively to detect such leak-through faults during cryogenic operational testing at the Cryogenic Testbed at NASA's Kennedy Space Center.
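
The core leak-through idea, correlating pressure behavior on either side of a nominally closed isolation valve, can be sketched as follows. The pressure traces and thresholds are hypothetical, and the paper's method additionally uses a domain model of the system for fault isolation:

```python
import numpy as np

def leak_through_score(p_up, p_down):
    """Correlation between pressure histories upstream and downstream of a
    closed isolation valve. With a sealed valve the downstream pressure should
    be uncoupled from upstream pressurization, so a correlation near 1 flags
    leak-through. (A sketch of the correlation idea only.)"""
    return float(np.corrcoef(p_up, p_down)[0, 1])

rng = np.random.default_rng(3)
t = np.arange(200.0)
ramp = 1 - np.exp(-t / 50.0)                      # upstream pressurization profile
p_up = 100 + 20 * ramp + 0.1 * rng.standard_normal(200)
sealed = 14.7 + 0.1 * rng.standard_normal(200)    # downstream, sealed valve
leaking = 14.7 + 4 * ramp + 0.1 * rng.standard_normal(200)  # tracks upstream trend
print(leak_through_score(p_up, sealed), leak_through_score(p_up, leaking))
```

The sealed case correlates near zero while the leak-through case correlates near one.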

  6. Application of a stereoscopic digital subtraction angiography approach to blood flow analysis

    International Nuclear Information System (INIS)

    Fencil, L.E.; Doi, K.; Hoffmann, K.R.

    1986-01-01

    The authors are developing a stereoscopic digital subtraction angiographic (DSA) approach for accurate measurement of the size, magnification factor, orientation, and blood flow of a selected vessel segment. We employ a Siemens Digitron 2 and a Stereolix x-ray tube with a 25-mm tube shift. Absolute vessel sizes in each stereoscopic image are determined using the magnification factor and an iterative deconvolution technique employing the line spread function (LSF) of the DSA system. From data on vessel diameter and three-dimensional orientation, the effective attenuation coefficient of the diluted contrast medium can be determined, thus allowing accurate blood flow analysis in high-frame-rate DSA images. The accuracy and precision of the approach will be studied using both static and dynamic phantoms.

  7. Two different approaches for creating a prescribed opposed-flow velocity field for flame spread experiments

    Directory of Open Access Journals (Sweden)

    Carmignani Luca

    2015-01-01

    Full Text Available Opposed-flow flame spread over solid fuels is a fundamental area of research in fire science. Typically, combustion wind tunnels are used to generate the opposing flow of oxidizer against which a laminar flame spreads along the fuel samples. The spreading flame is generally embedded in a laminar boundary layer, which interacts with the strong buoyancy-induced flow to affect the mechanism of flame spread. In this work, two different approaches for creating the opposed flow are compared. In the first approach, a vertical combustion tunnel is used where a thin fuel sample, thin acrylic or ashless filter paper, is held vertically along the axis of the test section, with the airflow regulated by adjusting the duty cycles of four fans. As the sample is ignited, a flame spreads downward in a steady manner along a developing boundary layer. In the second approach, the sample is held in a movable cart placed in an eight-meter-tall vertical chamber filled with air. As the sample is ignited, the cart is moved downward (through a remote-controlled mechanism) at a prescribed velocity. The results from the two approaches are compared to establish the boundary-layer effect on flame spread over thin fuels.

  8. An eigenvalue approach for the automatic scaling of unknowns in model-based reconstructions: Application to real-time phase-contrast flow MRI.

    Science.gov (United States)

    Tan, Zhengguo; Hohage, Thorsten; Kalentev, Oleksandr; Joseph, Arun A; Wang, Xiaoqing; Voit, Dirk; Merboldt, K Dietmar; Frahm, Jens

    2017-12-01

    The purpose of this work is to develop an automatic method for the scaling of unknowns in model-based nonlinear inverse reconstructions and to evaluate its application to real-time phase-contrast (RT-PC) flow magnetic resonance imaging (MRI). Model-based MRI reconstructions of parametric maps which describe a physical or physiological function require the solution of a nonlinear inverse problem, because the list of unknowns in the extended MRI signal equation comprises multiple functional parameters and all coil sensitivity profiles. Iterative solutions therefore rely on an appropriate scaling of unknowns to numerically balance partial derivatives and regularization terms. The scaling of unknowns emerges as a self-adjoint and positive-definite matrix which is expressible by its maximal eigenvalue and solved by power iterations. The proposed method is applied to RT-PC flow MRI based on highly undersampled acquisitions. Experimental validations include numerical phantoms providing ground truth and a wide range of human studies in the ascending aorta, carotid arteries, deep veins during muscular exercise and cerebrospinal fluid during deep respiration. For RT-PC flow MRI, model-based reconstructions with automatic scaling not only offer velocity maps with high spatiotemporal acuity and much reduced phase noise, but also ensure fast convergence as well as accurate and precise velocities for all conditions tested, i.e. for different velocity ranges, vessel sizes and the simultaneous presence of signals with velocity aliasing. In summary, the proposed automatic scaling of unknowns in model-based MRI reconstructions yields quantitatively reliable velocities for RT-PC flow MRI in various experimental scenarios. Copyright © 2017 John Wiley & Sons, Ltd.
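
The power iteration described here, solving for the maximal eigenvalue of the self-adjoint, positive-definite scaling matrix, can be sketched directly. The toy matrix below is a stand-in for that operator, not the MRI reconstruction matrix itself:

```python
import numpy as np

def max_eigenvalue(M, n_iter=200, seed=0):
    """Power iteration for the maximal eigenvalue of a self-adjoint,
    positive-definite matrix M, as used to balance the scaling of unknowns."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(M.shape[0])
    for _ in range(n_iter):
        w = M @ v
        v = w / np.linalg.norm(w)   # repeated application aligns v with the top eigenvector
    return float(v @ M @ v)         # Rayleigh quotient at convergence

# Toy SPD matrix with known maximal eigenvalue 5.0.
M = np.diag([5.0, 2.0, 1.0])
print(max_eigenvalue(M))
```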

  9. Automated UAV-based mapping for airborne reconnaissance and video exploitation

    Science.gov (United States)

    Se, Stephen; Firoozfam, Pezhman; Goldstein, Norman; Wu, Linda; Dutkiewicz, Melanie; Pace, Paul; Naud, J. L. Pierre

    2009-05-01

    Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for force protection, situational awareness, mission planning, damage assessment and others. UAVs gather huge amount of video data but it is extremely labour-intensive for operators to analyse hours and hours of received data. At MDA, we have developed a suite of tools towards automated video exploitation including calibration, visualization, change detection and 3D reconstruction. The on-going work is to improve the robustness of these tools and automate the process as much as possible. Our calibration tool extracts and matches tie-points in the video frames incrementally to recover the camera calibration and poses, which are then refined by bundle adjustment. Our visualization tool stabilizes the video, expands its field-of-view and creates a geo-referenced mosaic from the video frames. It is important to identify anomalies in a scene, which may include detecting any improvised explosive devices (IED). However, it is tedious and difficult to compare video clips to look for differences manually. Our change detection tool allows the user to load two video clips taken from two passes at different times and flags any changes between them. 3D models are useful for situational awareness, as it is easier to understand the scene by visualizing it in 3D. Our 3D reconstruction tool creates calibrated photo-realistic 3D models from video clips taken from different viewpoints, using both semi-automated and automated approaches. The resulting 3D models also allow distance measurements and line-of- sight analysis.

  10. Modelling mean transit time of stream base flow during tropical cyclone rainstorm in a steep relief forested catchment

    Science.gov (United States)

    Lee, Jun-Yi; Huang, Jr-Chuan

    2017-04-01

    Mean transit time (MTT) is one of the fundamental catchment descriptors used to advance understanding of hydrological, ecological, and biogeochemical processes and to improve water resources management. However, few studies have documented the base flow partitioning (BFP) and mean transit time within mountainous catchments in typhoon alley. We used a unique data set of 18O isotope and conductivity compositions of rainfall (136 mm to 778 mm) and streamflow water samples collected for 14 tropical cyclone events (during 2011 to 2015) in a steep-relief forested catchment (Pinglin, in northern Taiwan). A lumped hydrological model, HBV, with a dispersion-model transit time distribution was used to estimate total flow, base flow, and the MTT of stream base flow. Linear regressions between MTT and hydrometric variables (precipitation intensity and the antecedent precipitation index) were used to explore controls on MTT variation. Results revealed that the simulation performance for both total flow and base flow was satisfactory; the Nash-Sutcliffe model efficiency coefficient was 0.848 for total flow and 0.732 for base flow. Estimated MTTs decreased with increasing event magnitude and varied between 4 and 21 days as the BFP increased from 63% to 92%. The negative correlation of event magnitude with MTT and BFP shows that the forcing controls the MTT and BFP. A negative relationship between MTT and the antecedent precipitation index was also found; in other words, wetter antecedent moisture conditions activate the fast flow paths more rapidly. This approach is well suited for constraining process-based modeling in a range of high-precipitation-intensity, steep-relief forested environments.
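
The Nash-Sutcliffe efficiency used here to judge the flow simulations has a simple closed form: one minus the ratio of the squared simulation error to the variance of the observations. A minimal sketch with hypothetical flow values:

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe model efficiency coefficient.
    1.0 is a perfect fit; 0.0 means the model is no better than simply
    predicting the observed mean; negative values are worse than the mean."""
    o = np.asarray(observed, float)
    s = np.asarray(simulated, float)
    return 1.0 - ((o - s) ** 2).sum() / ((o - o.mean()) ** 2).sum()

# Hypothetical observed and simulated flows.
obs = np.array([1.0, 3.0, 2.0, 5.0, 4.0])
sim = np.array([1.1, 2.9, 2.1, 4.8, 4.2])
print(round(nse(obs, sim), 3))  # prints 0.989
```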

  11. Hepatocyte-based flow analytical bioreactor for xenobiotics metabolism bioprediction

    Directory of Open Access Journals (Sweden)

    M Helvenstein

    2017-04-01

    Full Text Available The search for new in vitro screening tools for predictive metabolic profiling of drug candidates is of major interest in the pharmaceutical field. The main motivation is to avoid late rejection in drug development and to deliver safer drugs to the market. Thanks to the superparamagnetic properties of iron oxide nanoparticles, a flow bioreactor has been developed that is able to perform xenobiotic metabolism studies. The selected cell line (HepaRG) maintained its metabolic competencies once the iron oxide nanoparticles were internalized. Based on cells magnetically trapped in a homemade immobilization chamber, through which a flow of circulating phase was injected to transport nutrients and/or the studied xenobiotic, off-line and online (when coupled to a high-performance liquid chromatography chain) metabolic assays were developed using diclofenac as a reference compound. Diclofenac demonstrated a similar metabolization profile chromatogram with both the newly developed setup and the control situation. Highly versatile, this pioneering and innovative instrumental design paves the way for a new approach in predictive metabolism studies.

  12. Mode decomposition methods for flows in high-contrast porous media. Global-local approach

    KAUST Repository

    Ghommem, Mehdi; Presho, Michael; Calo, Victor M.; Efendiev, Yalchin R.

    2013-01-01

    In this paper, we combine concepts of the generalized multiscale finite element method (GMsFEM) and mode decomposition methods to construct a robust global-local approach for model reduction of flows in high-contrast porous media. This is achieved by implementing Proper Orthogonal Decomposition (POD) and Dynamic Mode Decomposition (DMD) techniques on a coarse grid computed using GMsFEM. The resulting reduced-order approach enables a significant reduction in the flow problem size while accurately capturing the behavior of fully-resolved solutions. We consider a variety of high-contrast coefficients and present the corresponding numerical results to illustrate the effectiveness of the proposed technique. This paper is a continuation of our work presented in Ghommem et al. (2013) [1] where we examine the applicability of POD and DMD to derive simplified and reliable representations of flows in high-contrast porous media on fully resolved models. In the current paper, we discuss how these global model reduction approaches can be combined with local techniques to speed-up the simulations. The speed-up is due to inexpensive, while sufficiently accurate, computations of global snapshots. © 2013 Elsevier Inc.

  14. Comparison of quantitative myocardial perfusion imaging CT to fluorescent microsphere-based flow from high-resolution cryo-images

    Science.gov (United States)

    Eck, Brendan L.; Fahmi, Rachid; Levi, Jacob; Fares, Anas; Wu, Hao; Li, Yuemeng; Vembar, Mani; Dhanantwari, Amar; Bezerra, Hiram G.; Wilson, David L.

    2016-03-01

    Myocardial perfusion imaging using CT (MPI-CT) has the potential to provide quantitative measures of myocardial blood flow (MBF) which can aid the diagnosis of coronary artery disease. We evaluated the quantitative accuracy of MPI-CT in a porcine model of balloon-induced LAD coronary artery ischemia guided by fractional flow reserve (FFR). We quantified MBF at baseline (FFR = 1.0) and under moderate ischemia (FFR = 0.7) using MPI-CT and compared to fluorescent microsphere-based MBF from high-resolution cryo-images. Dynamic, contrast-enhanced CT images were obtained using a spectral detector CT (Philips Healthcare). Projection-based mono-energetic images were reconstructed and processed to obtain MBF. Three MBF quantification approaches were evaluated: singular value decomposition (SVD) with fixed Tikhonov regularization (ThSVD), SVD with regularization determined by the L-curve criterion (LSVD), and Johnson-Wilson parameter estimation (JW). All three approaches over-estimated MBF compared to cryo-images. JW produced the most accurate MBF, with an average error of 33.3 ± 19.2 mL/min/100 g, whereas LSVD and ThSVD over-estimated more strongly, at 59.5 ± 28.3 mL/min/100 g and 78.3 ± 25.6 mL/min/100 g, respectively. Relative blood flow as assessed by the flow ratio of LAD-to-remote myocardium was strongly correlated between JW and cryo-imaging, with R² = 0.97, compared to R² = 0.88 and 0.78 for LSVD and ThSVD, respectively. We assessed the tissue impulse response functions (IRFs) from each approach for sources of error. While JW was constrained to physiologic solutions, both LSVD and ThSVD produced IRFs with non-physiologic properties due to noise. The L-curve provided noise-adaptive regularization but did not eliminate non-physiologic IRF properties or optimize for MBF accuracy. These findings suggest that model-based MPI-CT approaches may be more appropriate for quantitative MBF estimation and that cryo-imaging can support the development of MPI-CT by providing spatial distributions of MBF.
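The ThSVD idea referenced above (deconvolving the tissue curve by the arterial input with a Tikhonov-filtered SVD) can be sketched generically. This is a textbook-style illustration, not the study's calibrated pipeline: the signal shapes, time step, and regularization weight are invented.

```python
import numpy as np

def tikhonov_svd_deconvolve(ca, ct, dt, lam):
    """Recover the tissue impulse response R from ct = dt * (ca convolved with R)
    via SVD of the convolution matrix, damped by a Tikhonov filter."""
    n = len(ca)
    # lower-triangular (Toeplitz) convolution matrix built from the arterial input
    A = dt * np.array([[ca[i - j] if i >= j else 0.0 for j in range(n)]
                       for i in range(n)])
    U, s, Vh = np.linalg.svd(A)
    filt = s / (s**2 + (lam * s[0])**2)      # Tikhonov filter factors (damp small s)
    return Vh.T @ (filt * (U.T @ ct))

# Toy check: known exponential impulse response, noiseless data, sharp bolus arrival.
dt = 1.0
t = np.arange(60) * dt
ca = np.exp(-t / 5.0)                        # idealized arterial input
R_true = 0.8 * np.exp(-t / 15.0)             # impulse response; peak relates to flow
ct = dt * np.convolve(ca, R_true)[:len(t)]
R_est = tikhonov_svd_deconvolve(ca, ct, dt, lam=1e-3)
print(round(max(R_est), 2))                  # ≈ 0.8 (peak of the recovered response)
```

With noise, a larger `lam` is needed, and the peak (hence the flow estimate) is increasingly underestimated, which is consistent with the regularization-dependent bias the study reports.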

  15. Estimation of distribution algorithm with path relinking for the blocking flow-shop scheduling problem

    Science.gov (United States)

    Shao, Zhongshi; Pi, Dechang; Shao, Weishi

    2018-05-01

    This article presents an effective estimation of distribution algorithm, named P-EDA, to solve the blocking flow-shop scheduling problem (BFSP) with the makespan criterion. In the P-EDA, a Nawaz-Enscore-Ham (NEH)-based heuristic and the random method are combined to generate the initial population. Based on several superior individuals provided by a modified linear rank selection, a probabilistic model is constructed to describe the probabilistic distribution of the promising solution space. The path relinking technique is incorporated into EDA to avoid blindness of the search and improve the convergence property. A modified referenced local search is designed to enhance the local exploitation. Moreover, a diversity-maintaining scheme is introduced into EDA to avoid deterioration of the population. Finally, the parameters of the proposed P-EDA are calibrated using a design of experiments approach. Simulation results and comparisons with some well-performing algorithms demonstrate the effectiveness of the P-EDA for solving BFSP.
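The core EDA loop described here, learn a probabilistic model from superior individuals and then sample new permutations from it, can be sketched generically. The sketch below uses a plain position-frequency model with Laplace smoothing; it is a stand-in for, not a reproduction of, the paper's P-EDA model, and the elite permutations are toy data.

```python
import random

def build_model(elites, n_jobs, smooth=1.0):
    """Position-frequency model: P[j][k] is proportional to how often job j
    occupies position k among the selected (elite) permutations."""
    P = [[smooth] * n_jobs for _ in range(n_jobs)]
    for perm in elites:
        for pos, job in enumerate(perm):
            P[job][pos] += 1.0
    return P

def sample_perm(P, n_jobs):
    """Sample a new job permutation position by position from the model."""
    remaining = set(range(n_jobs))
    perm = []
    for pos in range(n_jobs):
        jobs = list(remaining)
        weights = [P[j][pos] for j in jobs]
        job = random.choices(jobs, weights=weights)[0]
        perm.append(job)
        remaining.remove(job)
    return perm

random.seed(1)
elites = [[0, 1, 2, 3], [0, 2, 1, 3], [0, 1, 3, 2]]   # toy elite schedules
P = build_model(elites, 4)
print(sample_perm(P, 4))   # a permutation of 0..3, biased toward job 0 first
```

In the full algorithm this sampling step would be followed by the path-relinking and referenced local search moves the abstract describes.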

  16. [Impacts of hydroelectric cascade exploitation on river ecosystem and landscape: a review].

    Science.gov (United States)

    Yang, Kun; Deng, Xi; Li, Xue-Ling; Wen, Ping

    2011-05-01

    Hydroelectric cascade exploitation, one of the major ways of exploiting water resources and developing hydropower, not only satisfies the needs of various national economic sectors and promotes the socio-economic sustainable development of a river basin, but also brings unavoidable anthropogenic impacts on the entire basin ecosystem. Based on the process of hydroelectric cascade exploitation and the ecological characteristics of river basins, this paper reviewed the major impacts of hydroelectric cascade exploitation on dam-area ecosystems, river reservoir micro-climates, riparian ecosystems, river aquatic ecosystems, wetlands, and river landscapes. Some prospects for future research were offered, e.g., strengthening the research of chain reactions and cumulative effects of ecological factors affected by hydroelectric cascade exploitation, intensifying the study of positive and negative ecological effects under dam networks and their joint operations, and improving the research of successional development and stability of basin ecosystems at different temporal and spatial scales.

  17. 3D CFD computations of transitional flows using DES and a correlation based transition model; Wind turbines

    Energy Technology Data Exchange (ETDEWEB)

    Soerensen, Niels N.

    2009-07-15

    The report describes the application of the correlation based transition model of Menter et al. [1, 2] to the cylinder drag crisis and the stalled flow over a DU-96-W-351 airfoil using the DES methodology. When predicting the flow over airfoils and rotors, the laminar-turbulent transition process can be important for the aerodynamic performance. Today, the most widespread approach is to use fully turbulent computations, where the transitional process is ignored and the entire boundary layer on the wings or airfoils is handled by the turbulence model. The correlation based transition model has lately shown promising results, and the present paper describes the application of the model to predict the drag and shedding frequency for flow around a cylinder from sub- to super-critical Reynolds numbers. Additionally, the model is applied to the flow around the DU-96 airfoil, at high angles of attack. (au)

  18. Approaching a universal scaling relationship between fracture stiffness and fluid flow

    Science.gov (United States)

    Pyrak-Nolte, Laura J.; Nolte, David D.

    2016-02-01

    A goal of subsurface geophysical monitoring is the detection and characterization of fracture alterations that affect the hydraulic integrity of a site. Achievement of this goal requires a link between the mechanical and hydraulic properties of a fracture. Here we present a scaling relationship between fluid flow and fracture-specific stiffness that approaches universality. Fracture-specific stiffness is a mechanical property dependent on fracture geometry that can be monitored remotely using seismic techniques. A Monte Carlo numerical approach demonstrates that a scaling relationship exists between flow and stiffness for fractures with strongly correlated aperture distributions, and continues to hold for fractures deformed by applied stress and by chemical erosion as well. This new scaling relationship provides a foundation for simulating changes in fracture behaviour as a function of stress or depth in the Earth and will aid risk assessment of the hydraulic integrity of subsurface sites.

  19. Performance of a vanadium redox flow battery with and without flow fields

    International Nuclear Information System (INIS)

    Xu, Q.; Zhao, T.S.; Zhang, C.

    2014-01-01

    Highlights: • The performances of a VRFB with/without flow fields are compared. • The respective maximum power efficiency occurs at different flow rates. • The battery with flow fields exhibits 5% higher energy efficiency. - Abstract: A flow field is an indispensable component for fuel cells to macroscopically distribute reactants onto electrodes. However, it is still unknown whether flow fields are also required in all-vanadium redox flow batteries (VRFBs). In this work, the performance of a VRFB with flow fields is analyzed and compared with the performance of a VRFB without flow fields. It is demonstrated that the battery with flow fields has a higher discharge voltage at higher flow rates, but exhibits a larger pressure drop. The maximum power-based efficiency occurs at different flow rates for the batteries with and without flow fields. It is found that the battery with flow fields exhibits 5% higher energy efficiency than the battery without flow fields when operating at the flow rates corresponding to each battery's maximum power-based efficiency. Therefore, the inclusion of flow fields in VRFBs can be an effective approach for improving system efficiency.

  20. Zinc fixation preserves flow cytometry scatter and fluorescence parameters and allows simultaneous analysis of DNA content and synthesis, and intracellular and surface epitopes

    DEFF Research Database (Denmark)

    Jensen, Uffe Birk; Owens, David; Pedersen, Søren

    2010-01-01

    Zinc salt-based fixation (ZBF) has proved advantageous in histochemical analyses conducted on intact tissues but has not been exploited in flow cytometry procedures that focus on quantitative analysis of individual cells. Here, we show that ZBF performs equally well to paraformaldehyde in preserving flow cytometry scatter and fluorescence parameters, allowing subsequent quantitative PCR analysis or labeling for incorporation of the thymidine analog EdU following surface and intracellular epitope staining. Finally, ZBF treatment allows for long-term storage of labeled cells with little change in these parameters. Thus, we present a protocol for zinc salt fixation of cells that allows for the simultaneous analysis of DNA and intracellular and cell surface proteins by flow cytometry.

  1. Automated Low-Cost Smartphone-Based Lateral Flow Saliva Test Reader for Drugs-of-Abuse Detection

    Directory of Open Access Journals (Sweden)

    Adrian Carrio

    2015-11-01

    Full Text Available Lateral flow assay tests are nowadays becoming powerful, low-cost diagnostic tools. Obtaining a result is usually subject to visual interpretation of colored areas on the test by a human operator, introducing subjectivity and the possibility of errors in the extraction of the results. While automated test readers providing a result-consistent solution are widely available, they usually lack portability. In this paper, we present a smartphone-based automated reader for drug-of-abuse lateral flow assay tests, consisting of an inexpensive light box and a smartphone device. Test images captured with the smartphone camera are processed in the device using computer vision and machine learning techniques to perform automatic extraction of the results. A deep validation of the system has been carried out showing the high accuracy of the system. The proposed approach, applicable to any line-based or color-based lateral flow test in the market, effectively reduces the manufacturing costs of the reader and makes it portable and massively available while providing accurate, reliable results.
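The result-extraction step described above can be caricatured in a few lines: average the strip image across its width and count dark bands (control and test lines) in the intensity profile. This is a toy grayscale sketch on a synthetic image, not the authors' computer-vision/machine-learning pipeline, which must also handle color, perspective, and lighting variation.

```python
import numpy as np

def read_strip(strip):
    """Count dark bands in a grayscale strip image: average the intensity
    across the strip width, then group columns that dip below a threshold."""
    profile = strip.mean(axis=0)                 # one value per position along the strip
    baseline = np.median(profile)
    dips = profile < 0.7 * baseline              # dark bands absorb light
    edges = np.flatnonzero(np.diff(dips.astype(int)) == 1) + 1   # rising edges
    starts = edges if not dips[0] else np.r_[0, edges]
    return len(starts)                           # number of detected lines

# Synthetic strip: bright background with two dark lines (control + test).
strip = np.full((20, 100), 200.0)
strip[:, 20:24] = 60.0     # control line
strip[:, 60:64] = 90.0     # test line
n_lines = read_strip(strip)
print("positive" if n_lines >= 2 else "negative")   # → positive
```

A missing control line (zero or one band) would mark the test invalid or negative; the interpretation convention depends on the specific assay.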

  3. Reverse engineering of a railcar prototype via energetic macroscopic representation approach

    International Nuclear Information System (INIS)

    Agbli, Kréhi Serge; Hissel, Daniel; Sorrentino, Marco; Chauvet, Frédéric; Pouget, Julien

    2016-01-01

    Highlights: • A complex EMR model of a new railcar range has been developed. • A satisfactory assessment of the fuel consumption of the railcar. • The significant potential benefits are attainable by hybridizing the original railcar. • The regenerative braking can provide up to 240 kW h saving. - Abstract: Energetic Macroscopic Representation (EMR) modelling approach is proposed to perform model-based reverse-engineering of a new railcar range, having six propulsion units, each consisting of a diesel engine and a traction motor. Particularly, EMR intrinsic features were exploited to perform phenomenological structuration of power flows, thus allowing proper and comprehensive modelling of complex systems, such as the under-study railcar. Based on some prospective real trips, selected in such a way as to enable realistic evaluation of effective railcar effort, EMR-based prediction of railcar energy consumption is performed. Furthermore, physical consistency of each powertrain component operation was carefully verified. The suitability of EMR approach was thus proven effective to perform reverse-engineering of known specifications and available experimental data, with the final aim of reconstructing a high fidelity computational tool that meets computational burden requirements for subsequent model-based tasks deployment. Finally, specific simulation analyses were performed to evaluate the potential benefits attainable through electric hybridization of the original powertrain.

  4. An automated approach for segmentation of intravascular ultrasound images based on parametric active contour models

    International Nuclear Information System (INIS)

    Vard, Alireza; Jamshidi, Kamal; Movahhedinia, Naser

    2012-01-01

    This paper presents a fully automated approach to detect the intima and media-adventitia borders in intravascular ultrasound images based on parametric active contour models. To detect the intima border, we compute a new image feature applying a combination of short-term autocorrelations calculated for the contour pixels. These feature values are employed to define an energy function of the active contour called normalized cumulative short-term autocorrelation. Exploiting this energy function, the intima border is separated accurately from the blood region contaminated by high speckle noise. To extract media-adventitia boundary, we define a new form of energy function based on edge, texture and spring forces for the active contour. Utilizing this active contour, the media-adventitia border is identified correctly even in presence of branch openings and calcifications. Experimental results indicate accuracy of the proposed methods. In addition, statistical analysis demonstrates high conformity between manual tracing and the results obtained by the proposed approaches.

  5. Mechanical disequilibria in two-phase flow models: approaches by relaxation and by a reduced model

    International Nuclear Information System (INIS)

    Labois, M.

    2008-10-01

    This thesis deals with hyperbolic models for the simulation of compressible two-phase flows, to find alternatives to the classical bi-fluid model. We first establish a hierarchy of two-phase flow models, obtained according to equilibrium hypothesis between the physical variables of each phase. The use of Chapman-Enskog expansions enables us to link the different existing models to each other. Moreover, models that take into account small physical unbalances are obtained by means of expansion to the order one. The second part of this thesis focuses on the simulation of flows featuring velocity unbalances and pressure balances, in two different ways. First, a two-velocity two-pressure model is used, where non-instantaneous velocity and pressure relaxations are applied so that a balancing of these variables is obtained. A new one-velocity one-pressure dissipative model is then proposed, where the arising of second-order terms enables us to take into account unbalances between the phase velocities. We develop a numerical method based on a fractional step approach for this model. (author)

  6. A comparison and assessment of approaches for modelling flow over in-line tube banks

    International Nuclear Information System (INIS)

    Iacovides, Hector; Launder, Brian; West, Alastair

    2014-01-01

    Highlights: • We present wall-resolved LES and URANS simulations of periodic flow in heated in-line tube banks. • Simulations of flow in a confined in-line tube-bank are compared with experimental data. • When pitch-to-diameter (P/D) ratio becomes less than 1.6, the periodic flow becomes skewed. • URANS tested here unable to mimic the periodic flow at P/D = 1.6. • In confined tube banks URANS suggest alternate, in the axial direction, flow deflection. - Abstract: The paper reports experiences from applying alternative strategies for modelling turbulent flow and local heat-transfer coefficients around in-line tube banks. The motivation is the simulation of conditions in the closely packed cross-flow heat exchangers used in advanced gas-cooled nuclear reactors (AGRs). The main objective is the flow simulation in large-scale tube banks with confining walls. The suitability and accuracy of wall-resolved large-eddy simulation (LES) and Unsteady Reynolds-Averaged Navier–Stokes (URANS) approaches are examined for generic, square, in-line tube banks, where experimental data are limited but available. Within the latter approach, both eddy-viscosity and Reynolds-stress-transport models have been tested. The assumption of flow periodicity in all three directions is investigated by varying the domain size. It is found that the path taken by the fluid through the tube-bank configuration differs according to the treatment of turbulence and whether the flow is treated as two- or three-dimensional. Finally, the important effect of confining walls has been examined by making direct comparison with the experiments of the complete test rig of Aiba et al. (1982)

  7. Energy-based operator splitting approach for the time discretization of coupled systems of partial and ordinary differential equations for fluid flows: The Stokes case

    Science.gov (United States)

    Carichino, Lucia; Guidoboni, Giovanna; Szopos, Marcela

    2018-07-01

    The goal of this work is to develop a novel splitting approach for the numerical solution of multiscale problems involving the coupling between Stokes equations and ODE systems, as often encountered in blood flow modeling applications. The proposed algorithm is based on a semi-discretization in time based on operator splitting, whose design is guided by the rationale of ensuring that the physical energy balance is maintained at the discrete level. As a result, unconditional stability with respect to the time step choice is ensured by the implicit treatment of interface conditions within the Stokes substeps, whereas the coupling between Stokes and ODE substeps is enforced via appropriate initial conditions for each substep. Notably, unconditional stability is attained without the need of subiterating between Stokes and ODE substeps. Stability and convergence properties of the proposed algorithm are tested on three specific examples for which analytical solutions are derived.
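The energy rationale behind the splitting can be illustrated on a 0D surrogate: a dissipative "Stokes-like" velocity u coupled to an ODE compartment p, with u' = -a*u - p and p' = b*u. This system and its coefficients are invented for illustration (the paper treats the full Stokes equations); the point carried over is that each substep is implicit, so the discrete energy never grows, for any time step.

```python
def split_step(u, p, dt, a=1.0, b=2.0):
    """One Lie-splitting step for u' = -a*u - p, p' = b*u.
    Substep 1 (dissipation, implicit Euler): u' = -a*u.
    Substep 2 (coupling, implicit Euler):    u' = -p, p' = b*u."""
    u = u / (1.0 + a * dt)                            # substep 1: contracts |u|
    p_new = (p + b * dt * u) / (1.0 + b * dt * dt)    # substep 2: solve the 2x2 implicit system
    u_new = u - dt * p_new
    return u_new, p_new

def energy(u, p, b=2.0):
    """Discrete energy: kinetic part + compartment part."""
    return 0.5 * u * u + p * p / (2.0 * b)

u, p, dt = 1.0, 0.0, 0.5                              # deliberately large time step
E = [energy(u, p)]
for _ in range(100):
    u, p = split_step(u, p, dt)
    E.append(energy(u, p))
print(all(E[i + 1] <= E[i] + 1e-12 for i in range(100)))   # → True: energy never grows
```

Here the coupling substep is a scaled rotation in the energy norm, so implicit Euler contracts it by 1/sqrt(1 + b*dt²); no subiteration between the two substeps is needed, mirroring the property claimed in the abstract.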

  8. Low energy consumption vortex wave flow membrane bioreactor.

    Science.gov (United States)

    Wang, Zhiqiang; Dong, Weilong; Hu, Xiaohong; Sun, Tianyu; Wang, Tao; Sun, Youshan

    2017-11-01

    In order to reduce the energy consumption and membrane fouling of the conventional membrane bioreactor (MBR), a low-energy-consumption vortex wave flow MBR was developed, combining a biofilm process and a membrane filtration process with the vortex wave flow technique. The experimental results showed that the vortex wave flow state in the membrane module could be formed when the Reynolds number (Re) of the liquid was adjusted between 450 and 1,050, and the membrane flux declined more slowly in the vortex wave flow state than in the laminar flow state and turbulent flow state. The MBR system was used to treat domestic wastewater under vortex wave flow conditions for 30 days. The results showed that the removal efficiency for CODcr and NH3-N was 82% and 98%, respectively, and the permeate quality met the requirement of 'Water quality standard for urban miscellaneous water consumption (GB/T 18920-2002)'. Analysis of the energy consumption of the MBR showed that the average energy consumption was 1.90 ± 0.55 kWh/m³ (permeate), which was only two thirds of conventional MBR energy consumption.
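The operating window quoted above (Re between 450 and 1,050) is a plain Reynolds-number criterion, Re = ρ·v·d/μ. A quick check with illustrative values for water and an assumed channel dimension (the paper's actual geometry is not given here):

```python
def reynolds(rho, v, d, mu):
    """Re = rho * v * d / mu for channel flow.
    rho [kg/m^3], v [m/s], d [m], mu [Pa*s]."""
    return rho * v * d / mu

# Water at roughly 20 °C in an assumed 10 mm channel (illustrative values only).
rho, mu, d = 998.0, 1.0e-3, 0.01
for v in (0.05, 0.15):
    re = reynolds(rho, v, d, mu)
    regime = "inside the 450-1050 vortex wave window" if 450 <= re <= 1050 else "outside the window"
    print(f"v = {v:.2f} m/s -> Re = {re:.0f} ({regime})")
```

In practice the liquid velocity (or flow rate) would be tuned so that Re lands inside the window where the vortex wave flow state forms.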

  9. Simultaneous ultrasound and photoacoustics based flow cytometry

    Science.gov (United States)

    Gnyawali, Vaskar; Strohm, Eric M.; Tsai, Scott S. H.; Kolios, Michael C.

    2018-04-01

    We have developed a flow cytometer based on simultaneous detection of ultrasound and photoacoustic waves from individual particles/cells flowing in a microfluidic channel. Our polydimethylsiloxane (PDMS) based hydrodynamic 3-dimensional (3D) flow-focusing microfluidic device contains a cross-junction channel, a micro-needle (ID 100 μm and OD 200 μm) insert, and a 3D printed frame to hold and align a high frequency (center frequency 375 MHz) ultrasound transducer. The focused flow passes through a narrow focal zone with lateral and axial focal lengths of 6-8 μm and 15-20 μm, respectively. Both the lateral and axial alignments are achieved by screwing the transducer to the frame onto the PDMS device. Individual particles pass through an interrogation zone in the microfluidic channel with a collinearly aligned ultrasound transducer and a focused 532 nm wavelength laser beam. The particles are simultaneously insonified by high-frequency ultrasound and irradiated by a laser beam. The ultrasound backscatter and laser generated photoacoustic waves are detected for each passing particle. The backscattered ultrasound and photoacoustic signal are strongly dependent on the size, morphology, mechanical properties, and material properties of the flowing particles; these parameters can be extracted by analyzing unique features in the power spectrum of the signals. Frequencies less than 100 MHz do not have these unique spectral signatures. We show that we can reliably distinguish between different particles in a sample using the acoustic-based flow cytometer. This technique, when extended to biomedical applications, allows us to rapidly analyze the spectral signatures from individual single cells of a large cell population, with applications towards label-free detection and characterization of healthy and diseased cells.

  10. Potential effects of elevated base flow and midsummer spike flow experiments on riparian vegetation along the Green River

    Science.gov (United States)

    Friedman, Jonathan M.

    2018-01-01

    The Upper Colorado River Endangered Fish Recovery Program has requested experimental flow releases from Flaming Gorge Dam for (1) elevated summer base flows to promote larval endangered Colorado pikeminnow, and (2) midsummer spike flows to disadvantage spawning invasive smallmouth bass. This white paper explores the effects of these proposed flow modifications on riparian vegetation and sediment deposition downstream along the Green River. Although modest in magnitude, the elevated base flows and possible associated reductions in magnitude or duration of peak flows would exacerbate a long-term trend of flow stabilization on the Green River that is already leading to proliferation of vegetation including invasive tamarisk along the channel and associated sediment deposition, channel narrowing and channel simplification. Midsummer spike flows could promote establishment of late-flowering plants like tamarisk. Because channel narrowing and simplification threaten persistence and quality of backwater and side channel features needed by endangered fish, the proposed flow modifications could lead to degradation of fish habitat. Channel narrowing and vegetation encroachment could be countered by increases in peak flows or reductions in base flows in some years and by prescription of rapid flow declines following midsummer spike flows. These strategies for reducing vegetation encroachment would need to be balanced with flow

  11. A Multiple-Iterated Dual Control Model for Groundwater Exploitation and Water Level Based on the Optimal Allocation Model of Water Resources

    Directory of Open Access Journals (Sweden)

    Junqiu Liu

    2018-04-01

    Full Text Available In order to mitigate environmental and ecological impacts resulting from groundwater overexploitation, we developed a multiple-iterated dual control model consisting of four modules for groundwater exploitation and water level. First, a water resources allocation model integrating a calculation module of groundwater allowable withdrawal was built to predict future groundwater recharge and discharge. Then, the results were input into a groundwater numerical model to simulate water levels. Groundwater exploitation was continuously optimized using the critical groundwater level as the feedback, and a groundwater multiple-iterated technique was applied to the feedback process. The proposed model was successfully applied to a typical region in Shenyang in northeast China. Results showed that the groundwater numerical model was well verified for simulating water levels, with a mean absolute error of 0.44 m, an average relative error of 1.33%, and a root-mean-square error of 0.46 m. The groundwater exploitation was reduced from 290.33 million m³ to 116.76 million m³ and the average water level recovered from 34.27 m to 34.72 m in the planning year. Finally, we proposed strategies for water resources management in which the water levels should be controlled within the critical groundwater level. The developed model provides a promising approach for water resources allocation and sustainable groundwater management, especially for regions with overexploited groundwater.
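The feedback loop at the heart of the approach (simulate water levels, compare to the critical level, trim exploitation, iterate) can be caricatured with a stand-in response model. The linear level-response function and the 5% cut per iteration below are invented for illustration; the actual study uses a calibrated groundwater numerical model in place of `simulate_level`.

```python
def simulate_level(pumping_mm3):
    """Stand-in for the groundwater numerical model: a linear response of the
    average water level (m) to annual pumping (million m^3). Illustrative only."""
    return 35.1 - 0.0033 * pumping_mm3

def iterate_exploitation(pumping, critical_level, cut=0.05):
    """Multiple-iteration feedback: reduce pumping by a fixed fraction per round
    until the simulated level clears the critical groundwater level."""
    while simulate_level(pumping) < critical_level:
        pumping *= 1.0 - cut
    return pumping

p = iterate_exploitation(290.33, critical_level=34.7)
print(round(p, 1), "million m^3 ->", round(simulate_level(p), 2), "m")
```

The real model replaces both the level response and the adjustment rule with physically based modules, but the convergence logic (exploitation constrained by the critical water level) is the same.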

  12. Industrial energy-flow management

    International Nuclear Information System (INIS)

    Lampret, Marko; Bukovec, Venceslav; Paternost, Andrej; Krizman, Srecko; Lojk, Vito; Golobic, Iztok

    2007-01-01

    Deregulation of the energy market has created new opportunities for the development of new energy-management methods based on energy assets, risk management, energy efficiency and sustainable development. Industrial energy-flow management in pharmaceutical systems, with a responsible approach to sustainable development, is a complex task. For this reason, an energy-information centre, with over 14,000 online measured data/nodes, was implemented. This paper presents the energy-flow rate, exergy-flow rate and cost-flow rate diagrams, with emphasis on cost-flow rate per energy unit or exergy unit of complex pharmaceutical systems

  13. The largest renewable, easily exploitable, and economically sustainable energy resource

    Science.gov (United States)

    Abbate, Giancarlo; Saraceno, Eugenio

    2018-02-01

    Sun, the ultimate energy resource of our planet, transfers energy to the Earth at an average power of 23,000 TW. Earth surface can be regarded as a huge panel transforming solar energy into a more convenient mechanical form, the wind. Since millennia wind is recognized as an exploitable form of energy and it is common knowledge that the higher you go, the stronger the winds flow. To go high is difficult; however Bill Gates cites high wind among possible energy miracles in the near future. Public awareness of this possible miracle is still missing, but today's technology is ready for it.

  14. The Hidden Flow Structure and Metric Space of Network Embedding Algorithms Based on Random Walks.

    Science.gov (United States)

    Gu, Weiwei; Gong, Li; Lou, Xiaodan; Zhang, Jiang

    2017-10-13

    Network embedding, which encodes all vertices in a network as a set of numerical vectors in accordance with its local and global structures, has drawn widespread attention. Network embedding not only learns significant features of a network, such as clustering and link prediction, but also learns the latent vector representation of the nodes, which provides theoretical support for a variety of applications, such as visualization, link prediction, node classification, and recommendation. As the latest progress in this research, several algorithms based on random walks have been devised. Although those algorithms have drawn much attention for their high scores in learning efficiency and accuracy, there is still a lack of theoretical explanation, and the transparency of those algorithms has been doubted. Here, we propose an approach based on the open-flow network model to reveal the underlying flow structure and its hidden metric space of different random walk strategies on networks. We show that the essence of embedding based on random walks is the latent metric structure defined on the open-flow network. This not only deepens our understanding of random-walk-based embedding algorithms but also helps in finding new potential applications in network embedding.
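The "flow" view of random-walk embeddings can be made concrete: counting directed-edge traversals over many truncated walks yields an edge-flow matrix of the kind an open-flow analysis is built on. A minimal sketch on a toy graph, using uniform next-step choice as in DeepWalk-style walks (this illustrates the flow structure only, not the paper's open-flow network model itself):

```python
import numpy as np

def walk_flow(adj, n_walks=2000, length=10, seed=0):
    """Empirical edge-flow matrix from truncated random walks:
    F[i, j] counts traversals of the directed edge i -> j, normalized to sum to 1."""
    rng = np.random.default_rng(seed)
    n = len(adj)
    F = np.zeros((n, n))
    for _ in range(n_walks):
        node = rng.integers(n)                 # uniformly chosen walk start
        for _ in range(length):
            nbrs = np.flatnonzero(adj[node])
            if nbrs.size == 0:
                break
            nxt = rng.choice(nbrs)             # uniform next-step choice
            F[node, nxt] += 1
            node = nxt
    return F / F.sum()

# Toy graph: a triangle (nodes 0, 1, 2) plus a pendant node 3 attached to node 0.
adj = np.array([[0, 1, 1, 1],
                [1, 0, 1, 0],
                [1, 1, 0, 0],
                [1, 0, 0, 0]])
F = walk_flow(adj)
print(F.round(3))   # directed-edge flow estimates; exactly zero where no edge exists
```

These flows approach the stationary edge flows of the walk (start-node visit rate times transition probability), and it is this induced structure, rather than the raw adjacency, that the embedding's latent metric reflects.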

  15. Integral representation in the hodograph plane of compressible flow

    DEFF Research Database (Denmark)

    Hansen, Erik Bent; Hsiao, G.C.

    2003-01-01

    Compressible flow is considered in the hodograph plane. The linearity of the equation determining the stream function is exploited to derive a representation formula involving boundary data only, and a fundamental solution to the adjoint equation. For subsonic flow, an efficient algorithm...

  16. The ESA Scientific Exploitation of Operational Missions element

    Science.gov (United States)

    Desnos, Yves-Louis; Regner, Peter; Delwart, Steven; Benveniste, Jerome; Engdahl, Marcus; Zehner, Claus; Mathieu, Pierre-Philippe; Bojkov, Bojan; Gascon, Ferran; Donlon, Craig; Davidson, Malcolm; Goryl, Philippe; Pinnock, Simon

    2015-04-01

SEOM is a program element within the fourth period (2013-2017) of ESA's Earth Observation Envelope Programme (http://seom.esa.int/). The prime objective is to federate, support and expand the international research community that the ERS, ENVISAT and the Envelope programmes have built up over the last 25 years. It aims to further strengthen the leadership of the European Earth Observation research community by enabling them to extensively exploit future European operational EO missions. SEOM will enable the science community to address new scientific research questions that are opened by free and open access to data from operational EO missions. Based on community-wide recommendations for actions on key research issues, gathered through a series of international thematic workshops and scientific user consultation meetings, a work plan has been established and is approved every year by ESA Member States. The 2015 SEOM work plan covers the organisation of three science user consultation workshops for Sentinel-1/-3/-5P, the launch of new R&D studies for scientific exploitation of the Sentinels, the development of open-source multi-mission scientific toolboxes, the organisation of advanced international training courses, summer schools and educational materials, as well as activities for promoting the scientific use of EO data. The first SEOM projects have been tendered since 2013, including the development of Sentinel toolboxes, advanced INSAR algorithms for Sentinel-1 TOPS data exploitation, the Improved Atmospheric Spectroscopic database (IAS), as well as grouped studies for Sentinel-1, -2, and -3 land and ocean applications and studies for exploiting the synergy between the Sentinels. The status and first results from these SEOM projects will be presented and an outlook for upcoming SEOM studies will be given.

  17. Robust-mode analysis of hydrodynamic flows

    Science.gov (United States)

    Roy, Sukesh; Gord, James R.; Hua, Jia-Chen; Gunaratne, Gemunu H.

    2017-04-01

The emergence of techniques to extract high-frequency, high-resolution data introduces a new avenue for modal decomposition to assess the underlying dynamics, especially of complex flows. However, this task requires the differentiation of robust, repeatable flow constituents from noise and other irregular features of a flow. Traditional approaches involving low-pass filtering and principal component analysis have shortcomings. The approach outlined here, referred to as robust-mode analysis, is based on Koopman decomposition. Three applications to (a) a counter-rotating cellular flame state, (b) variations in financial markets, and (c) turbulent injector flows are provided.
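Koopman decomposition is commonly approximated in practice by dynamic mode decomposition (DMD). The sketch below applies exact DMD to noise-free synthetic snapshots from a known linear map, so the recovered eigenvalues can be checked against ground truth; the data and operator are illustrative, not the authors' flows.

```python
import numpy as np

# Synthetic snapshot sequence from a known linear map x_{k+1} = A x_k,
# standing in for high-frequency flow measurements (illustrative data).
A_true = np.array([[0.9, -0.1],
                   [0.0,  0.8]])
X = np.empty((2, 50))
X[:, 0] = [1.0, 0.5]  # initial state exciting both modes
for k in range(49):
    X[:, k + 1] = A_true @ X[:, k]

# Exact DMD (a standard finite-dimensional Koopman approximation):
# fit the linear operator that maps each snapshot to the next.
X1, X2 = X[:, :-1], X[:, 1:]
U, s, Vh = np.linalg.svd(X1, full_matrices=False)
A_dmd = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ U.conj().T

eigvals = np.linalg.eigvals(A_dmd)
# Robust modes are those whose eigenvalues persist across repeated
# experiments; here the noise-free data recover the true values 0.8 and 0.9.
print(np.sort(eigvals.real))
```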

  18. Balancing exploration and exploitation in transferring research into practice: a comparison of five knowledge translation entity archetypes.

    Science.gov (United States)

    Oborn, Eivor; Barrett, Michael; Prince, Karl; Racko, Girts

    2013-09-05

Translating knowledge from research into clinical practice has emerged as a practice of increasing importance. This has led to the creation of new organizational entities designed to bridge knowledge between research and practice. Within the UK, the Collaborations for Leadership in Applied Health Research and Care (CLAHRC) have been introduced to ensure that emphasis is placed on translating and implementing research more effectively in clinical practice. Knowledge translation (KT) can be accomplished in various ways and is affected by the structures, activities, and coordination practices of organizations. We draw on concepts in the innovation literature--namely exploration, exploitation, and ambidexterity--to examine these structures and activities as well as the ensuing tensions between research and implementation. Using a qualitative research approach, the study was based on 106 semi-structured, in-depth interviews with the directors, theme leads and managers, and key professionals involved in research and implementation in nine CLAHRCs. Data were also collected from intensive focus group workshops. In this article we develop five archetypes for organizing KT. The results show how the various CLAHRC entities work through partnerships to create explorative research and deliver exploitative implementation. The different archetypes highlight a range of structures that can achieve ambidextrous balance as they organize activity and coordinate practice on a continuum of exploration and exploitation. This work suggests that KT entities aim to reach their goals through a balance between exploration and exploitation in the support of generating new research and ensuring knowledge implementation. We highlight different organizational archetypes that support various ways to maintain ambidexterity, where both exploration and exploitation are supported in an attempt to narrow the knowledge gaps. The KT entity archetypes offer insights on strategies in structuring

  19. Rarefied gas flow in a rectangular enclosure induced by non-isothermal walls

    Energy Technology Data Exchange (ETDEWEB)

    Vargas, Manuel; Tatsios, Giorgos; Valougeorgis, Dimitris, E-mail: diva@mie.uth.gr [Department of Mechanical Engineering, University of Thessaly, 38334 Volos (Greece); Stefanov, Stefan [Institute of Mechanics, Bulgarian Academy of Sciences, Sofia (Bulgaria)

    2014-05-15

The flow of a rarefied gas in a rectangular enclosure due to non-isothermal walls, with no synergetic contributions from external force fields, is investigated. The top and bottom walls are maintained at constant but different temperatures, and along the lateral walls a linear temperature profile is assumed. Modeling is based on the direct numerical solution of the Shakhov kinetic equation and on the Direct Simulation Monte Carlo (DSMC) method. Solving the problem both deterministically and stochastically allows a systematic comparison and verification of the results, as well as the exploitation of the numerical advantages of each approach in the investigation of the involved flow and heat transfer phenomena. The thermally induced flow is simulated in terms of the three dimensionless parameters characterizing the problem, namely, the reference Knudsen number, the temperature ratio of the bottom over the top plate, and the enclosure aspect ratio. Their effect on the flow configuration and bulk quantities is thoroughly examined. Along the side walls, the gas flows at small Knudsen numbers from cold to hot, while as the Knudsen number is increased the gas flows from hot to cold and the thermally induced flow configuration becomes more complex. These flow patterns, with the hot-to-cold flow extending along the whole length of the non-isothermal side walls, may exist even at small temperature differences, and they are then enhanced as the temperature difference between the top and bottom plates is increased. The cavity aspect ratio also influences this flow configuration, and the hot-to-cold flow becomes more dominant as the depth of the cavity is increased relative to its width. To further analyze the flow patterns, a novel solution decomposition into ballistic and collision parts is introduced. This is achieved by accordingly modifying the indexing process of the typical DSMC algorithm. The contribution of each part of the solution is separately examined and a physical
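The ballistic/collision decomposition can be illustrated by tagging each simulated particle the first time it collides and then sampling the two populations separately. The toy below is deliberately simplified: the 1-D geometry, fixed collision probability, and velocity distributions are hypothetical stand-ins, not the paper's DSMC setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 1-D toy: particles leave a wall at x = 0 and move ballistically
# until a (randomly sampled) collision re-indexes them into the collision part.
n = 10_000
x = np.zeros(n)                       # positions
v = np.abs(rng.normal(1.0, 0.3, n))   # speeds away from the wall
collided = np.zeros(n, dtype=bool)    # DSMC-style index: ballistic vs collision part

for _ in range(50):                   # time marching
    x += v * 0.01
    # Toy collision model: a fixed per-step collision probability.
    hits = rng.random(n) < 0.02
    v[hits] = np.abs(rng.normal(0.8, 0.3, hits.sum()))  # post-collision resample
    collided |= hits                  # once collided, a particle leaves the ballistic part

# Macroscopic quantities of each part are then sampled separately:
mean_v_ballistic = v[~collided].mean()
mean_v_collision = v[collided].mean()
print(mean_v_ballistic, mean_v_collision)
```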

  20. SSC-EKE: Semi-Supervised Classification with Extensive Knowledge Exploitation.

    Science.gov (United States)

    Qian, Pengjiang; Xi, Chen; Xu, Min; Jiang, Yizhang; Su, Kuan-Hao; Wang, Shitong; Muzic, Raymond F

    2018-01-01

We introduce a new, semi-supervised classification method that extensively exploits knowledge. The method has three steps. First, the manifold regularization mechanism, adapted from the Laplacian support vector machine (LapSVM), is adopted to mine the manifold structure embedded in all training data, especially in numerous label-unknown data. Meanwhile, by converting the labels into pairwise constraints, the pairwise constraint regularization formula (PCRF) is designed to compensate for the few but valuable labelled data. Second, by further combining the PCRF with the manifold regularization, the precise manifold and pairwise constraint jointly regularized formula (MPCJRF) is achieved. Third, by incorporating the MPCJRF into the framework of the conventional SVM, our approach, referred to as semi-supervised classification with extensive knowledge exploitation (SSC-EKE), is developed. The significance of our research is fourfold: 1) The MPCJRF is an underlying adjustment, with respect to the pairwise constraints, to the graph Laplacian enlisted for approximating the potential data manifold. This type of adjustment plays the correction role, as an unbiased estimation of the data manifold is difficult to obtain, whereas the pairwise constraints, converted from the given labels, have an overall high confidence level. 2) By transforming the values of the two terms in the MPCJRF such that they have the same range, with a trade-off factor varying within the invariant interval [0, 1), the appropriate impact of the pairwise constraints on the graph Laplacian can be self-adaptively determined. 3) The implication regarding extensive knowledge exploitation is embodied in SSC-EKE. That is, the labelled examples are used not only to control the empirical risk but also to constitute the MPCJRF. Moreover, all data, both labelled and unlabelled, are recruited for the model smoothness and manifold regularization. 4) The complete framework of SSC-EKE organically incorporates multiple
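A minimal sketch of the joint-regulariser idea: a Gaussian-kernel graph Laplacian for the manifold term combined with a Laplacian-like encoding of must-link/cannot-link constraints. The data, weights, and trade-off factor are illustrative assumptions, not SSC-EKE's actual formulation.

```python
import numpy as np

# Toy data: six points in two clusters, with one must-link and one
# cannot-link constraint derived from hypothetical labels.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [3.0, 3.0], [3.1, 3.0], [3.0, 3.1]])
n = len(X)

# Graph Laplacian L = D - W from a Gaussian-kernel affinity matrix.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(1)) - W

# Pairwise-constraint matrix: +1 on must-link pairs, -1 on cannot-link pairs.
M = np.zeros((n, n))
M[0, 1] = M[1, 0] = 1.0    # must-link
M[0, 3] = M[3, 0] = -1.0   # cannot-link
L_pc = np.diag(M.sum(1)) - M   # Laplacian-like form of the constraints

# Joint regulariser: a convex combination with trade-off t in [0, 1).
t = 0.5
L_joint = (1 - t) * L + t * L_pc

# Smoothness penalty f^T L_joint f for a candidate labelling f.
# A low value (negative here, because the cannot-link term rewards
# separating the constrained pair) means f respects manifold and constraints.
f = np.array([1, 1, 1, -1, -1, -1], dtype=float)
print(f @ L_joint @ f)
```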

  1. VIP - A Framework-Based Approach to Robot Vision

    Directory of Open Access Journals (Sweden)

    Gerd Mayer

    2008-11-01

Full Text Available For robot perception, video cameras are very valuable sensors, but the computer vision methods applied to extract information from camera images are usually computationally expensive. Integrating computer vision methods into a robot control architecture requires balancing the exploitation of camera images against the need to preserve reactivity and robustness. We claim that better software support is needed in order to facilitate and simplify the application of computer vision and image processing methods on autonomous mobile robots. In particular, such support must address a simplified specification of image processing architectures, control and synchronization issues of image processing steps, and the integration of the image processing machinery into the overall robot control architecture. This paper introduces the video image processing (VIP framework, a software framework for multithreaded control flow modeling in robot vision.

  2. VIP - A Framework-Based Approach to Robot Vision

    Directory of Open Access Journals (Sweden)

    Hans Utz

    2006-03-01

Full Text Available For robot perception, video cameras are very valuable sensors, but the computer vision methods applied to extract information from camera images are usually computationally expensive. Integrating computer vision methods into a robot control architecture requires balancing the exploitation of camera images against the need to preserve reactivity and robustness. We claim that better software support is needed in order to facilitate and simplify the application of computer vision and image processing methods on autonomous mobile robots. In particular, such support must address a simplified specification of image processing architectures, control and synchronization issues of image processing steps, and the integration of the image processing machinery into the overall robot control architecture. This paper introduces the video image processing (VIP framework, a software framework for multithreaded control flow modeling in robot vision.
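The multithreaded control-flow idea can be sketched as a producer-consumer pipeline in which a capture stage and a processing stage communicate through bounded queues, decoupling expensive vision work from a reactive control loop. Everything here (stage names, queue sizes, the string stand-ins for images) is illustrative and is not the VIP API.

```python
import queue
import threading

frames = queue.Queue(maxsize=4)   # bounded: back-pressure keeps latency in check
results = queue.Queue()

def capture(n_frames):
    """Producer stage: emits frames, then a sentinel to end the stream."""
    for i in range(n_frames):
        frames.put(f"frame-{i}")  # stand-in for a camera image
    frames.put(None)

def filter_stage():
    """Consumer stage: runs the (expensive) vision step on each frame."""
    while True:
        frame = frames.get()
        if frame is None:
            results.put(None)
            break
        results.put(frame.upper())  # stand-in for real image processing

threads = [threading.Thread(target=capture, args=(3,)),
           threading.Thread(target=filter_stage)]
for t in threads:
    t.start()
for t in threads:
    t.join()

out = []
while (item := results.get()) is not None:
    out.append(item)
print(out)  # ['FRAME-0', 'FRAME-1', 'FRAME-2']
```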

  3. An Exploitation of Satellite-based Observation for Health Information: The UFOS Project

    Energy Technology Data Exchange (ETDEWEB)

Mangin, A.; Morel, M.; Fanton d'Andon, O.

    2000-07-01

Short, medium and long-term trends of UV intensity levels are of crucial importance for assessing effective biological impacts on the human population and for implementing adequate preventive behaviours. Better information on a large spatial scale and increased public awareness of the short-term variations in UV values will help to support health agencies' goals of educating the public on UV risks. The Ultraviolet Forecast Operational Service Project (UFOS), financed in part by the European Commission/DG Information Society (TEN-TELECOM programme), aims to exploit satellite-based observations and to supply a set of UV products directly useful to health care. The short-term objective is to demonstrate the technical and economic feasibility and the benefits that such a system could bring. UFOS is carried out by ACRI, with the support of an Advisory Group chaired by WHO and involving representation from the sectors of Health (WHO, INTERSUN collaborating centres, ZAMBON), Environment (WMO, IASB), and Telecommunications (EURECOM, IMET). (author)

  4. Conference on the exploitation, maintenance and resale of ground-based photovoltaic plants

    International Nuclear Information System (INIS)

    Roesner, Sven; Christmann, Ralf; Bozonnat, Cedric; Le Pivert, Xavier; Vaassen, Willi; Dumoulin, Cedric; Kiefer, Klaus; Semmel, Andreas; Doose, Eckhard; Bion, Alain; Sanches, Frederico; Daval, Xavier; Pampouille, Antoine; Goetze, Holger; Stahl, Wolf-Ruediger; Merere, Karine

    2017-11-01

This document gathers contributions and debate contents of a conference. A first set of contributions addressed the situation and recent developments of ground-based photovoltaic power plants in France and in Germany, with presentations of the legal frameworks in both countries. The second set addressed the optimisation of such power plants: meteorological prediction and follow-up at the service of production, risks to which these power plants are exposed during operation, and the issue of the right price and good practices for maintenance contracts for these plants. A round table addressed the issue of the balance between optimisation and established practices in a new economic framework. The next set of contributions addressed reasons for and effects of the resale of photovoltaic fleets during their exploitation: actors and financing solutions, value components, points of attention, and a legal view on re-financing contracts. A round table discussed trends and success factors for the re-financing of photovoltaic projects.

  5. An Exploitation of Satellite-based Observation for Health Information: The UFOS Project

    International Nuclear Information System (INIS)

    Mangin, A.; Morel, M.; Fanton d'Andon, O.

    2000-01-01

Short, medium and long-term trends of UV intensity levels are of crucial importance for assessing effective biological impacts on the human population and for implementing adequate preventive behaviours. Better information on a large spatial scale and increased public awareness of the short-term variations in UV values will help to support health agencies' goals of educating the public on UV risks. The Ultraviolet Forecast Operational Service Project (UFOS), financed in part by the European Commission/DG Information Society (TEN-TELECOM programme), aims to exploit satellite-based observations and to supply a set of UV products directly useful to health care. The short-term objective is to demonstrate the technical and economic feasibility and the benefits that such a system could bring. UFOS is carried out by ACRI, with the support of an Advisory Group chaired by WHO and involving representation from the sectors of Health (WHO, INTERSUN collaborating centres, ZAMBON), Environment (WMO, IASB), and Telecommunications (EURECOM, IMET). (author)

  6. The RiverFish Approach to Business Process Modeling: Linking Business Steps to Control-Flow Patterns

    Science.gov (United States)

    Zuliane, Devanir; Oikawa, Marcio K.; Malkowski, Simon; Alcazar, José Perez; Ferreira, João Eduardo

Despite the recent advances in the area of Business Process Management (BPM), today’s business processes have largely been implemented without clearly defined conceptual modeling. This results in growing difficulties for identification, maintenance, and reuse of rules, processes, and control-flow patterns. To mitigate these problems in future implementations, we propose a new approach to business process modeling using conceptual schemas, which represent hierarchies of concepts for rules and processes shared among collaborating information systems. This methodology bridges the gap between conceptual model description and identification of actual control-flow patterns for workflow implementation. We identify modeling guidelines that are characterized by clear phase separation, step-by-step execution, and process building through diagrams and tables. The separation of business process modeling into seven mutually exclusive phases clearly delimits information technology from business expertise. The sequential execution of these phases leads to the step-by-step creation of complex control-flow graphs. The process model is refined through intuitive table and diagram generation in each phase. Not only does the rigorous application of our modeling framework minimize the impact of rule and process changes, but it also facilitates the identification and maintenance of control-flow patterns in BPM-based information system architectures.

  7. Flexible temperature and flow sensor from laser-induced graphene

    KAUST Repository

    Marengo, Marco

    2017-12-25

Herein we present a flexible temperature sensor and a flow speed sensor based on laser-induced graphene. The main benefits arise from the peculiar electrical, thermal and mechanical performance of the material thus obtained, along with a cheap and simple fabrication process. The temperature sensor is a negative temperature coefficient thermistor with the non-linear response typical of semi-metals. The thermistor shows a 4% decrease in resistance over the 20–60 °C temperature range. The flow sensor exploits the piezoresistive properties of laser-induced graphene and can be used both in gaseous and liquid media thanks to a protective polydimethylsiloxane coating. Its main characteristics are an ultra-fast response and the design versatility offered by the laser technology.
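For a rough feel of the reported behaviour, the standard beta model for NTC thermistors can be fitted to the stated 4% drop. The reference resistance and the beta constant below are assumed for illustration, not taken from the paper; the unusually small beta reflects the weak, semi-metal-like temperature dependence described.

```python
import numpy as np

# NTC thermistor beta model: R(T) = R0 * exp(B * (1/T - 1/T0)).
R0 = 1_000.0   # resistance at the reference temperature, ohms (assumed)
T0 = 293.15    # 20 °C in kelvin
B = 100.0      # beta constant, kelvin (illustrative; fitted to the ~4% drop)

def resistance(T_celsius):
    """Resistance predicted by the beta model at a given temperature."""
    T = T_celsius + 273.15
    return R0 * np.exp(B * (1.0 / T - 1.0 / T0))

drop = 1.0 - resistance(60.0) / resistance(20.0)
print(f"relative drop over 20-60 C: {drop:.1%}")  # ≈ 4%
```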

  8. Optimal inverse magnetorheological damper modeling using shuffled frog-leaping algorithm–based adaptive neuro-fuzzy inference system approach

    Directory of Open Access Journals (Sweden)

    Xiufang Lin

    2016-08-01

Full Text Available Magnetorheological dampers have become prominent semi-active control devices for vibration mitigation of structures which are subjected to severe loads. However, the damping force cannot be controlled directly due to the inherent nonlinear characteristics of the magnetorheological dampers. Therefore, to fully exploit the capabilities of the magnetorheological dampers, one of the challenging aspects is to develop an accurate inverse model which can appropriately predict the input voltage to control the damping force. In this article, a hybrid modeling strategy combining the shuffled frog-leaping algorithm and the adaptive-network-based fuzzy inference system is proposed to model the inverse dynamic characteristics of the magnetorheological dampers with improved accuracy. The shuffled frog-leaping algorithm is employed to optimize the premise parameters of the adaptive-network-based fuzzy inference system while the consequent parameters are tuned by a least-squares estimation method; the combination is referred to here as the shuffled frog-leaping algorithm-based adaptive-network-based fuzzy inference system approach. To evaluate the effectiveness of the proposed approach, the inverse modeling results based on the shuffled frog-leaping algorithm-based adaptive-network-based fuzzy inference system approach are compared with those based on the adaptive-network-based fuzzy inference system and genetic algorithm–based adaptive-network-based fuzzy inference system approaches. An analysis of variance test is carried out to statistically compare the performance of the proposed methods, and the results demonstrate that the shuffled frog-leaping algorithm-based adaptive-network-based fuzzy inference system strategy outperforms the other two methods in terms of modeling (training accuracy and checking accuracy.
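A minimal shuffled frog-leaping sketch on a toy objective, showing the characteristic rank-shuffle-leap loop. The population sizes, step rule, and random-reset strategy are simplified assumptions for illustration, not the authors' implementation or parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

def sfla(objective, dim, n_frogs=30, n_memeplexes=3, iters=50):
    """Minimal shuffled frog-leaping algorithm sketch (illustrative)."""
    frogs = rng.uniform(-5, 5, (n_frogs, dim))
    for _ in range(iters):
        # Rank all frogs and deal them into memeplexes round-robin ("shuffle").
        order = np.argsort([objective(f) for f in frogs])
        frogs = frogs[order]
        for m in range(n_memeplexes):
            idx = np.arange(m, n_frogs, n_memeplexes)
            best, worst = idx[0], idx[-1]
            # Local search: the worst frog leaps toward the memeplex best.
            trial = frogs[worst] + rng.random() * (frogs[best] - frogs[worst])
            if objective(trial) < objective(frogs[worst]):
                frogs[worst] = trial
            else:
                # Failed leap: replace the worst frog with a random one.
                frogs[worst] = rng.uniform(-5, 5, dim)
    order = np.argsort([objective(f) for f in frogs])
    return frogs[order[0]]

best = sfla(lambda x: ((x - 1.0) ** 2).sum(), dim=2)
print(best)  # should land near the optimum at [1, 1]
```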

  9. A modified GO-FLOW methodology with common cause failure based on Discrete Time Bayesian Network

    International Nuclear Information System (INIS)

    Fan, Dongming; Wang, Zili; Liu, Linlin; Ren, Yi

    2016-01-01

Highlights: • Identification of particular causes of failure for common cause failure analysis. • Comparison of two formalisms (GO-FLOW and Discrete Time Bayesian Network) and establishment of the correlation between them. • Mapping of the GO-FLOW model into a Bayesian network model. • Calculation of GO-FLOW models with common cause failures based on the DTBN. - Abstract: The GO-FLOW methodology is a success-oriented system reliability modelling technique for multi-phase missions involving complex time-dependent, multi-state and common cause failure (CCF) features. However, the analysis algorithm cannot easily handle multiple shared signals and CCFs. In addition, the simulative algorithm is time consuming when vast numbers of multi-state components exist in the model, and the multiple time points of phased-mission problems increase the difficulty of the analysis method. In this paper, the Discrete Time Bayesian Network (DTBN) and the GO-FLOW methodology are integrated through unified mapping rules. Based on these rules, the multiple operators can be mapped into the DTBN; subsequently, a complete GO-FLOW model with complex characteristics (e.g. phased mission, multi-state, and CCF) can be converted to the isomorphic DTBN and easily analyzed by utilizing the DTBN. With mature algorithms and tools, the multi-phase mission reliability parameter can be efficiently obtained via the proposed