Directory of Open Access Journals (Sweden)
Kaijun Zhou
2017-09-01
Full Text Available The Jump Point Search (JPS algorithm is adopted for local path planning of the driverless car under urban environment, and it is a fast search method applied in path planning. Firstly, a vector Geographic Information System (GIS map, including Global Positioning System (GPS position, direction, and lane information, is built for global path planning. Secondly, the GIS map database is utilized in global path planning for the driverless car. Then, the JPS algorithm is adopted to avoid the front obstacle, and to find an optimal local path for the driverless car in the urban environment. Finally, 125 different simulation experiments in the urban environment demonstrate that JPS can search out the optimal and safety path successfully, and meanwhile, it has a lower time complexity compared with the Vector Field Histogram (VFH, the Rapidly Exploring Random Tree (RRT, A*, and the Probabilistic Roadmaps (PRM algorithms. Furthermore, JPS is validated usefully in the structured urban environment.
Directory of Open Access Journals (Sweden)
Lea Nemec
2008-12-01
Full Text Available Experiences, which we receive in space (indirectly influence on education process respectively on learning-environment. Because of that is the most productive learning-environment those witch founded on experiential-learning. In this research experience took the leading place in forming didactical approaches in teaching geography and to define learning-styles and methods respectively in the direction of creating representative geographical learning environment.
The Random Material Point Method
Wang, B.; Vardon, P.J.; Hicks, M.A.
2017-01-01
The material point method is a finite element variant which allows the material, represented by a point-wise discretization, to move through the background mesh. This means that large deformations, such as those observed post slope failure, can be computed. By coupling this material level
American Society for Testing and Materials. Philadelphia
2008-01-01
1.1 This test method covers the measurement of the heat-transfer rate or the heat flux to the surface of a solid body (test sample) using the measured transient temperature rise of a thermocouple located at the null point of a calorimeter that is installed in the body and is configured to simulate a semi-infinite solid. By definition the null point is a unique position on the axial centerline of a disturbed body which experiences the same transient temperature history as that on the surface of a solid body in the absence of the physical disturbance (hole) for the same heat-flux input. 1.2 Null-point calorimeters have been used to measure high convective or radiant heat-transfer rates to bodies immersed in both flowing and static environments of air, nitrogen, carbon dioxide, helium, hydrogen, and mixtures of these and other gases. Flow velocities have ranged from zero (static) through subsonic to hypersonic, total flow enthalpies from 1.16 to greater than 4.65 × 101 MJ/kg (5 × 102 to greater than 2 × 104 ...
Efthimiou, George C.; Kovalets, Ivan V.; Venetsanos, Alexandros; Andronopoulos, Spyros; Argyropoulos, Christos D.; Kakosimos, Konstantinos
2017-12-01
An improved inverse modelling method to estimate the location and the emission rate of an unknown point stationary source of passive atmospheric pollutant in a complex urban geometry is incorporated in the Computational Fluid Dynamics code ADREA-HF and presented in this paper. The key improvement in relation to the previous version of the method lies in a two-step segregated approach. At first only the source coordinates are analysed using a correlation function of measured and calculated concentrations. In the second step the source rate is identified by minimizing a quadratic cost function. The validation of the new algorithm is performed by simulating the MUST wind tunnel experiment. A grid-independent flow field solution is firstly attained by applying successive refinements of the computational mesh and the final wind flow is validated against the measurements quantitatively and qualitatively. The old and new versions of the source term estimation method are tested on a coarse and a fine mesh. The new method appeared to be more robust, giving satisfactory estimations of source location and emission rate on both grids. The performance of the old version of the method varied between failure and success and appeared to be sensitive to the selection of model error magnitude that needs to be inserted in its quadratic cost function. The performance of the method depends also on the number and the placement of sensors constituting the measurement network. Of significant interest for the practical application of the method in urban settings is the number of concentration sensors required to obtain a ;satisfactory; determination of the source. The probability of obtaining a satisfactory solution - according to specified criteria -by the new method has been assessed as function of the number of sensors that constitute the measurement network.
Directory of Open Access Journals (Sweden)
Emilio García Vega
2015-09-01
Full Text Available The determination of the specific environment is important for the formulation of efficient enterprise strategies, on the basis of a strategic analysis properly focused. This paper suggests a method to help its limitation and identification. With its use, it pretends to offer a simple and practical tool that allows to have a more accurate approach to the identification of the industry that will be analysed, as well as, a clarification of the specification of the direct and substitute competition. Also, with the use of this tool, the managers of a business idea, an experienced or new organization, will have an approximation to the mentioned themes that are of a strategic importance in any management type. Likewise, two applications of the proposed method are presented: the first orientated to a business idea and the second to supermarkets with a high service charge in Lima, Peru.
Fast vanishing-point detection in unstructured environments.
Moghadam, Peyman; Starzyk, Janusz A; Wijesoma, W S
2012-01-01
Vision-based road detection in unstructured environments is a challenging problem as there are hardly any discernible and invariant features that can characterize the road or its boundaries in such environments. However, a salient and consistent feature of most roads or tracks regardless of type of the environments is that their edges, boundaries, and even ruts and tire tracks left by previous vehicles on the path appear to converge into a single point known as the vanishing point. Hence, estimating this vanishing point plays a pivotal role in the determination of the direction of the road. In this paper, we propose a novel methodology based on image texture analysis for the fast estimation of the vanishing point in challenging and unstructured roads. The key attributes of the methodology consist of the optimal local dominant orientation method that uses joint activities of only four Gabor filters to precisely estimate the local dominant orientation at each pixel location in the image plane, the weighting of each pixel based on its dominant orientation, and an adaptive distance-based voting scheme for the estimation of the vanishing point. A series of quantitative and qualitative analyses are presented using natural data sets from the Defense Advanced Research Projects Agency Grand Challenge projects to demonstrate the effectiveness and the accuracy of the proposed methodology.
Parametric methods for spatial point processes
DEFF Research Database (Denmark)
Møller, Jesper
(This text is submitted for the volume ‘A Handbook of Spatial Statistics' edited by A.E. Gelfand, P. Diggle, M. Fuentes, and P. Guttorp, to be published by Chapmand and Hall/CRC Press, and planned to appear as Chapter 4.4 with the title ‘Parametric methods'.) 1 Introduction This chapter considers...... inference procedures for parametric spatial point process models. The widespread use of sensible but ad hoc methods based on summary statistics of the kind studied in Chapter 4.3 have through the last two decades been supplied by likelihood based methods for parametric spatial point process models....... The increasing development of such likelihood based methods, whether frequentist or Bayesian, has lead to more objective and efficient statistical procedures. When checking a fitted parametric point process model, summary statistics and residual analysis (Chapter 4.5) play an important role in combination...
Radar Methods in Urban Environments
2016-10-26
AFRL-AFOSR-VA-TR-2016-0344 Radar Methods in Urban Environments Arye Nehorai WASHINGTON UNIVERSITY THE Final Report 10/26/2016 DISTRIBUTION A...Methods in Urban Environments 5a. CONTRACT NUMBER 5b. GRANT NUMBER FA9550-11-1-0210 5c. PROGRAM ELEMENT NUMBER 61102F 6. AUTHOR(S) Arye Nehorai 5d...Methods in Urban Environments Grant No. FA9550-11-1-0210 Final Report August 2011 – July 2016 Arye Nehorai Department of Electrical and Systems
Device and method for determining freezing points
Mathiprakasam, Balakrishnan (Inventor)
1986-01-01
A freezing point method and device (10) are disclosed. The method and device pertain to an inflection point technique for determining the freezing points of mixtures. In both the method and device (10), the mixture is cooled to a point below its anticipated freezing point and then warmed at a substantially linear rate. During the warming process, the rate of increase of temperature of the mixture is monitored by, for example, thermocouple (28) with the thermocouple output signal being amplified and differentiated by a differentiator (42). The rate of increase of temperature data are analyzed and a peak rate of increase of temperature is identified. In the preferred device (10) a computer (22) is utilized to analyze the rate of increase of temperature data following the warming process. Once the maximum rate of increase of temperature is identified, the corresponding temperature of the mixture is located and earmarked as being substantially equal to the freezing point of the mixture. In a preferred device (10), the computer (22), in addition to collecting the temperature and rate of change of temperature data, controls a programmable power supply (14) to provide a predetermined amount of cooling and warming current to thermoelectric modules (56).
Revisiting Blasius Flow by Fixed Point Method
Directory of Open Access Journals (Sweden)
Ding Xu
2014-01-01
Full Text Available The well-known Blasius flow is governed by a third-order nonlinear ordinary differential equation with two-point boundary value. Specially, one of the boundary conditions is asymptotically assigned on the first derivative at infinity, which is the main challenge on handling this problem. Through introducing two transformations not only for independent variable bur also for function, the difficulty originated from the semi-infinite interval and asymptotic boundary condition is overcome. The deduced nonlinear differential equation is subsequently investigated with the fixed point method, so the original complex nonlinear equation is replaced by a series of integrable linear equations. Meanwhile, in order to improve the convergence and stability of iteration procedure, a sequence of relaxation factors is introduced in the framework of fixed point method and determined by the steepest descent seeking algorithm in a convenient manner.
Pointing Verification Method for Spaceborne Lidars
Directory of Open Access Journals (Sweden)
Axel Amediek
2017-01-01
Full Text Available High precision acquisition of atmospheric parameters from the air or space by means of lidar requires accurate knowledge of laser pointing. Discrepancies between the assumed and actual pointing can introduce large errors due to the Doppler effect or a wrongly assumed air pressure at ground level. In this paper, a method for precisely quantifying these discrepancies for airborne and spaceborne lidar systems is presented. The method is based on the comparison of ground elevations derived from the lidar ranging data with high-resolution topography data obtained from a digital elevation model and allows for the derivation of the lateral and longitudinal deviation of the laser beam propagation direction. The applicability of the technique is demonstrated by using experimental data from an airborne lidar system, confirming that geo-referencing of the lidar ground spot trace with an uncertainty of less than 10 m with respect to the used digital elevation model (DEM can be obtained.
Method Points: towards a metric for method complexity
Directory of Open Access Journals (Sweden)
Graham McLeod
1998-11-01
Full Text Available A metric for method complexity is proposed as an aid to choosing between competing methods, as well as in validating the effects of method integration or the products of method engineering work. It is based upon a generic method representation model previously developed by the author and adaptation of concepts used in the popular Function Point metric for system size. The proposed technique is illustrated by comparing two popular I.E. deliverables with counterparts in the object oriented Unified Modeling Language (UML. The paper recommends ways to improve the practical adoption of new methods.
ASSESSING TEMPORAL BEHAVIOR IN LIDAR POINT CLOUDS OF URBAN ENVIRONMENTS
Directory of Open Access Journals (Sweden)
J. Schachtschneider
2017-05-01
Full Text Available Self-driving cars and robots that run autonomously over long periods of time need high-precision and up-to-date models of the changing environment. The main challenge for creating long term maps of dynamic environments is to identify changes and adapt the map continuously. Changes can occur abruptly, gradually, or even periodically. In this work, we investigate how dense mapping data of several epochs can be used to identify the temporal behavior of the environment. This approach anticipates possible future scenarios where a large fleet of vehicles is equipped with sensors which continuously capture the environment. This data is then being sent to a cloud based infrastructure, which aligns all datasets geometrically and subsequently runs scene analysis on it, among these being the analysis for temporal changes of the environment. Our experiments are based on a LiDAR mobile mapping dataset which consists of 150 scan strips (a total of about 1 billion points, which were obtained in multiple epochs. Parts of the scene are covered by up to 28 scan strips. The time difference between the first and last epoch is about one year. In order to process the data, the scan strips are aligned using an overall bundle adjustment, which estimates the surface (about one billion surface element unknowns as well as 270,000 unknowns for the adjustment of the exterior orientation parameters. After this, the surface misalignment is usually below one centimeter. In the next step, we perform a segmentation of the point clouds using a region growing algorithm. The segmented objects and the aligned data are then used to compute an occupancy grid which is filled by tracing each individual LiDAR ray from the scan head to every point of a segment. As a result, we can assess the behavior of each segment in the scene and remove voxels from temporal objects from the global occupancy grid.
Assessing Temporal Behavior in LIDAR Point Clouds of Urban Environments
Schachtschneider, J.; Schlichting, A.; Brenner, C.
2017-05-01
Self-driving cars and robots that run autonomously over long periods of time need high-precision and up-to-date models of the changing environment. The main challenge for creating long term maps of dynamic environments is to identify changes and adapt the map continuously. Changes can occur abruptly, gradually, or even periodically. In this work, we investigate how dense mapping data of several epochs can be used to identify the temporal behavior of the environment. This approach anticipates possible future scenarios where a large fleet of vehicles is equipped with sensors which continuously capture the environment. This data is then being sent to a cloud based infrastructure, which aligns all datasets geometrically and subsequently runs scene analysis on it, among these being the analysis for temporal changes of the environment. Our experiments are based on a LiDAR mobile mapping dataset which consists of 150 scan strips (a total of about 1 billion points), which were obtained in multiple epochs. Parts of the scene are covered by up to 28 scan strips. The time difference between the first and last epoch is about one year. In order to process the data, the scan strips are aligned using an overall bundle adjustment, which estimates the surface (about one billion surface element unknowns) as well as 270,000 unknowns for the adjustment of the exterior orientation parameters. After this, the surface misalignment is usually below one centimeter. In the next step, we perform a segmentation of the point clouds using a region growing algorithm. The segmented objects and the aligned data are then used to compute an occupancy grid which is filled by tracing each individual LiDAR ray from the scan head to every point of a segment. As a result, we can assess the behavior of each segment in the scene and remove voxels from temporal objects from the global occupancy grid.
Existing bridge evaluation using deficiency point method
Directory of Open Access Journals (Sweden)
Vičan Josef
2016-01-01
Full Text Available In the transforming EU countries, transportation infrastructure has a prominent position in advancing industry and society. Recent developments show, that attention should be moved from the design of new structures towards the repair and reconstruction of existing ones to ensure and increase their satisfactory structural reliability and durability. The problem is very urgent because many construction projects, especially transport infrastructure, in most European countries are more than 50-60 years old and require rehabilitations based on objective evaluations. Therefore, the paper presents methodology of existing bridge evaluation based on reliability concept using Deficiency Point Method. The methodology was prepared from the viewpoint to determine the priority order for existing bridge rehabilitation.
An evolutionary tipping point in a changing environment.
Osmond, Matthew M; Klausmeier, Christopher A
2017-12-01
Populations can persist in directionally changing environments by evolving. Quantitative genetic theory aims to predict critical rates of environmental change beyond which populations go extinct. Here, we point out that all current predictions effectively assume the same specific fitness function. This function causes selection on the standing genetic variance of quantitative traits to become increasingly strong as mean trait values depart from their optima. Hence, there is no bound on the rate of evolution and persistence is determined by the critical rate of environmental change at which populations cease to grow. We then show that biologically reasonable changes to the underlying fitness function can impose a qualitatively different extinction threshold. In particular, inflection points caused by weakening selection create local extrema in the strength of selection and thus in the rate of evolution. These extrema can produce evolutionary tipping points, where long-run population growth rates drop from positive to negative values without ever crossing zero. Generic early-warning signs of tipping points are found to have little power to detect imminent extinction, and require hard-to-gather data. Furthermore, we show how evolutionary tipping points produce evolutionary hysteresis, creating extinction debts. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
Evaluation of the wheel-point and step-point methods of veld ...
African Journals Online (AJOL)
The step-point method yielded results on percentage veld composition and on veld composition score which did not differ in precision or in absolute amount from those obtained using the wheel-point apparatus. Adoption of the step point method in preference to the wheel-point method saves in equipment and manpower, ...
Material-Point Method Analysis of Bending in Elastic Beams
DEFF Research Database (Denmark)
Andersen, Søren Mikkel; Andersen, Lars
2007-01-01
The aim of this paper is to rest different kinds of spatial interpolation for the material-point method.......The aim of this paper is to rest different kinds of spatial interpolation for the material-point method....
Interior-Point Methods for Linear Programming: A Review
Singh, J. N.; Singh, D.
2002-01-01
The paper reviews some recent advances in interior-point methods for linear programming and indicates directions in which future progress can be made. Most of the interior-point methods belong to any of three categories: affine-scaling methods, potential reduction methods and central path methods. These methods are discussed together with…
Slope failure analysis using the random material point method
Wang, B.; Hicks, M.A.; Vardon, P.J.
2016-01-01
The random material point method (RMPM), which combines random field theory and the material point method (MPM), is proposed. It differs from the random finite-element method (RFEM), by assigning random field (cell) values to material points that are free to move relative to the computational grid
Logistics engineering education from the point of view environment
Bányai, Ágota
2010-05-01
-of-the-art, reliable, automated mechatronics-material flow system with its single control engineering system provides the academic staff with up-to-date research facilities, and enables the students to study sophisticated equipment and systems that could also operate under industrial conditions, thus offering knowledge that can be efficiently utilised in the industry after graduation. The laboratory measurements of the programme in logistics engineering are performed in this laboratory, and they are supplemented by the theoretical and practical measurements in the ‘Robotic Technology Assembly Laboratory', the ‘Power Electronics Laboratory', the ‘Mechatronics Laboratory', the ‘CAD/CAM Laboratory' and the ‘Acoustics and Product Laboratory'. The bodies of knowledge connected with environment protection and sustainable development can be grouped around three large topic areas. In environmental economics the objective is to present the corporate-organisational aspects of environmental management. Putting environmental management in the focal point, the objective of the programme is to impart knowledge that can be utilised in practice which can be used to shift the relation between the organisation and its environment in the direction of sustainability. The tools include environmental controlling, environmental marketing and various solutions of environmental performance evaluation. The second large topic area is globalization and its logistic aspects. In the field of global logistics the following knowledge carries special weight: logistic challenges in a globalised world; the concept of global logistics, its conditions and effects; delayed manufacture, assembly, packaging; the economic investigation of delayed assembly; globalised purchase and distribution in logistics; the logistic features of the globalised production supply/distribution chain; meta-logistics systems; logistics-related EU harmonisation issues; the effect of e-commerce on the global logistic system; logistic
ADOxx Modelling Method Conceptualization Environment
Directory of Open Access Journals (Sweden)
Nesat Efendioglu
2017-04-01
Full Text Available The importance of Modelling Methods Engineering is equally rising with the importance of domain specific languages (DSL and individual modelling approaches. In order to capture the relevant semantic primitives for a particular domain, it is necessary to involve both, (a domain experts, who identify relevant concepts as well as (b method engineers who compose a valid and applicable modelling approach. This process consists of a conceptual design of formal or semi-formal of modelling method as well as a reliable, migratable, maintainable and user friendly software development of the resulting modelling tool. Modelling Method Engineering cycle is often under-estimated as both the conceptual architecture requires formal verification and the tool implementation requires practical usability, hence we propose a guideline and corresponding tools to support actors with different background along this complex engineering process. Based on practical experience in business, more than twenty research projects within the EU frame programmes and a number of bilateral research initiatives, this paper introduces the phases, corresponding a toolbox and lessons learned with the aim to support the engineering of a modelling method. ”The proposed approach is illustrated and validated within use cases from three different EU-funded research projects in the fields of (1 Industry 4.0, (2 e-learning and (3 cloud computing. The paper discusses the approach, the evaluation results and derived outlooks.
Biased gradient squared descent saddle point finding method.
Duncan, Juliana; Wu, Qiliang; Promislow, Keith; Henkelman, Graeme
2014-05-21
The harmonic approximation to transition state theory simplifies the problem of calculating a chemical reaction rate to identifying relevant low energy saddle points in a chemical system. Here, we present a saddle point finding method which does not require knowledge of specific product states. In the method, the potential energy landscape is transformed into the square of the gradient, which converts all critical points of the original potential energy surface into global minima. A biasing term is added to the gradient squared landscape to stabilize the low energy saddle points near a minimum of interest, and destabilize other critical points. We demonstrate that this method is competitive with the dimer min-mode following method in terms of the number of force evaluations required to find a set of low-energy saddle points around a reactant minimum.
ECONOMY AND ENVIRONMENT. POINTS OF VIEW AND ACTIONS
Directory of Open Access Journals (Sweden)
Ramona Maria CHIVU
2015-06-01
Full Text Available The current situation of the environment is a direct consequence of history, especially of economic history. The article analyzes the relationship between economy and environment, in order to determine the main causes that led to the ecological crisis. The article argues that despite the close relationship between environment and economy, the economy did not take into account the environmental protection, until there appeared serious environmental impacts. Including the environment in economic thinking led, first, to the application of conventional techniques to economy to try to solve problems. The article also examines some priorities for improving the relationship between economy and environment conducive to move towards a model of sustainable development. The conclusion is that sustainable development is the goal to be achieved. In search of sustainability, companies should play an important role, as in all phases of the life cycle of their products, there are environmental risks and, therefore, they are a major source of environmental degradation of the planet.
Sivasubramani, S.; Ahmad, Md. Samar
2014-06-01
This paper proposes a new hybrid algorithm combining harmony search (HS) algorithm and interior point method (IPM) for economic dispatch (ED) problem with valve-point effect. ED problem with valve-point effect is modeled as a non-linear, constrained and non-convex optimization problem having several local minima. IPM is a best non-linear optimization method for convex optimization problems. Since ED problem with valve-point effect has multiple local minima, IPM results in a local optimum solution. In order to avoid IPM getting trapped in a local optimum, a new evolutionary algorithm HS, which is good in global exploration, has been combined. In the hybrid method, HS is used for global search and IPM for local search. The hybrid method has been tested on three different test systems to prove its effectiveness. Finally, the simulation results are also compared with other methods reported in the literature.
Liu, Xiaoqiang; Chen, Yanming; Cheng, Liang; Yao, Mengru; Deng, Shulin; Li, Manchun; Cai, Dong
2017-01-01
Filtering of airborne laser scanning (ALS) point clouds into ground and nonground points is a core postprocessing step for ALS data. A hierarchical filtering method, which has high operating efficiency and accuracy because of the combination of multiscale morphology and progressive triangulated irregular network (TIN) densification (PTD), is proposed. In the proposed method, the grid is first constructed for the ALS point clouds, and virtual seed points are set by analyzing the shape and elevation distribution of points within the grid. Then, the virtual seed points are classified as ground or nonground using the multiscale morphological method. Finally, the virtual ground seed points are utilized to generate the initial TIN, and the filter is completed by iteratively densifying the initial TIN. We used various ALS data to test the performance of the proposed method. The experimental results show that the proposed filtering method has strong applicability for a variety of landscapes and, in particular, has lower commission error than the classical PTD filtering method in urban areas.
Identification of critical points of thermal environment in broiler production
Directory of Open Access Journals (Sweden)
AG Menezes
2010-03-01
Full Text Available This paper describes an exploratory study carried out to determine critical control points and possible risks in hatcheries and broiler farms. The study was based in the identification of the potential hazards existing in broiler production, from the hatchery to the broiler farm, identifying critical control points and defining critical limits. The following rooms were analyzed in the hatchery: egg cold storage, pre-heating, incubator, and hatcher rooms. Two broiler houses were studied in two different farms. The following data were collected in the hatchery and broiler houses: temperature (ºC and relative humidity (%, air velocity (m s-1, ammonia levels, and light intensity (lx. In the broiler house study, a questionnaire using information of the Broiler Production Good Practices (BPGP manual was applied, and workers were interviewed. Risk analysis matrices were build to determine Critical Control Points (CCP. After data collection, Statistical Process Control (SPC was applied through the analysis of the Process Capacity Index, using the software program Minitab15®. Environmental temperature and relative humidity were the critical points identified in the hatchery and in both farms. The classes determined as critical control points in the broiler houses were poultry litter, feeding, drinking water, workers' hygiene and health, management and biosecurity, norms and legislation, facilities, and activity planning. It was concluded that CCP analysis, associated with SPC control tools and guidelines of good production practices, may contribute to improve quality control in poultry production.
Post-Processing in the Material-Point Method
DEFF Research Database (Denmark)
Andersen, Søren; Andersen, Lars Vabbersgaard
The material-point method (MPM) is a numerical method for dynamic or static analysis of solids using a discretization in time and space. The method has shown to be successful in modelling physical problems involving large deformations, which are difficult to model with traditional numerical tools......-point method. The first idea involves associating a volume with each material point and displaying the deformation of this volume. In the discretization process, the physical domain is divided into a number of smaller volumes each represented by a simple shape; here quadrilaterals are chosen for the presented...... strain problems. It is noted, that this idea is also relevant for other point based methods, such as smoothed particle hydrodynamics, where the history dependent variables are tracked by a set of particles. The second idea introduced in the article involves the fact that while the stresses may oscillate...
A Novel Fast Method for Point-sampled Model Simplification
Directory of Open Access Journals (Sweden)
Cao Zhi
2016-01-01
Full Text Available A novel fast simplification method for point-sampled statue model is proposed. Simplifying method for 3d model reconstruction is a hot topic in the field of 3D surface construction. But it is difficult as point cloud of many 3d models is very large, so its running time becomes very long. In this paper, a two-stage simplifying method is proposed. Firstly, a feature-preserved non-uniform simplification method for cloud points is presented, which simplifies the data set to remove the redundancy while keeping down the features of the model. Secondly, an affinity clustering simplifying method is used to classify the point cloud into a sharp point or a simple point. The advantage of Affinity Propagation clustering is passing messages among data points and fast speed of processing. Together with the re-sampling, it can dramatically reduce the duration of the process while keep a lower memory cost. Both theoretical analysis and experimental results show that after the simplification, the performance of the proposed method is efficient as well as the details of the surface are preserved well.
Leverage points for improving global food security and the environment.
West, Paul C; Gerber, James S; Engstrom, Peder M; Mueller, Nathaniel D; Brauman, Kate A; Carlson, Kimberly M; Cassidy, Emily S; Johnston, Matt; MacDonald, Graham K; Ray, Deepak K; Siebert, Stefan
2014-07-18
Achieving sustainable global food security is one of humanity's contemporary challenges. Here we present an analysis identifying key "global leverage points" that offer the best opportunities to improve both global food security and environmental sustainability. We find that a relatively small set of places and actions could provide enough new calories to meet the basic needs for more than 3 billion people, address many environmental impacts with global consequences, and focus food waste reduction on the commodities with the greatest impact on food security. These leverage points in the global food system can help guide how nongovernmental organizations, foundations, governments, citizens' groups, and businesses prioritize actions. Copyright © 2014, American Association for the Advancement of Science.
C-point and V-point singularity lattice formation and index sign conversion methods
Kumar Pal, Sushanta; Ruchi; Senthilkumaran, P.
2017-06-01
The generic singularities in an ellipse field are C-points namely stars, lemons and monstars in a polarization distribution with C-point indices (-1/2), (+1/2) and (+1/2) respectively. Similar to C-point singularities, there are V-point singularities that occur in a vector field and are characterized by Poincare-Hopf index of integer values. In this paper we show that the superposition of three homogenously polarized beams in different linear states leads to the formation of polarization singularity lattice. Three point sources at the focal plane of the lens are used to create three interfering plane waves. A radial/azimuthal polarization converter (S-wave plate) placed near the focal plane modulates the polarization states of the three beams. The interference pattern is found to host C-points and V-points in a hexagonal lattice. The C-points occur at intensity maxima and V-points occur at intensity minima. Modulating the state of polarization (SOP) of three plane waves from radial to azimuthal does not essentially change the nature of polarization singularity lattice as the Poincare-Hopf index for both radial and azimuthal polarization distributions is (+1). Hence a transformation from a star to a lemon is not trivial, as such a transformation requires not a single SOP change, but a change in whole spatial SOP distribution. Further there is no change in the lattice structure and the C- and V-points appear at locations where they were present earlier. Hence to convert an interlacing star and V-point lattice into an interlacing lemon and V-point lattice, the interferometer requires modification. We show for the first time a method to change the polarity of C-point and V-point indices. This means that lemons can be converted into stars and stars can be converted into lemons. Similarly the positive V-point can be converted to negative V-point and vice versa. The intensity distribution in all these lattices is invariant as the SOPs of the three beams are changed in an
Analysis of Stress Updates in the Material-point Method
DEFF Research Database (Denmark)
2009-01-01
are solved on a background computational grid. Several references state, that one of the main advantages of the material-point method is the easy application of complicated material behaviour as the constitutive response is updated individually for each material point. However, as discussed here, the MPM way...
Surface processing methods for point sets using finite elements
Clarenz, Ulrich; Rumpf, Martin; Telea, Alexandru
2004-01-01
We present a framework for processing point-based surfaces via partial differential equations (PDEs). Our framework efficiently and effectively brings well-known PDE-based processing techniques to the field of point-based surfaces. At the core of our method is a finite element discretization of PDEs
Novel Ratio Subtraction and Isoabsorptive Point Methods for ...
African Journals Online (AJOL)
Purpose: To develop and validate two innovative spectrophotometric methods used for the simultaneous determination of ambroxol hydrochloride and doxycycline in their binary mixture. Methods: Ratio subtraction and isoabsorptive point methods were used for the simultaneous determination of ambroxol hydrochloride ...
Image to Point Cloud Method of 3D-MODELING
Chibunichev, A. G.; Galakhov, V. P.
2012-07-01
This article describes the method of constructing 3D models of objects (buildings, monuments) based on digital images and a point cloud obtained by terrestrial laser scanner. The first step is the automated determination of exterior orientation parameters of digital image. We have to find the corresponding points of the image and point cloud to provide this operation. Before the corresponding points searching quasi image of point cloud is generated. After that SIFT algorithm is applied to quasi image and real image. SIFT algorithm allows to find corresponding points. Exterior orientation parameters of image are calculated from corresponding points. The second step is construction of the vector object model. Vectorization is performed by operator of PC in an interactive mode using single image. Spatial coordinates of the model are calculated automatically by cloud points. In addition, there is automatic edge detection with interactive editing available. Edge detection is performed on point cloud and on image with subsequent identification of correct edges. Experimental studies of the method have demonstrated its efficiency in case of building facade modeling.
Entrepreneur environment management behavior evaluation method derived from environmental economy.
Zhang, Lili; Hou, Xilin; Xi, Fengru
2013-12-01
Evaluation system can encourage and guide entrepreneurs, and impel them to perform well in environment management. An evaluation method based on advantage structure is established. It is used to analyze entrepreneur environment management behavior in China. Entrepreneur environment management behavior evaluation index system is constructed based on empirical research. Evaluation method of entrepreneurs is put forward, from the point of objective programming-theory to alert entrepreneurs concerned to think much of it, which means to take minimized objective function as comprehensive evaluation result and identify disadvantage structure pattern. Application research shows that overall behavior of Chinese entrepreneurs environmental management are good, specially, environment strategic behavior are best, environmental management behavior are second, cultural behavior ranks last. Application results show the efficiency and feasibility of this method. Copyright © 2013 The Research Centre for Eco-Environmental Sciences, Chinese Academy of Sciences. Published by Elsevier B.V. All rights reserved.
Natural Preconditioning and Iterative Methods for Saddle Point Systems
Pestana, Jennifer
2015-01-01
© 2015 Society for Industrial and Applied Mathematics. The solution of quadratic or locally quadratic extremum problems subject to linear(ized) constraints gives rise to linear systems in saddle point form. This is true whether in the continuous or the discrete setting, so saddle point systems arising from the discretization of partial differential equation problems, such as those describing electromagnetic problems or incompressible flow, lead to equations with this structure, as do, for example, interior point methods and the sequential quadratic programming approach to nonlinear optimization. This survey concerns iterative solution methods for these problems and, in particular, shows how the problem formulation leads to natural preconditioners which guarantee a fast rate of convergence of the relevant iterative methods. These preconditioners are related to the original extremum problem and their effectiveness - in terms of rapidity of convergence - is established here via a proof of general bounds on the eigenvalues of the preconditioned saddle point matrix on which iteration convergence depends.
Selective Integration in the Material-Point Method
DEFF Research Database (Denmark)
2009-01-01
The paper deals with stress integration in the material-point method. In order to avoid parasitic shear in bending, a formulation is proposed, based on selective integration in the background grid that is used to solve the governing equations. The suggested integration scheme is compared to a tra......The paper deals with stress integration in the material-point method. In order to avoid parasitic shear in bending, a formulation is proposed, based on selective integration in the background grid that is used to solve the governing equations. The suggested integration scheme is compared...... to a traditional material-point-method computation in which the stresses are evaluated at the material points. The deformation of a cantilever beam is analysed, assuming elastic or elastoplastic material behaviour....
Material-point Method Analysis of Bending in Elastic Beams
DEFF Research Database (Denmark)
Andersen, Søren Mikkel; Andersen, Lars
The aim of this paper is to test different types of spatial interpolation for the materialpoint method. The interpolations include quadratic elements and cubic splines. A brief introduction to the material-point method is given. Simple liner-elastic problems are tested, including the classical...... cantilevered beam problem. As shown in the paper, the use of negative shape functions is not consistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations of field quantities. It is shown...
A fixed point method to compute solvents of matrix polynomials
Marcos, Fernando; Pereira, Edgar
2009-01-01
Matrix polynomials play an important role in the theory of matrix differential equations. We develop a fixed point method to compute solutions of matrix polynomials equations, where the matricial elements of the matrix polynomial are considered separately as complex polynomials. Numerical examples illustrate the method presented.
Full-step interior-point methods for symmetric optimization
Gu, G.
2009-01-01
In [SIAM J. Optim., 16(4):1110--1136 (electronic), 2006] Roos proposed a full-Newton step Infeasible Interior-Point Method (IIPM) for Linear Optimization (LO). It is a primal-dual homotopy method; it differs from the classical IIPMs in that it uses only full steps. This means that no line searches
Novel TPPO Based Maximum Power Point Method for Photovoltaic System
Directory of Open Access Journals (Sweden)
ABBASI, M. A.
2017-08-01
Full Text Available Photovoltaic (PV system has a great potential and it is installed more when compared with other renewable energy sources nowadays. However, the PV system cannot perform optimally due to its solid reliance on climate conditions. Due to this dependency, PV system does not operate at its maximum power point (MPP. Many MPP tracking methods have been proposed for this purpose. One of these is the Perturb and Observe Method (P&O which is the most famous due to its simplicity, less cost and fast track. But it deviates from MPP in continuously changing weather conditions, especially in rapidly changing irradiance conditions. A new Maximum Power Point Tracking (MPPT method, Tetra Point Perturb and Observe (TPPO, has been proposed to improve PV system performance in changing irradiance conditions and the effects on characteristic curves of PV array module due to varying irradiance are delineated. The Proposed MPPT method has shown better results in increasing the efficiency of a PV system.
Analysis of Spatial Interpolation in the Material-Point Method
DEFF Research Database (Denmark)
Andersen, Søren; Andersen, Lars
2010-01-01
This paper analyses different types of spatial interpolation for the material-point method The interpolations include quadratic elements and cubic splines in addition to the standard linear shape functions usually applied. For the small-strain problem of a vibrating bar, the best results are obta......This paper analyses different types of spatial interpolation for the material-point method The interpolations include quadratic elements and cubic splines in addition to the standard linear shape functions usually applied. For the small-strain problem of a vibrating bar, the best results...
Micro-four-point Probe Hall effect Measurement method
DEFF Research Database (Denmark)
Petersen, Dirch Hjorth; Hansen, Ole; Lin, Rong
2008-01-01
contributions may be separated using dual configuration measurements. The method differs from conventional van der Pauw measurements since the probe pins are placed in the interior of the sample region, not just on the perimeter. We experimentally verify the method by micro-four-point probe measurements...... on ultrashallow junctions in silicon and germanium. On a cleaved silicon ultrashallow junction sample we determine carrier mobility, sheet carrier density, and sheet resistance from micro-four-point probe measurements under various experimental conditions, and show with these conditions reproducibility within...
Distributed Interior-point Method for Loosely Coupled Problems
DEFF Research Database (Denmark)
Pakazad, Sina Khoshfetrat; Hansson, Anders; Andersen, Martin Skovgaard
2014-01-01
In this paper, we put forth distributed algorithms for solving loosely coupled unconstrained and constrained optimization problems. Such problems are usually solved using algorithms that are based on a combination of decomposition and first order methods. These algorithms are commonly very slow...... and require many iterations to converge. In order to alleviate this issue, we propose algorithms that combine the Newton and interior-point methods with proximal splitting methods for solving such problems. Particularly, the algorithm for solving unconstrained loosely coupled problems, is based on Newton......’s method and utilizes proximal splitting to distribute the computations for calculating the Newton step at each iteration. A combination of this algorithm and the interior-point method is then used to introduce a distributed algorithm for solving constrained loosely coupled problems. We also provide...
Modelling of Landslides with the Material-point Method
DEFF Research Database (Denmark)
Andersen, Søren; Andersen, Lars
2009-01-01
A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...
Incompressible material point method for free surface flow
Zhang, Fan; Zhang, Xiong; Sze, Kam Yim; Lian, Yanping; Liu, Yan
2017-02-01
To overcome the shortcomings of the weakly compressible material point method (WCMPM) for modeling the free surface flow problems, an incompressible material point method (iMPM) is proposed based on operator splitting technique which splits the solution of momentum equation into two steps. An intermediate velocity field is first obtained by solving the momentum equations ignoring the pressure gradient term, and then the intermediate velocity field is corrected by the pressure term to obtain a divergence-free velocity field. A level set function which represents the signed distance to free surface is used to track the free surface and apply the pressure boundary conditions. Moreover, an hourglass damping is introduced to suppress the spurious velocity modes which are caused by the discretization of the cell center velocity divergence from the grid vertexes velocities when solving pressure Poisson equations. Numerical examples including dam break, oscillation of a cubic liquid drop and a droplet impact into deep pool show that the proposed incompressible material point method is much more accurate and efficient than the weakly compressible material point method in solving free surface flow problems.
Material-Point Method Analysis of Bending in Elastic Beams
DEFF Research Database (Denmark)
Andersen, Søren Mikkel; Andersen, Lars
2007-01-01
cantilevered beam problem. As shown in the paper, the use of negative shape functions is not consistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations of field quantities. It is shown...
Material-Point-Method Analysis of Collapsing Slopes
DEFF Research Database (Denmark)
Andersen, Søren; Andersen, Lars
2009-01-01
-interpolation material-point method, combining a Eulerian grid for solving the governing equations of a continuum with a Lagrangian description for the material. The method is extended to analyse interaction between multiple bodies, introducing a master-slave algorithm for frictional contact along interfaces. Further......, a deformed material description is introduced, based on time integration of the deformation gradient and utilising Gauss quadrature over the volume associated with each material point. The method has been implemented in a Fortran code and employed for the analysis of a landslide that took place during...... the night of December 1st, 2008, near Lønstrup, Denmark. Using a simple Mohr-Coulomb model for the soil, the computational model is able to reproduce the change in the slope geometry at the site....
A Review on the Modified Finite Point Method
Directory of Open Access Journals (Sweden)
Nan-Jing Wu
2014-01-01
Full Text Available The objective of this paper is to make a review on recent advancements of the modified finite point method, named MFPM hereafter. This MFPM method is developed for solving general partial differential equations. Benchmark examples of employing this method to solve Laplace, Poisson, convection-diffusion, Helmholtz, mild-slope, and extended mild-slope equations are verified and then illustrated in fluid flow problems. Application of MFPM to numerical generation of orthogonal grids, which is governed by Laplace equation, is also demonstrated.
Computation of multi-material interactions using point method
Energy Technology Data Exchange (ETDEWEB)
Zhang, Duan Z [Los Alamos National Laboratory; Ma, Xia [Los Alamos National Laboratory; Giguere, Paul T [Los Alamos National Laboratory
2009-01-01
Calculations of fluid flows are often based on Eulerian description, while calculations of solid deformations are often based on Lagrangian description of the material. When the Eulerian descriptions are used to problems of solid deformations, the state variables, such as stress and damage, need to be advected, causing significant numerical diffusion error. When Lagrangian methods are used to problems involving large solid deformat ions or fluid flows, mesh distortion and entanglement are significant sources of error, and often lead to failure of the calculation. There are significant difficulties for either method when applied to problems involving large deformation of solids. To address these difficulties, particle-in-cell (PIC) method is introduced in the 1960s. In the method Eulerian meshes stay fixed and the Lagrangian particles move through the Eulerian meshes during the material deformation. Since its introduction, many improvements to the method have been made. The work of Sulsky et al. (1995, Comput. Phys. Commun. v. 87, pp. 236) provides a mathematical foundation for an improved version, material point method (MPM) of the PIC method. The unique advantages of the MPM method have led to many attempts of applying the method to problems involving interaction of different materials, such as fluid-structure interactions. These problems are multiphase flow or multimaterial deformation problems. In these problems pressures, material densities and volume fractions are determined by satisfying the continuity constraint. However, due to the difference in the approximations between the material point method and the Eulerian method, erroneous results for pressure will be obtained if the same scheme used in Eulerian methods for multiphase flows is used to calculate the pressure. To resolve this issue, we introduce a numerical scheme that satisfies the continuity requirement to higher order of accuracy in the sense of weak solutions for the continuity equations
An improved maximum power point tracking method for photovoltaic systems
Energy Technology Data Exchange (ETDEWEB)
Tafticht, T.; Agbossou, K.; Doumbia, M.L.; Cheriti, A. [Institut de recherche sur l' hydrogene, Departement de genie electrique et genie informatique, Universite du Quebec a Trois-Rivieres, C.P. 500, Trois-Rivieres (QC) (Canada)
2008-07-15
In most of the maximum power point tracking (MPPT) methods described currently in the literature, the optimal operation point of the photovoltaic (PV) systems is estimated by linear approximations. However these approximations can lead to less than optimal operating conditions and hence reduce considerably the performances of the PV system. This paper proposes a new approach to determine the maximum power point (MPP) based on measurements of the open-circuit voltage of the PV modules, and a nonlinear expression for the optimal operating voltage is developed based on this open-circuit voltage. The approach is thus a combination of the nonlinear and perturbation and observation (P and O) methods. The experimental results show that the approach improves clearly the tracking efficiency of the maximum power available at the output of the PV modules. The new method reduces the oscillations around the MPP, and increases the average efficiency of the MPPT obtained. The new MPPT method will deliver more power to any generic load or energy storage media. (author)
2014-10-01
TITLE AND SUBTITLE Forward Arming and Refueling Points for Fighter Aircraft: Power Projection in an Antiaccess Environment 5a. CONTRACT NUMBER 5b...Arming and Refueling Point (FARP) Using Discrete Event Simulation,” Graduate Research Project AFIT/ MLM / ENS/05-08 (Wright-Patterson AFB, OH: Air
Multiperiod hydrothermal economic dispatch by an interior point method
Directory of Open Access Journals (Sweden)
Kimball L. M.
2002-01-01
Full Text Available This paper presents an interior point algorithm to solve the multiperiod hydrothermal economic dispatch (HTED. The multiperiod HTED is a large scale nonlinear programming problem. Various optimization methods have been applied to the multiperiod HTED, but most neglect important network characteristics or require decomposition into thermal and hydro subproblems. The algorithm described here exploits the special bordered block diagonal structure and sparsity of the Newton system for the first order necessary conditions to result in a fast efficient algorithm that can account for all network aspects. Applying this new algorithm challenges a conventional method for the use of available hydro resources known as the peak shaving heuristic.
A Robust Shape Reconstruction Method for Facial Feature Point Detection.
Tan, Shuqiu; Chen, Dongyi; Guo, Chenggang; Huang, Zhiqi
2017-01-01
Facial feature point detection has been receiving great research advances in recent years. Numerous methods have been developed and applied in practical face analysis systems. However, it is still a quite challenging task because of the large variability in expression and gestures and the existence of occlusions in real-world photo shoot. In this paper, we present a robust sparse reconstruction method for the face alignment problems. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries termed the shape increment dictionary and the local appearance dictionary are learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more generalized, we select the best matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves a better robustness over the state-of-the-art methods.
Perceived effectiveness of teaching methods for point of care ultrasound.
Cartier, Rudolph A; Skinner, Carl; Laselle, Brooks
2014-07-01
Point of care ultrasound (POCUS) is a rapidly expanding aspect of both the practice and education of emergency physicians. The most effective methods of teaching these valuable skills have not been explored. This project aimed to identify those methods that provide the best educational value as determined by the learner. Data was collected from pre- and post-course surveys administered to students of the introductory POCUS course provided to emergency medicine residents each year at our facility. Data were collected in 2010 and 2011. Participants were asked to evaluate the effectiveness of small- vs. large-group format, still images vs. video clips, and PowerPoint slides vs. live demonstration vs. hands-on scanning. Students felt the most effective methods to be small-group format, video-clip examples, and hands-on scanning sessions. Students also rated hands-on sessions, still images, and video images as more effective in post-course surveys as compared with pre-course surveys. The methods perceived as most effective for POCUS education are small-group format, video-clip examples, and hands-on scanning sessions. Published by Elsevier Inc.
Directory of Open Access Journals (Sweden)
Jingyu Sun
2014-07-01
Full Text Available To survive in the current shipbuilding industry, it is of vital importance for shipyards to have the ship components’ accuracy evaluated efficiently during most of the manufacturing steps. Evaluating components’ accuracy by comparing each component’s point cloud data scanned by laser scanners and the ship’s design data formatted in CAD cannot be processed efficiently when (1 extract components from point cloud data include irregular obstacles endogenously, or when (2 registration of the two data sets have no clear direction setting. This paper presents reformative point cloud data processing methods to solve these problems. K-d tree construction of the point cloud data fastens a neighbor searching of each point. Region growing method performed on the neighbor points of the seed point extracts the continuous part of the component, while curved surface fitting and B-spline curved line fitting at the edge of the continuous part recognize the neighbor domains of the same component divided by obstacles’ shadows. The ICP (Iterative Closest Point algorithm conducts a registration of the two sets of data after the proper registration’s direction is decided by principal component analysis. By experiments conducted at the shipyard, 200 curved shell plates are extracted from the scanned point cloud data, and registrations are conducted between them and the designed CAD data using the proposed methods for an accuracy evaluation. Results show that the methods proposed in this paper support the accuracy evaluation targeted point cloud data processing efficiently in practice.
Fast calculation method of a CGH for a patch model using a point-based method.
Ogihara, Y; Sakamoto, Y
2015-01-01
Holography is three-dimensional display technology. Computer-generated holograms (CGHs) are created by simulating light propagation on a computer, and they are able to display a virtual object. There are mainly two types of calculation methods of CGHs, a point-based method and the fast Fourier-transform (FFT)-based method. The FFT-based method is based on a patch model, and it is suited to accelerating the calculations as it calculates the light propagation across a patch as a whole. The calculations with the point-based method are characterized by a high degree of parallelism, and it is suited to accelerating graphics processing units (GPUs). The point-based method is not suitable for calculation with the patch model. This paper proposes a fast calculation algorithm for a patch model with the point-based method. The proposed method calculates the line on a patch as a whole regardless of the number of points on the line. When the proposed method is implemented on a GPU, the calculation time of the proposed method is shorter than with the point-based method.
Modeling of Landslides with the Material Point Method
DEFF Research Database (Denmark)
Andersen, Søren Mikkel; Andersen, Lars
2008-01-01
A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...... is employed for the soil. The slide is triggered for the initially stable slope by removing the cohesion of the soil and the slide is followed from the triggering until a state of equilibrium is again reached. Parameter studies, in which the angle of internal friction of the soil and the degree...
A Searching Method of Candidate Segmentation Point in SPRINT Classification
Directory of Open Access Journals (Sweden)
Zhihao Wang
2016-01-01
Full Text Available SPRINT algorithm is a classical algorithm for building a decision tree that is a widely used method of data classification. However, the SPRINT algorithm has high computational cost in the calculation of attribute segmentation. In this paper, an improved SPRINT algorithm is proposed, which searches better candidate segmentation point for the discrete and continuous attributes. The experiment results demonstrate that the proposed algorithm can reduce the computation cost and improve the efficiency of the algorithm by improving the segmentation of continuous attributes and discrete attributes.
Methods for Project Tracking in Creative Environment
Directory of Open Access Journals (Sweden)
Eva Šviráková
2017-06-01
Full Text Available The objective of this paper is to design new alternative methods for project tracking in creative industry environment. One of the research method is system dynamics modelling. A dynamics model accepts problems which were identified based on qualitative research and assessed using the system thinking method. A system dynamics model contains a project reference mode which correctly and provably expresses the planned and actual project development in terms of scope and budget. Reference mode of the project was discovered on the basis of Earned Value Management method modification. System dynamics modelling suitability is demonstrated on a case study of a creative project called “Water for Everyone”. If the project is behind schedule, the simulation explains why it happened and forecasts further project development. Managers can use the modelling process to evaluate the impact of their decisions on the next stages of the project life cycle and adopt new management practices using scenarios. The published research is valuable for key stakeholders as it is practically focused on ascertaining essential information about the project progress.
Performance of remote target pointing hand movements in a 3D environment.
Lee, Yung-Hui; Wu, Shu-Kai; Liu, Yan-Pin
2013-06-01
In this study, we investigated and modeled the performance of target pointing hand movements in a hand free, touchless 3D environment. The targets had different positions, sizes and distances. Performance measurements included total movement time and movement trajectories. The total movement time consisted of a "primary submovement time" and a "secondary submovement time". Results indicated that the total movement time for targets with depth in the upper part of the spherical framework (3.10s) was shorter than for targets without depth (3.79s). The time for targets without depth in the lower part of the spherical framework (2.94s) was shorter than for targets with depth (3.57s). Within a 3D perspective display, the perception of distance and size depends on its depth position. Our results confirmed the adequacy of the 3D information in the display by showing the longest total movement time was observed for the reach of the "forward" target (3.94s). Fitts' model explained the total movement time (for targets without depth r(2)=.72; for targets with depth r(2)=.72). This study showed that participants navigated the 3D space naturally and could move the cursor using both sequential a axis moving strategy and a straight line moving strategy. Real-life applications of the proposed method include interface design for 3D perspective displays and hand movements in 3D environments. Copyright © 2013. Published by Elsevier B.V.
A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD
Directory of Open Access Journals (Sweden)
J. Tang
2017-09-01
Full Text Available Leaves falling gently or fluttering are common phenomenon in nature scenes. The authenticity of leaves falling plays an important part in the dynamic modeling of natural scenes. The leaves falling model has a widely applications in the field of animation and virtual reality. We propose a novel modeling method of fluttering leaves based on point cloud in this paper. According to the shape, the weight of leaves and the wind speed, three basic trajectories of leaves falling are defined, which are the rotation falling, the roll falling and the screw roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy the needs of real-time in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.
Barnes, Jerry D.
1982-01-01
Approved for public release; distribution is unlimited This thesis examines the questions of user requirements, design considerations, and network environment for a local area network Terminal Management function in support of the Naval Supply Systems Command's Stock Point Logistics Integrated Communications Environment (SPLICE). Criteria are developed from this examination. They include process-process communication, virtual terminal, and user defined screen capabilities as well as a ne...
Directory of Open Access Journals (Sweden)
Mroczka Janusz
2014-12-01
Full Text Available Photovoltaic panels have a non-linear current-voltage characteristics to produce the maximum power at only one point called the maximum power point. In the case of the uniform illumination a single solar panel shows only one maximum power, which is also the global maximum power point. In the case an irregularly illuminated photovoltaic panel many local maxima on the power-voltage curve can be observed and only one of them is the global maximum. The proposed algorithm detects whether a solar panel is in the uniform insolation conditions. Then an appropriate strategy of tracking the maximum power point is taken using a decision algorithm. The proposed method is simulated in the environment created by the authors, which allows to stimulate photovoltaic panels in real conditions of lighting, temperature and shading.
An Automated Fixed-Point Optimization Tool in MATLAB XSG/SynDSP Environment
Wang, Cheng C.; Changchun Shi; Brodersen, Robert W.; Dejan Marković
2011-01-01
This paper presents an automated tool for floating-point to fixed-point conversion. The tool is based on previous work that was built in MATLAB/Simulink environment and Xilinx System Generator support. The tool is now extended to include Synplify DSP blocksets in a seamless way from the users' view point. In addition to FPGA area estimation, the tool now also includes ASIC area estimation for end-users who choose the ASIC flow. The tool minimizes hardware cost subject to mean-squared quantiza...
Visual capture and understanding of hand pointing actions in a 3-D environment.
Colombo, C; Del Bimbo, A; Valli, A
2003-01-01
We present a nonintrusive system based on computer vision for human-computer interaction in three-dimensional (3-D) environments controlled by hand pointing gestures. Users are allowed to walk around in a room and manipulate information displayed on its walls by using their own hands as pointing devices. Once captured and tracked in real-time using stereo vision, hand pointing gestures are remapped onto the current point of interest, thus reproducing in an advanced interaction scenario the "drag and click" behavior of traditional mice. The system, called PointAt (patent pending), enjoys a careful modeling of both user and optical subsystem, and visual algorithms for self-calibration and adaptation to both user peculiarities and environmental changes. The concluding sections provide an insight into system characteristics, performance, and relevance for real applications.
DEFF Research Database (Denmark)
Bey, Niki
2000-01-01
to three essential assessment steps, the method enables rough environmental evaluations and supports in this way material- and process-related decision-making in the early stages of design. In its overall structure, the Oil Point Method is related to Life Cycle Assessment - except for two main differences...... industrial activity world-wide - makes it increasingly evident that our current way of life is not sustainable. A major contribution of society's negative impact on the environment is related to industrial products and the processes during their life cycle, from raw materials extraction over manufacturing...... of environmental evaluation and only approximate information about the product and its life cycle. This dissertation addresses this challenge in presenting a method, which is tailored to these requirements of designers - the Oil Point Method (OPM). In providing environmental key information and confining itself...
Robust maximum power point tracking method for photovoltaic cells
Energy Technology Data Exchange (ETDEWEB)
Chu, C.C.; Chen, C.L. [National Cheng Kung Univ., Taiwan (China). Dept. of Aeronautics and Astronautics
2007-07-01
This paper described a peak power tracking method that uses a sliding mode control system. The method was designed to track the maximum peak power (MPP) of photovoltaic (PV) applications. The performance of the controller was demonstrated through a series of numerical studies that simulated a PV module designed to deliver a maximum of 60 W of power. An approaching control approach was used to guarantee that system states reached the PV surface and produced MPP consistently. A state space averaging method was used to represent system dynamics. The proposed control law ensured that output voltage remained higher than input voltage. The PV model and proposed approach were modelled and evaluated in relation to robustness to irradiance, temperature, and load. The study demonstrated that the sliding mode approach maintained maximum power output while remaining robust in various external conditions. The system attained steady state irradiance levels within an order of milliseconds. The system was also tested under rapid changes of temperature, where the sliding mode approach was able to maintain output at optimum points. It was concluded that the approach almost reaches the theoretical maximum power of known irradiance and temperature. 20 refs., 1 tab., 9 figs.
Estimates of Genotype x Environment Interactions and Heritability of Black Point in Durum Wheat
Directory of Open Access Journals (Sweden)
Hasan Hasan KILIÇ
2009-12-01
Full Text Available Experiments were carried out in four different locations with 14 durum wheat genotypes in two successful seasons of 1999- 2000 and 2000-2001. Black point disease of genotypes was evaluated by interactions of genotypes and environment as well as heritability (h2. It was found that black point disease affected differently in different locations and growing seasons. This indicates that the genotypes have different adaptation ability for traits studied in different locations. Heritability rate that variance analyzes accepted means squares calculated was found as phenotypic variance rate of genotypic variance was found as 49%. Variance of genotype x location x year was bigger than other variance components. Genotype x year variance was bigger than genotype x location variance too. The heritability of black point disease was founded moderate. In addition to one of factors on the black point disease genotype also environment x genotype interactions were found effective. According to evaluation of black point disease, the highest value was obtained from ‘Sorgül’ (2.7%, ‘Dicle-74’ (2.56% and ‘Gidara-II’ (2.32% varieties; the least value was obtained from ‘Balcali-2000’ variety (0.64%. Alternaria spp., Phoma sp, Fusarium spp., Helminthosporium spp., and Stemphylium spp., fungi were isolated from the grain affected by black point diseases.
Directory of Open Access Journals (Sweden)
Julia Sanchez
2017-09-01
Full Text Available Acquiring 3D data with LiDAR systems involves scanning multiple scenes from different points of view. In actual systems, the ICP algorithm (Iterative Closest Point is commonly used to register the acquired point clouds together to form a unique one. However, this method faces local minima issues and often needs a coarse initial alignment to converge to the optimum. This paper develops a new method for registration adapted to indoor environments and based on structure priors of such scenes. Our method works without odometric data or physical targets. The rotation and translation of the rigid transformation are computed separately, using, respectively, the Gaussian image of the point clouds and a correlation of histograms. To evaluate our algorithm on challenging registration cases, two datasets were acquired and are available for comparison with other methods online. The evaluation of our algorithm on four datasets against six existing methods shows that the proposed method is more robust against sampling and scene complexity. Moreover, the time performances enable a real-time implementation.
A Hybrid Maximum Power Point Tracking Method for Automobile Exhaust Thermoelectric Generator
Quan, Rui; Zhou, Wei; Yang, Guangyou; Quan, Shuhai
2017-05-01
To make full use of the maximum output power of automobile exhaust thermoelectric generator (AETEG) based on Bi2Te3 thermoelectric modules (TEMs), taking into account the advantages and disadvantages of existing maximum power point tracking methods, and according to the output characteristics of TEMs, a hybrid maximum power point tracking method combining perturb and observe (P&O) algorithm, quadratic interpolation and constant voltage tracking method was put forward in this paper. Firstly, it searched the maximum power point with P&O algorithms and a quadratic interpolation method, then, it forced the AETEG to work at its maximum power point with constant voltage tracking. A synchronous buck converter and controller were implemented in the electric bus of the AETEG applied in a military sports utility vehicle, and the whole system was modeled and simulated with a MATLAB/Simulink environment. Simulation results demonstrate that the maximum output power of the AETEG based on the proposed hybrid method is increased by about 3.0% and 3.7% compared with that using only the P&O algorithm and the quadratic interpolation method, respectively. The shorter tracking time is only 1.4 s, which is reduced by half compared with that of the P&O algorithm and quadratic interpolation method, respectively. The experimental results demonstrate that the tracked maximum power is approximately equal to the real value using the proposed hybrid method,and it can preferentially deal with the voltage fluctuation of the AETEG with only P&O algorithm, and resolve the issue that its working point can barely be adjusted only with constant voltage tracking when the operation conditions change.
Phase-integral method allowing nearlying transition points
Fröman, Nanny
1996-01-01
The efficiency of the phase-integral method developed by the present au thors has been shown both analytically and numerically in many publica tions. With the inclusion of supplementary quantities, closely related to new Stokes constants and obtained with the aid of comparison equation technique, important classes of problems in which transition points may approach each other become accessible to accurate analytical treatment. The exposition in this monograph is of a mathematical nature but has important physical applications, some examples of which are found in the adjoined papers. Thus, we would like to emphasize that, although we aim at mathematical rigor, our treatment is made primarily with physical needs in mind. To introduce the reader into the background of this book, we start by de scribing the phase-integral approximation of arbitrary order generated from an unspecified base function. This is done in Chapter 1, which is reprinted, after minor changes, from a review article. Chapter 2 is the re...
Oliva, Marc; Ruiz-Fernández, Jesús
2015-04-01
Elephant Point constitutes an ice-free environment of only 1.16 km2 in the south-western corner of Livingston Island (South Shetland Islands, Antarctica). In January 2014 we conducted a detailed geomorphological mapping in situ, examining the distribution of processes and landforms in Elephant Point. Four main geomorphological environments were identified: proglacial area, moraine system, bedrock plateaus and marine terraces. The ice cap covering most part of the western half of this island has significantly retreated during the last decades in parallel to the accelerated warming trend recorded in the Antarctic Peninsula. Between 1956 and 2010 this rapid retreat has exposed 17.3% of the present-day land surface in Elephant Point. Two of these geomorphological units are located in this new ice-free area: a polygenic moraine stretching from the western to the eastern edges of the peninsula and a relatively flat proglacial environment. The glacier sat next to the northern slope of the moraine in 1956, but the retreat of the Rotch dome glacier during the last decades left these environments free of glacier ice. Following the deglaciation, the postglacial dynamics in these areas showed the characteristic response of paraglacial systems. Very different geomorphological processes occur today in the northern and southern slopes of the moraine, which is related to the different stage of paraglacial adjustment in both sides. The southern slope shows a low to moderate activity of slope processes operating on coarser sediments that have built pronival ramparts, debris flows and alluvial fans. By contrast, mass wasting processes are very active in the northern slope, which is composed of fine-grained unconsolidated sediments. Here, ice-rich permafrost has been observed in slumps degrading the moraine. The sediments of the moraine are being mobilized down-slope in large amounts by landslides and slumps. Up to 9.6% of the surface of the moraine is affected by retrogressive
Incorporating the viewer's point of regard (POR) in gaze-contingent virtual environments
Duchowski, Andrew T.
1998-04-01
Awareness of the viewer's gaze position in a virtual environment can lead to significant savings in scene processing if fine detail information is presented `just in time' only at locations corresponding to the participant's gaze, i.e., in a gaze-contingent manner. This paper describes the evolution of a gaze-contingent video display system, `gcv'. Gcv is a multithreaded, real-time program which displays digital video and simultaneously tracks a subject's eye movements. Treating the eye tracker as an ordinary positional sensor, gcv's architecture shares many similarities with contemporary virtual environment system designs. Performance of the present system is evaluated in terms of (1) eye tracker sampling latency and video transfer rates, and (2) measured eye tracker accuracy and slippage. The programming strategies developed for incorporating the viewer's point-of-regard are independent of proprietary eye tracking equipment and are applicable to general gaze- contingent virtual environment designs.
Huysmans, Marijke; Dassargues, Alain
2014-05-01
In heterogeneous environments with complex geological structures, analysis of pumping and tracer tests is often problematic. Standard interpretation methods do not account for heterogeneity or simulate this heterogeneity introducing empirical zonation of the calibrated parameters or using variogram-based geostatistical techniques that are often not able to describe realistic heterogeneity in complex geological environments where e.g. sedimentary structures, multi-facies deposits, structures with large connectivity or curvi-linear structures can be present. Multiple-point geostatistics aims to overcome the limitations of the variogram and can be applied in different research domains to simulate heterogeneity in complex environments. In this project, multiple-point geostatistics is applied to the interpretation of pumping tests and a tracer test in an actual case of a sandy heterogeneous aquifer. This study allows to deduce the main advantages and disadvantages of this technique compared to variogram-based techniques for interpretation of pumping tests and tracer tests. A pumping test and a tracer test were performed in the same sandbar deposit consisting of cross-bedded units composed of materials with different grain sizes and hydraulic conductivities. The pumping test and the tracer test are analyzed with a local 3D groundwater model in which fine-scale sedimentary heterogeneity is modelled using multiple-point geostatistics. To reduce CPU and RAM requirements of the multiple-point geostatistical simulation steps, edge properties indicating the presence of irregularly-shaped surfaces are directly simulated. Results show that for the pumping test as well as for the tracer test, incorporating heterogeneity results in a better fit between observed and calculated drawdowns/concentrations. The improvement of the fit is however not as large as expected. In this paper, the reasons for these somewhat unsatisfactory results are explored and recommendations for future
Building an Enhanced Vocabulary of the Robot Environment with a Ceiling Pointing Camera
Rituerto, Alejandro; Andreasson, Henrik; Murillo, Ana C.; Lilienthal, Achim; Guerrero, José Jesús
2016-01-01
Mobile robots are of great help for automatic monitoring tasks in different environments. One of the first tasks that needs to be addressed when creating these kinds of robotic systems is modeling the robot environment. This work proposes a pipeline to build an enhanced visual model of a robot environment indoors. Vision based recognition approaches frequently use quantized feature spaces, commonly known as Bag of Words (BoW) or vocabulary representations. A drawback using standard BoW approaches is that semantic information is not considered as a criteria to create the visual words. To solve this challenging task, this paper studies how to leverage the standard vocabulary construction process to obtain a more meaningful visual vocabulary of the robot work environment using image sequences. We take advantage of spatio-temporal constraints and prior knowledge about the position of the camera. The key contribution of our work is the definition of a new pipeline to create a model of the environment. This pipeline incorporates (1) tracking information to the process of vocabulary construction and (2) geometric cues to the appearance descriptors. Motivated by long term robotic applications, such as the aforementioned monitoring tasks, we focus on a configuration where the robot camera points to the ceiling, which captures more stable regions of the environment. The experimental validation shows how our vocabulary models the environment in more detail than standard vocabulary approaches, without loss of recognition performance. We show different robotic tasks that could benefit of the use of our visual vocabulary approach, such as place recognition or object discovery. For this validation, we use our publicly available data-set. PMID:27070607
Building an Enhanced Vocabulary of the Robot Environment with a Ceiling Pointing Camera
Directory of Open Access Journals (Sweden)
Alejandro Rituerto
2016-04-01
Full Text Available Mobile robots are of great help for automatic monitoring tasks in different environments. One of the first tasks that needs to be addressed when creating these kinds of robotic systems is modeling the robot environment. This work proposes a pipeline to build an enhanced visual model of a robot environment indoors. Vision based recognition approaches frequently use quantized feature spaces, commonly known as Bag of Words (BoW or vocabulary representations. A drawback using standard BoW approaches is that semantic information is not considered as a criteria to create the visual words. To solve this challenging task, this paper studies how to leverage the standard vocabulary construction process to obtain a more meaningful visual vocabulary of the robot work environment using image sequences. We take advantage of spatio-temporal constraints and prior knowledge about the position of the camera. The key contribution of our work is the definition of a new pipeline to create a model of the environment. This pipeline incorporates (1 tracking information to the process of vocabulary construction and (2 geometric cues to the appearance descriptors. Motivated by long term robotic applications, such as the aforementioned monitoring tasks, we focus on a configuration where the robot camera points to the ceiling, which captures more stable regions of the environment. The experimental validation shows how our vocabulary models the environment in more detail than standard vocabulary approaches, without loss of recognition performance. We show different robotic tasks that could benefit of the use of our visual vocabulary approach, such as place recognition or object discovery. For this validation, we use our publicly available data-set.
Building an Enhanced Vocabulary of the Robot Environment with a Ceiling Pointing Camera.
Rituerto, Alejandro; Andreasson, Henrik; Murillo, Ana C; Lilienthal, Achim; Guerrero, José Jesús
2016-04-07
Mobile robots are of great help for automatic monitoring tasks in different environments. One of the first tasks that needs to be addressed when creating these kinds of robotic systems is modeling the robot environment. This work proposes a pipeline to build an enhanced visual model of a robot environment indoors. Vision based recognition approaches frequently use quantized feature spaces, commonly known as Bag of Words (BoW) or vocabulary representations. A drawback using standard BoW approaches is that semantic information is not considered as a criteria to create the visual words. To solve this challenging task, this paper studies how to leverage the standard vocabulary construction process to obtain a more meaningful visual vocabulary of the robot work environment using image sequences. We take advantage of spatio-temporal constraints and prior knowledge about the position of the camera. The key contribution of our work is the definition of a new pipeline to create a model of the environment. This pipeline incorporates (1) tracking information to the process of vocabulary construction and (2) geometric cues to the appearance descriptors. Motivated by long term robotic applications, such as the aforementioned monitoring tasks, we focus on a configuration where the robot camera points to the ceiling, which captures more stable regions of the environment. The experimental validation shows how our vocabulary models the environment in more detail than standard vocabulary approaches, without loss of recognition performance. We show different robotic tasks that could benefit of the use of our visual vocabulary approach, such as place recognition or object discovery. For this validation, we use our publicly available data-set.
Nguyen, Hoang Long; Belton, David; Helmholz, Petra
2016-06-01
The demand for accurate spatial data has been increasing rapidly in recent years. Mobile laser scanning (MLS) systems have become a mainstream technology for measuring 3D spatial data. In a MLS point cloud, the point clouds densities of captured point clouds of interest features can vary: they can be sparse and heterogeneous or they can be dense. This is caused by several factors such as the speed of the carrier vehicle and the specifications of the laser scanner(s). The MLS point cloud data needs to be processed to get meaningful information e.g. segmentation can be used to find meaningful features (planes, corners etc.) that can be used as the inputs for many processing steps (e.g. registration, modelling) that are more difficult when just using the point cloud. Planar features are dominating in manmade environments and they are widely used in point clouds registration and calibration processes. There are several approaches for segmentation and extraction of planar objects available, however the proposed methods do not focus on properly segment MLS point clouds automatically considering the different point densities. This research presents the extension of the segmentation method based on planarity of the features. This proposed method was verified using both simulated and real MLS point cloud datasets. The results show that planar objects in MLS point clouds can be properly segmented and extracted by the proposed segmentation method.
Directory of Open Access Journals (Sweden)
H. L. Nguyen
2016-06-01
Full Text Available The demand for accurate spatial data has been increasing rapidly in recent years. Mobile laser scanning (MLS systems have become a mainstream technology for measuring 3D spatial data. In a MLS point cloud, the point clouds densities of captured point clouds of interest features can vary: they can be sparse and heterogeneous or they can be dense. This is caused by several factors such as the speed of the carrier vehicle and the specifications of the laser scanner(s. The MLS point cloud data needs to be processed to get meaningful information e.g. segmentation can be used to find meaningful features (planes, corners etc. that can be used as the inputs for many processing steps (e.g. registration, modelling that are more difficult when just using the point cloud. Planar features are dominating in manmade environments and they are widely used in point clouds registration and calibration processes. There are several approaches for segmentation and extraction of planar objects available, however the proposed methods do not focus on properly segment MLS point clouds automatically considering the different point densities. This research presents the extension of the segmentation method based on planarity of the features. This proposed method was verified using both simulated and real MLS point cloud datasets. The results show that planar objects in MLS point clouds can be properly segmented and extracted by the proposed segmentation method.
Directory of Open Access Journals (Sweden)
Kong Minxiu
2016-01-01
Full Text Available Optimal point-to-point motion planning of flexible parallel manipulator was investigated in this paper and the 3RRR parallel manipulator is taken as the object. First, an optimal point-to-point motion planning problem was constructed with the consideration of the rigid-flexible coupling dynamic model and actuator dynamics. Then, the multi-interval Legendre–Gauss–Radau (LGR pseudospectral method was introduced to transform the optimal control problem into Nonlinear Programming (NLP problem. At last, the simulation and experiment were carried out on the flexible parallel manipulator. Compared with the line motion of quantic polynomial planning, the proposed method could constrain the flexible displacement amplitude and suppress the residue vibration.
Simulation Method of Cumulative Flow without of Axial Stagnation Point
Directory of Open Access Journals (Sweden)
I. V. Minin
2015-01-01
Full Text Available The paper describes a developed analytical model of non-stationary formation of a cumulative jet without axial stagnation point. It shows that it is possible to control the weight, size, speed, and momentum of the jet with the parameters, which are not achievable in the classical mode of jet formation. Considered jet formation principle can be used to conduct laboratory simulation of astro-like plasma jets.
Comparing Single-Point and Multi-point Calibration Methods in Modulated DSC
Energy Technology Data Exchange (ETDEWEB)
Van Buskirk, Caleb Griffith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-06-14
Heat capacity measurements for High Density Polyethylene (HDPE) and Ultra-high Molecular Weight Polyethylene (UHMWPE) were performed using Modulated Differential Scanning Calorimetry (mDSC) over a wide temperature range, -70 to 115 °C, with a TA Instruments Q2000 mDSC. The default calibration method for this instrument involves measuring the heat capacity of a sapphire standard at a single temperature near the middle of the temperature range of interest. However, this method often fails for temperature ranges that exceed a 50 °C interval, likely because of drift or non-linearity in the instrument's heat capacity readings over time or over the temperature range. Therefore, in this study a method was developed to calibrate the instrument using multiple temperatures and the same sapphire standard.
Development of a Multi-Point Microwave Interferometry (MPMI) Method
Energy Technology Data Exchange (ETDEWEB)
Specht, Paul Elliott [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Cooper, Marcia A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jilek, Brook Anton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-09-01
A multi-point microwave interferometer (MPMI) concept was developed for non-invasively tracking a shock, reaction, or detonation front in energetic media. Initially, a single-point, heterodyne microwave interferometry capability was established. The design, construction, and verification of the single-point interferometer provided a knowledge base for the creation of the MPMI concept. The MPMI concept uses an electro-optic (EO) crystal to impart a time-varying phase lag onto a laser at the microwave frequency. Polarization optics converts this phase lag into an amplitude modulation, which is analyzed in a heterodyne interfer- ometer to detect Doppler shifts in the microwave frequency. A version of the MPMI was constructed to experimentally measure the frequency of a microwave source through the EO modulation of a laser. The successful extraction of the microwave frequency proved the underlying physical concept of the MPMI design, and highlighted the challenges associated with the longer microwave wavelength. The frequency measurements made with the current equipment contained too much uncertainty for an accurate velocity measurement. Potential alterations to the current construction are presented to improve the quality of the measured signal and enable multiple accurate velocity measurements.
Esfandiar, Habib; Habibnejad Korayem, Moharam
2017-01-01
This paper aims at planning an optimal point to point path for a flexible manipulator under large deformation. For this purpose, the researchers use a direct method and meta-heuristic optimization process. In this paper, the maximum load carried by the manipulator and the minimum transmission time are taken as objective functions of the optimization process to get optimal path profiles. Kinematic constraints, the maximum velocity and acceleration, the dynamic constraint of the maximum torque ...
Novel method for rail wear inspection based on the sparse iterative closest point method
Yi, Bing; Yang, Yue; Yi, Qian; Dai, Wanlin; Li, Xiongbing
2017-12-01
As trains become progressively faster, it is becoming imperative to automatically and precisely inspect the rail profile of high-speed railways to ensure their safety and reliability. To realize this, a new method based on the sparse iterative closest point method is proposed in this study. Moreover, the noncontact method is mainly used for convenience and practicality. First, a line laser-based measurement system is constructed, and the position of the line laser is calculated to ensure that both the top and sides of the rail are in range of the line laser. Then, the measured data of the rail profile are separated into a baseline part and worn part. The baseline is involved in registering the measured data and reference profile by the sparse iterative closest point method. The worn part is then transformed by the same matrix of the baseline part. Finally, the Hausdorff distance is introduced to measure the distance between the wear model and reference model. The experimental results demonstrate the effectiveness and efficiency of the proposed method.
Eubanks-Carter, Catherine; Gorman, Bernard S; Muran, J Christopher
2012-01-01
Analysis of change points in psychotherapy process could increase our understanding of mechanisms of change. In particular, naturalistic change point detection methods that identify turning points or breakpoints in time series data could enhance our ability to identify and study alliance ruptures and resolutions. This paper presents four categories of statistical methods for detecting change points in psychotherapy process: criterion-based methods, control chart methods, partitioning methods, and regression methods. Each method's utility for identifying shifts in the alliance is illustrated using a case example from the Beth Israel Psychotherapy Research program. Advantages and disadvantages of the various methods are discussed.
Improved fixed point iterative method for blade element momentum computations
DEFF Research Database (Denmark)
Sun, Zhenye; Shen, Wen Zhong; Chen, Jin
2017-01-01
, the convergence ability of the iterative method will be greatly enhanced. Numerical tests have been performed under different combinations of local tip speed ratio, local solidity, local twist and airfoil aerodynamic data. Results show that the simple iterative methods have a good convergence ability which...... to the physical solution, especially for the locations near the blade tip and root where the failure rate of the iterative method is high. The stability and accuracy of aerodynamic calculations and optimizations are greatly reduced due to this problem. The intrinsic mechanisms leading to convergence problems...
Novel Ratio Subtraction and Isoabsorptive Point Methods for ...
African Journals Online (AJOL)
Zeany BA. Three different methods for determination of binary mixture of Amlodipine and Atorvastatin using Dual. Wavelength Spectrophotometry. Spectrochim Acta A. Mol Biomol Spectrosc. 2012; 104: 70-76. 3. Prasad C, Gautam A, Bharadwaj V, ...
Unemployment estimation: Spatial point referenced methods and models
Pereira, Soraia
2017-06-26
Portuguese Labor force survey, from 4th quarter of 2014 onwards, started geo-referencing the sampling units, namely the dwellings in which the surveys are carried. This opens new possibilities in analysing and estimating unemployment and its spatial distribution across any region. The labor force survey choose, according to an preestablished sampling criteria, a certain number of dwellings across the nation and survey the number of unemployed in these dwellings. Based on this survey, the National Statistical Institute of Portugal presently uses direct estimation methods to estimate the national unemployment figures. Recently, there has been increased interest in estimating these figures in smaller areas. Direct estimation methods, due to reduced sampling sizes in small areas, tend to produce fairly large sampling variations therefore model based methods, which tend to
Estimation of focusing operators using the Common Focal Point method
Bolte, J.F.B.
2003-01-01
The objective of this PhD project is to present a data-driven method to determine one-way focusing operators. Focusing operators are the input for imaging a subsurface structure from measurements at the surface. They can be used in imaging the earth's interior, and also in non-destructive imaging of
Surveying method of points on axis of cyllindrical pipe
Directory of Open Access Journals (Sweden)
Xu Jinjun
2012-02-01
Full Text Available To determine axes line of pipe or structural building which has a round normal section is recurrent in actual building works. The verticality measurement of chimney, TV tower and the deflection measurement of pipe are important to find out the deviation from design in construction, installation and completion. This paper discussed the measurement technique and data processing method for axes line of round normal section based on reflectorless distance measure. Simulation and practical results showed its feasibility and high efficiency.
A RECOGNITION METHOD FOR AIRPLANE TARGETS USING 3D POINT CLOUD DATA
Directory of Open Access Journals (Sweden)
M. Zhou
2012-07-01
Full Text Available LiDAR is capable of obtaining three dimension coordinates of the terrain and targets directly and is widely applied in digital city, emergent disaster mitigation and environment monitoring. Especially because of its ability of penetrating the low density vegetation and canopy, LiDAR technique has superior advantages in hidden and camouflaged targets detection and recognition. Based on the multi-echo data of LiDAR, and combining the invariant moment theory, this paper presents a recognition method for classic airplanes (even hidden targets mainly under the cover of canopy using KD-Tree segmented point cloud data. The proposed algorithm firstly uses KD-tree to organize and manage point cloud data, and makes use of the clustering method to segment objects, and then the prior knowledge and invariant recognition moment are utilized to recognise airplanes. The outcomes of this test verified the practicality and feasibility of the method derived in this paper. And these could be applied in target measuring and modelling of subsequent data processing.
Hamilton, H. H., II
1982-01-01
An approximate method for calculating heating rates at general three dimensional stagnation points is presented. The application of the method for making stagnation point heating calculations during atmospheric entry is described. Comparisons with results from boundary layer calculations indicate that the method should provide an accurate method for engineering type design and analysis applications.
Wang, D.; Hollaus, M.; Pfeifer, N.
2017-09-01
Classification of wood and leaf components of trees is an essential prerequisite for deriving vital tree attributes, such as wood mass, leaf area index (LAI) and woody-to-total area. Laser scanning emerges to be a promising solution for such a request. Intensity based approaches are widely proposed, as different components of a tree can feature discriminatory optical properties at the operating wavelengths of a sensor system. For geometry based methods, machine learning algorithms are often used to separate wood and leaf points, by providing proper training samples. However, it remains unclear how the chosen machine learning classifier and features used would influence classification results. To this purpose, we compare four popular machine learning classifiers, namely Support Vector Machine (SVM), Na¨ıve Bayes (NB), Random Forest (RF), and Gaussian Mixture Model (GMM), for separating wood and leaf points from terrestrial laser scanning (TLS) data. Two trees, an Erytrophleum fordii and a Betula pendula (silver birch) are used to test the impacts from classifier, feature set, and training samples. Our results showed that RF is the best model in terms of accuracy, and local density related features are important. Experimental results confirmed the feasibility of machine learning algorithms for the reliable classification of wood and leaf points. It is also noted that our studies are based on isolated trees. Further tests should be performed on more tree species and data from more complex environments.
Directory of Open Access Journals (Sweden)
D. Wang
2017-09-01
Full Text Available Classification of wood and leaf components of trees is an essential prerequisite for deriving vital tree attributes, such as wood mass, leaf area index (LAI and woody-to-total area. Laser scanning emerges to be a promising solution for such a request. Intensity based approaches are widely proposed, as different components of a tree can feature discriminatory optical properties at the operating wavelengths of a sensor system. For geometry based methods, machine learning algorithms are often used to separate wood and leaf points, by providing proper training samples. However, it remains unclear how the chosen machine learning classifier and features used would influence classification results. To this purpose, we compare four popular machine learning classifiers, namely Support Vector Machine (SVM, Na¨ıve Bayes (NB, Random Forest (RF, and Gaussian Mixture Model (GMM, for separating wood and leaf points from terrestrial laser scanning (TLS data. Two trees, an Erytrophleum fordii and a Betula pendula (silver birch are used to test the impacts from classifier, feature set, and training samples. Our results showed that RF is the best model in terms of accuracy, and local density related features are important. Experimental results confirmed the feasibility of machine learning algorithms for the reliable classification of wood and leaf points. It is also noted that our studies are based on isolated trees. Further tests should be performed on more tree species and data from more complex environments.
Passive Methods as a Solution for Improving Indoor Environments
Orosa, José A
2012-01-01
There are many aspects to consider when evaluating or improving an indoor environment; thermal comfort, energy saving, preservation of materials, hygiene and health are all key aspects which can be improved by passive methods of environmental control. Passive Methods as a Solution for Improving Indoor Environments endeavours to fill the lack of analysis in this area by using over ten years of research to illustrate the effects of methods such as thermal inertia and permeable coverings; for example, the use of permeable coverings is a well known passive method, but its effects and ways to improve indoor environments have been rarely analyzed. Passive Methods as a Solution for Improving Indoor Environments includes both software simulations and laboratory and field studies. Through these, the main parameters that characterize the behavior of internal coverings are defined. Furthermore, a new procedure is explained in depth which can be used to identify the real expected effects of permeable coverings such ...
Comparison of the Selected State-Of-The-Art 3D Indoor Scanning and Point Cloud Generation Methods
Directory of Open Access Journals (Sweden)
Ville V. Lehtola
2017-08-01
Full Text Available Accurate three-dimensional (3D data from indoor spaces are of high importance for various applications in construction, indoor navigation and real estate management. Mobile scanning techniques are offering an efficient way to produce point clouds, but with a lower accuracy than the traditional terrestrial laser scanning (TLS. In this paper, we first tackle the problem of how the quality of a point cloud should be rigorously evaluated. Previous evaluations typically operate on some point cloud subset, using a manually-given length scale, which would perhaps describe the ranging precision or the properties of the environment. Instead, the metrics that we propose perform the quality evaluation to the full point cloud and over all of the length scales, revealing the method precision along with some possible problems related to the point clouds, such as outliers, over-completeness and misregistration. The proposed methods are used to evaluate the end product point clouds of some of the latest methods. In detail, point clouds are obtained from five commercial indoor mapping systems, Matterport, NavVis, Zebedee, Stencil and Leica Pegasus: Backpack, and three research prototypes, Aalto VILMA , FGI Slammer and the Würzburg backpack. These are compared against survey-grade TLS point clouds captured from three distinct test sites that each have different properties. Based on the presented experimental findings, we discuss the properties of the proposed metrics and the strengths and weaknesses of the above mapping systems and then suggest directions for future research.
Silva, Isabella M M; Almeida, R C C; Alves, M A O; Almeida, P F
2003-03-25
Critical control points (CCPs) associated with Minas Frescal cheese (a Brazilian soft white cheese, eaten fresh) processing in two dairy factories were determined using flow diagrams and microbiological tests for detection of Listeria monocytogenes and other species of Listeria. A total of 218 samples were collected along the production line and environment. The CCPs identified were reception of raw milk, pasteurization, coagulation and storage. Thirteen samples were positive for Listeria; 9 samples were Listeria innocua, 2 were Listeria grayi and 2 were L. monocytogenes. In factory A, Listeria was found in 50% of raw milk samples, 33.3% of curd samples, 16.7% of pasteurized milk samples, 16.7% of cheese samples and 25% of rubber pipes used to transport the whey. The microorganism was not obtained from environmental samples in this plant. In factory B, Listeria was found in one sample of raw milk (16.7%) and in three samples of environment (17.6%) and L. monocytogenes was obtained from raw milk (16.7%) and the floor of the cheese refrigeration room (14.3%). Two serotypes, 4b and 1/2a, were observed among the strains of L. monocytogenes isolated, both which are frequently involved in outbreaks of food-borne listeriosis and sporadic cases of the disease all over the world.
Directory of Open Access Journals (Sweden)
Florin POPESCU
2017-12-01
Full Text Available Early warning system (EWS based on a reliable forecasting process has become a critical component of the management of large complex industrial projects in the globalized transnational environment. The purpose of this research is to critically analyze the forecasting methods from the point of view of early warning, choosing those useful for the construction of EWS. This research addresses complementary techniques, using Bayesian Networks, which addresses both uncertainties and causality in project planning and execution, with the goal of generating early warning signals for project managers. Even though Bayesian networks have been widely used in a range of decision-support applications, their application as early warning systems for project management is still new.
National Research Council Canada - National Science Library
Liu, Jian; Liang, Huawei; Wang, Zhiling; Chen, Xiangcheng
2015-01-01
.... A framework for the online modeling of the driving environment using a multi-beam LIDAR, i.e., a Velodyne HDL-64E LIDAR, which describes the 3D environment in the form of a point cloud, is reported in this article...
A Fixed-Point of View on Gradient Methods for Big Data
Directory of Open Access Journals (Sweden)
Alexander Jung
2017-09-01
Full Text Available Interpreting gradient methods as fixed-point iterations, we provide a detailed analysis of those methods for minimizing convex objective functions. Due to their conceptual and algorithmic simplicity, gradient methods are widely used in machine learning for massive data sets (big data. In particular, stochastic gradient methods are considered the de-facto standard for training deep neural networks. Studying gradient methods within the realm of fixed-point theory provides us with powerful tools to analyze their convergence properties. In particular, gradient methods using inexact or noisy gradients, such as stochastic gradient descent, can be studied conveniently using well-known results on inexact fixed-point iterations. Moreover, as we demonstrate in this paper, the fixed-point approach allows an elegant derivation of accelerations for basic gradient methods. In particular, we will show how gradient descent can be accelerated by a fixed-point preserving transformation of an operator associated with the objective function.
Zhao, Yu; Shi, Chen-Xiao; Kwon, Ki-Chul; Piao, Yan-Ling; Piao, Mei-Lan; Kim, Nam
2018-03-01
We propose a fast calculation method for a computer-generated hologram (CGH) of real objects that uses a point cloud gridding method. The depth information of the scene is acquired using a depth camera and the point cloud model is reconstructed virtually. Because each point of the point cloud is distributed precisely to the exact coordinates of each layer, each point of the point cloud can be classified into grids according to its depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain a CGH. The computational complexity is reduced dramatically in comparison with conventional methods. The feasibility of the proposed method was confirmed by numerical and optical experiments.
Interior Point Method Evaluation for Reactive Power Flow Optimization in the Power System
Directory of Open Access Journals (Sweden)
Zbigniew Lubośny
2013-03-01
Full Text Available The paper verifies the performance of an interior point method in reactive power flow optimization in the power system. The study was conducted on a 28 node CIGRE system, using the interior point method optimization procedures implemented in Power Factory software.
Detection of Dew-Point by substantial Raman Band Frequency Jumps (A new Method)
DEFF Research Database (Denmark)
Hansen, Susanne Brunsgaard; Berg, Rolf W.; Stenby, Erling Halfdan
Detection of Dew-Point by substantial Raman Band Frequency Jumps (A new Method). See poster at http://www.kemi.dtu.dk/~ajo/rolf/jumps.pdf......Detection of Dew-Point by substantial Raman Band Frequency Jumps (A new Method). See poster at http://www.kemi.dtu.dk/~ajo/rolf/jumps.pdf...
Experimental Method for Determination of Self-Heating at the Point of Measurement
Sestan, D.; Zvizdic, D.; Grgec-Bermanec, L.
2017-09-01
This paper presents a new experimental method and algorithm for the determination of self-heating of platinum resistance thermometer (PRT) when the temperature instability of medium of interest would prevent an accurate self-heating determination using standard methods. In temperature measurements performed by PRT, self-heating is one of the most common sources of error and arises from the increase in sensor temperature caused by the dissipation of electrical heat when measurement current is applied to the temperature sensing element. This increase depends mainly on the applied current and the thermal resistances between thermometer sensing element and the environment surrounding the thermometer. The method is used for determination of self-heating of a 100 Ω industrial PRT which is intended for measurement of air temperature inside the saturation chamber of the primary dew/frost point generator at the Laboratory for Process Measurement (HMI/FSB-LPM). Self-heating is first determined for conditions present during the comparison calibration of the thermometer, using the calibration bath. The measurements were then repeated with thermometer being placed in an air stream inside the saturation chamber. The experiment covers the temperature range between -65°C and 10°C. Self-heating is determined for two different air velocities and two different vertical positions of PRT in relation to the chamber bottom.
Chow, Jacky C. K.; Ebeling, Axel; Teskey, William F.
2012-11-01
Terrestrial laser scanners are high-accuracy 3D imaging instruments that are capable of measuring deformations with sub-millimetre level accuracy in most close-range applications. Traditionally, deformation monitoring via laser scanning is performed by measuring distinct signalised targets. In this case, the centroid of these targets must be determined with great accuracy for optimum detectability. To achieve this, a least-squares target centroid extraction algorithm suitable for planar checkerboard-type targets is proposed for irregularly organised laser scanner data. These target centroids are then used in a free-station network adjustment for performing deformation analysis with no a priori assumptions about the deformation pattern. To ensure the optimum measurement accuracy, all systematic errors inherent to the instrument at the time of data acquisition needs to be removed. One of the methods for reducing these systematic errors is by performing self-calibration of terrestrial laser scanners. In this paper, this was performed on-site to model the systematic errors of the scanner. It is demonstrated that the accuracy of the recovered translational movements were improved by an order of magnitude from the millimetre level to the sub-millimetre level using this approach. Despite the success of using laser scanners with signalised targets in deformation analysis, the main benefit of active sensors like terrestrial laser scanning systems is their ability to capture 3D information of the entire scene without installing markers. A new markerless deformation analysis technique that utilises intersection points derived from planar-features is proposed and tested in this paper. The extraction and intersection of planes in each point cloud can be performed semi-automatically or automatically. This new method is based on free-stationing and does not require a priori knowledge about stable control points or movement patterns. It can detect and measure both translational
Sustainable urban built environment: Modern management concepts and evaluation methods
Ovsiannikova, Tatiana; Nikolaenko, Mariya
2017-01-01
The paper is focused on the analysis of modern concepts in urban development management. It is established that they are based on the principles of ecocentrism and anthropocentrism. The purpose of this research is to develop a system of quality indicators of urban built environment and justification of their application in management of city development. The need for observing the indicators characterizing the urban built environment in the planning of the territory development was proved. Based on the data and reports of the Russian and international organizations the analysis of the existing systems of urban development indicators is made. The suggested solution is to extend the existing indicators systems with that related to urban built environment quality which are recommended for planning urban areas development. The proposed system of indicators includes private, aggregate, normalized, and integrated urban built environment quality indicators using methods of economic-statistical and comparative analysis and index method. Application of these methods allowed calculating the indicators for urban areas of Tomsk Region. The results of calculations are presented in the paper. According to normalized indicators the priority areas for investment and development of urban areas were determined. The scenario conditions allowed estimating changes of quality indicators for urban built environment. Finally, the paper suggests recommendations for making management decisions when creating sustainable environment of life in urban areas.
Shock waves simulated using the dual domain material point method combined with molecular dynamics
Zhang, Duan Z.; Dhakal, Tilak R.
2017-04-01
In this work we combine the dual domain material point method with molecular dynamics in an attempt to create a multiscale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically nonequilibrium state, and conventional constitutive relations or equations of state are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from a molecular dynamics simulation of a group of atoms surrounding the material point. Rather than restricting the multiscale simulation in a small spatial region, such as phase interfaces, or crack tips, this multiscale method can be used to consider nonequilibrium thermodynamic effects in a macroscopic domain. This method takes the advantage that the material points only communicate with mesh nodes, not among themselves; therefore molecular dynamics simulations for material points can be performed independently in parallel. The dual domain material point method is chosen for this multiscale method because it can be used in history dependent problems with large deformation without generating numerical noise as material points move across cells, and also because of its convergence and conservation properties. To demonstrate the feasibility and accuracy of this method, we compare the results of a shock wave propagation in a cerium crystal calculated using the direct molecular dynamics simulation with the results from this combined multiscale calculation.
Facial plastic surgery area acquisition method based on point cloud mathematical model solution.
Li, Xuwu; Liu, Fei
2013-09-01
It is one of the hot research problems nowadays to find a quick and accurate method of acquiring the facial plastic surgery area to provide sufficient but irredundant autologous or in vitro skin source for covering extensive wound, trauma, and burnt area. At present, the acquisition of facial plastic surgery area mainly includes model laser scanning, point cloud data acquisition, pretreatment of point cloud data, three-dimensional model reconstruction, and computation of area. By using this method, the area can be computed accurately, but it is hard to control the random error, and it requires a comparatively longer computation period. In this article, a facial plastic surgery area acquisition method based on point cloud mathematical model solution is proposed. This method applies symmetric treatment to the point cloud based on the pretreatment of point cloud data, through which the comparison diagram color difference map of point cloud error before and after symmetry is obtained. The slicing mathematical model of facial plastic area is got through color difference map diagram. By solving the point cloud data in this area directly, the facial plastic area is acquired. The point cloud data are directly operated in this method, which can accurately and efficiently complete the surgery area computation. The result of the comparative analysis shows the method is effective in facial plastic surgery area.
Full-Newton step interior-point methods for conic optimization
Mansouri, H.
2008-01-01
In the theory of polynomial-time interior-point methods (IPMs) two important classes of methods are distinguished: small-update and large-update methods, respectively. Small-update IPMs have the best theoretical iteration bound and IPMs with full-Newton steps belong to this class of methods. Within
A hybrid solar panel maximum power point search method that uses light and temperature sensors
Ostrowski, Mariusz
2016-04-01
Solar cells have low efficiency and non-linear characteristics. To increase the output power solar cells are connected in more complex structures. Solar panels consist of series of connected solar cells with a few bypass diodes, to avoid negative effects of partial shading conditions. Solar panels are connected to special device named the maximum power point tracker. This device adapt output power from solar panels to load requirements and have also build in a special algorithm to track the maximum power point of solar panels. Bypass diodes may cause appearance of local maxima on power-voltage curve when the panel surface is illuminated irregularly. In this case traditional maximum power point tracking algorithms can find only a local maximum power point. In this article the hybrid maximum power point search algorithm is presented. The main goal of the proposed method is a combination of two algorithms: a method that use temperature sensors to track maximum power point in partial shading conditions and a method that use illumination sensor to track maximum power point in equal illumination conditions. In comparison to another methods, the proposed algorithm uses correlation functions to determinate the relationship between values of illumination and temperature sensors and the corresponding values of current and voltage in maximum power point. In partial shading condition the algorithm calculates local maximum power points bases on the value of temperature and the correlation function and after that measures the value of power on each of calculated point choose those with have biggest value, and on its base run the perturb and observe search algorithm. In case of equal illumination algorithm calculate the maximum power point bases on the illumination value and the correlation function and on its base run the perturb and observe algorithm. In addition, the proposed method uses a special coefficient modification of correlation functions algorithm. This sub
Entropy Based Test Point Evaluation and Selection Method for Analog Circuit Fault Diagnosis
Directory of Open Access Journals (Sweden)
Yuan Gao
2014-01-01
Full Text Available By simplifying tolerance problem and treating faulty voltages on different test points as independent variables, integer-coded table technique is proposed to simplify the test point selection process. Usually, simplifying tolerance problem may induce a wrong solution while the independence assumption will result in over conservative result. To address these problems, the tolerance problem is thoroughly considered in this paper, and dependency relationship between different test points is considered at the same time. A heuristic graph search method is proposed to facilitate the test point selection process. First, the information theoretic concept of entropy is used to evaluate the optimality of test point. The entropy is calculated by using the ambiguous sets and faulty voltage distribution, determined by component tolerance. Second, the selected optimal test point is used to expand current graph node by using dependence relationship between the test point and graph node. Simulated results indicate that the proposed method more accurately finds the optimal set of test points than other methods; therefore, it is a good solution to minimize the size of the test point set. To simplify and clarify the proposed method, only catastrophic and some specific parametric faults are discussed in this paper.
Multi-scale calculation based on dual domain material point method combined with molecular dynamics
Energy Technology Data Exchange (ETDEWEB)
Dhakal, Tilak Raj [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-02-27
This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from a MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation in a small spatial region, such as phase interfaces, or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic e ects in a macroscopic domain. This method takes advantage that the material points only communicate with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive especially in our multi-scale method where we calculate stress in each material point using MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate stress in each material point is performed in GPU using CUDA to accelerate the
An array extension method in a noisy environment
Li, Bo; Sun, Chao
2011-06-01
An array extension method in a noisy environment was proposed to improve angular resolution and array gain. The proposed method combines the FOC (fourth-order cumulants) technique with the ETAM (extended towed array measurements) method to extend array aperture and suppress Gaussian noise. First, successive measurements of a virtual uniform linear array were constructed by applying fourth-order cumulants to measurements of uniform linear array; Gaussian noise in these measurements was also eliminated. Then, the array was extended by compensating phase differences using the ETAM method. Finally, the synthetic aperture was extended further by the fourth-order cumulants technique. The proposed FOC-ETAM-FOC method not only improves angular resolution and array gain, but also effectively suppresses Gaussian noise. Furthermore, it inherits the advantages of the ETAM method. Simulation results showed that the FOC-ETAM-FOC method achieved better angular resolution and array gain than the ETAM method. Furthermore this method outperforms the ETAM method in Gaussian noise environment.
Methods and systems relating to an augmented virtuality environment
Nielsen, Curtis W; Anderson, Matthew O; McKay, Mark D; Wadsworth, Derek C; Boyce, Jodie R; Hruska, Ryan C; Koudelka, John A; Whetten, Jonathan; Bruemmer, David J
2014-05-20
Systems and methods relating to an augmented virtuality system are disclosed. A method of operating an augmented virtuality system may comprise displaying imagery of a real-world environment in an operating picture. The method may further include displaying a plurality of virtual icons in the operating picture representing at least some assets of a plurality of assets positioned in the real-world environment. Additionally, the method may include displaying at least one virtual item in the operating picture representing data sensed by one or more of the assets of the plurality of assets and remotely controlling at least one asset of the plurality of assets by interacting with a virtual icon associated with the at least one asset.
AN IMPROVEMENT ON GEOMETRY-BASED METHODS FOR GENERATION OF NETWORK PATHS FROM POINTS
Directory of Open Access Journals (Sweden)
Z. Akbari
2014-10-01
Full Text Available Determining network path is important for different purposes such as determination of road traffic, the average speed of vehicles, and other network analysis. One of the required input data is information about network path. Nevertheless, the data collected by the positioning systems often lead to the discrete points. Conversion of these points to the network path have become one of the challenges which different researchers, presents many ways for solving it. This study aims at investigating geometry-based methods to estimate the network paths from the obtained points and improve an existing point to curve method. To this end, some geometry-based methods have been studied and an improved method has been proposed by applying conditions on the best method after describing and illustrating weaknesses of them.
Quantitative, Qualitative and Geospatial Methods to Characterize HIV Risk Environments.
Directory of Open Access Journals (Sweden)
Erin E Conners
Full Text Available Increasingly, 'place', including physical and geographical characteristics as well as social meanings, is recognized as an important factor driving individual and community health risks. This is especially true among marginalized populations in low and middle income countries (LMIC, whose environments may also be more difficult to study using traditional methods. In the NIH-funded longitudinal study Mapa de Salud, we employed a novel approach to exploring the risk environment of female sex workers (FSWs in two Mexico/U.S. border cities, Tijuana and Ciudad Juárez. In this paper we describe the development, implementation, and feasibility of a mix of quantitative and qualitative tools used to capture the HIV risk environments of FSWs in an LMIC setting. The methods were: 1 Participatory mapping; 2 Quantitative interviews; 3 Sex work venue field observation; 4 Time-location-activity diaries; 5 In-depth interviews about daily activity spaces. We found that the mixed-methodology outlined was both feasible to implement and acceptable to participants. These methods can generate geospatial data to assess the role of the environment on drug and sexual risk behaviors among high risk populations. Additionally, the adaptation of existing methods for marginalized populations in resource constrained contexts provides new opportunities for informing public health interventions.
Remote object translation methods for immersive virtual environments
J.D. Mulder (Jurriaan)
1998-01-01
textabstractIn this paper, seven methods are described to perform remote object translations with a six degree-of-freedom input device in an immersive virtual environment. By manipulating objects remotely, a number of disadvantages of the real-world `direct grab and drag' metaphor can be avoided.
Synthesis of Numerical Methods for Modeling Wave Energy Converter-Point Absorbers: Preprint
Energy Technology Data Exchange (ETDEWEB)
Li, Y.; Yu, Y. H.
2012-05-01
During the past few decades, wave energy has received significant attention among all ocean energy formats. Industry has proposed hundreds of prototypes such as an oscillating water column, a point absorber, an overtopping system, and a bottom-hinged system. In particular, many researchers have focused on modeling the floating-point absorber as the technology to extract wave energy. Several modeling methods have been used such as the analytical method, the boundary-integral equation method, the Navier-Stokes equations method, and the empirical method. However, no standardized method has been decided. To assist the development of wave energy conversion technologies, this report reviews the methods for modeling the floating-point absorber.
Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method
Directory of Open Access Journals (Sweden)
Yueqian Shen
2016-12-01
Full Text Available A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system having the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is implemented on two scans sampling a masonry laboratory building before and after seismic testing, that resulted in damages in the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud to cloud change analysis demonstrating the potential of the new method for structural analysis.
Energy Technology Data Exchange (ETDEWEB)
Yoo, Hyun Suk; Lee, Jeong Min; Yoon, Jeong Hee; Lee, Dong Ho; Chang, Won; Han, Joon Koo [Seoul National University Hospital, Seoul (Korea, Republic of)
2016-09-15
To prospectively compare technical success rate and reliable measurements of virtual touch quantification (VTQ) elastography and elastography point quantification (ElastPQ), and to correlate liver stiffness (LS) measurements obtained by the two elastography techniques. Our study included 85 patients, 80 of whom were previously diagnosed with chronic liver disease. The technical success rate and reliable measurements of the two kinds of point shear wave elastography (pSWE) techniques were compared by χ{sup 2} analysis. LS values measured using the two techniques were compared and correlated via Wilcoxon signed-rank test, Spearman correlation coefficient, and 95% Bland-Altman limit of agreement. The intraobserver reproducibility of ElastPQ was determined by 95% Bland-Altman limit of agreement and intraclass correlation coefficient (ICC). The two pSWE techniques showed similar technical success rate (98.8% for VTQ vs. 95.3% for ElastPQ, p = 0.823) and reliable LS measurements (95.3% for VTQ vs. 90.6% for ElastPQ, p = 0.509). The mean LS measurements obtained by VTQ (1.71 ± 0.47 m/s) and ElastPQ (1.66 ± 0.41 m/s) were not significantly different (p = 0.209). The LS measurements obtained by the two techniques showed strong correlation (r = 0.820); in addition, the 95% limit of agreement of the two methods was 27.5% of the mean. Finally, the ICC of repeat ElastPQ measurements was 0.991. Virtual touch quantification and ElastPQ showed similar technical success rate and reliable measurements, with strongly correlated LS measurements. However, the two methods are not interchangeable due to the large limit of agreement.
Fine pointing of the Solar Optical Telescope in the Space Shuttle environment
Gowrinathan, S.
1985-01-01
Instruments requiring fine (i.e., sub-arcsecond) pointing, such as the Solar Optical Telescope (SOT), must be equipped with two-stage pointing devices, coarse and fine. Coarse pointing will be performed by a gimbal system, such as the Instrument Pointing System, while the image motion compensation (IMC) will provide fine pointing. This paper describes work performed on the SOT concept design that illustrates IMC as applied to SOT. The SOT control system was modeled in the frequency domain to evaluate performance, stability, and bandwidth requirements. The two requirements of the pointing control, i.e., the 2 arcsecond reproducibility and 0.03 arcsecond rms pointing jitter, can be satisfied by use of IMC at about 20 Hz bandwidth. The need for this high bandwidth is related to Shuttle-induced disturbances that arise primarily from man push-offs and vernier thruster firings. A block diagram of SOT model/stability analysis, schematic illustrations of the SOT pointing system, and a structural model summary are included.
Fujii, Atsunori; Ohsugi, Yudai; Yamamoto, Yuki; Nakamura, Takabun; Sugiura, Toshifumi; Tauchi, Masaki
2007-05-01
In order to find out the most suitable and accurate pointing methods to study the sound localizability of persons with visual impairment, we compared the accuracy of three different pointing methods for indicating the direction of sound sources in a semi-anechoic dark room. Six subjects with visual impairment (two totally blind and four with low vision) participated in this experiment. The three pointing methods employed were (1) directing the face, (2) directing the body trunk on a revolving chair and (3) indicating a tactile cue placed horizontally in front of the subject. Seven sound emitters were arranged in a semicircle 2.0 m from the subject, 0 degrees to +/-80 degrees of the subject's midline, at a height of 1.2 m. The accuracy of the pointing methods was evaluated by measuring the deviation between the angle of the target sound source and that of the subject's response. The result was that all methods indicated that as the angle of the sound source increased from midline, the accuracy decreased. The deviations recorded toward the left and the right of midline were symmetrical. In the whole frontal area (-80 degrees to +80 degrees from midline), both the tactile cue and the body trunk methods were more accurate than the face-pointing method. There was no significant difference in the center (-40 degrees to +40 degrees from midline). In the periphery (-80 degrees and +80 degrees ), the tactile cue pointing method was the most accurate of all and the body trunk method was the next best. These results suggest that the most suitable pointing methods to study the sound localizability of the frontal azimuth for subjects who are visually impaired are the tactile cue and the body trunk methods because of their higher accuracy in the periphery.
Zhang, Zhen; Chen, Siqing; Zheng, Huadong; Sun, Tao; Yu, Yingjie; Gao, Hongyue; Asundi, Anand K.
2017-06-01
Computer holography has made a notably progress in recent years. The point-based method and slice-based method are chief calculation algorithms for generating holograms in holographic display. Although both two methods are validated numerically and optically, the differences of the imaging quality of these methods have not been specifically analyzed. In this paper, we analyze the imaging quality of computer-generated phase holograms generated by point-based Fresnel zone plates (PB-FZP), point-based Fresnel diffraction algorithm (PB-FDA) and slice-based Fresnel diffraction algorithm (SB-FDA). The calculation formula and hologram generation with three methods are demonstrated. In order to suppress the speckle noise, sequential phase-only holograms are generated in our work. The results of reconstructed images numerically and experimentally are also exhibited. By comparing the imaging quality, the merits and drawbacks with three methods are analyzed. Conclusions are given by us finally.
A feature point identification method for positron emission particle tracking with multiple tracers
Energy Technology Data Exchange (ETDEWEB)
Wiggins, Cody, E-mail: cwiggin2@vols.utk.edu [University of Tennessee-Knoxville, Department of Physics and Astronomy, 1408 Circle Drive, Knoxville, TN 37996 (United States); Santos, Roque [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States); Escuela Politécnica Nacional, Departamento de Ciencias Nucleares (Ecuador); Ruggles, Arthur [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States)
2017-01-21
A novel detection algorithm for Positron Emission Particle Tracking (PEPT) with multiple tracers based on optical feature point identification (FPI) methods is presented. This new method, the FPI method, is compared to a previous multiple PEPT method via analyses of experimental and simulated data. The FPI method outperforms the older method in cases of large particle numbers and fine time resolution. Simulated data show the FPI method to be capable of identifying 100 particles at 0.5 mm average spatial error. Detection error is seen to vary with the inverse square root of the number of lines of response (LORs) used for detection and increases as particle separation decreases. - Highlights: • A new approach to positron emission particle tracking is presented. • Using optical feature point identification analogs, multiple particle tracking is achieved. • Method is compared to previous multiple particle method. • Accuracy and applicability of method is explored.
A Novel Line Space Voting Method for Vanishing-Point Detection of General Road Images
Directory of Open Access Journals (Sweden)
Zongsheng Wu
2016-06-01
Full Text Available Vanishing-point detection is an important component for the visual navigation system of an autonomous mobile robot. In this paper, we present a novel line space voting method for fast vanishing-point detection. First, the line segments are detected from the road image by the line segment detector (LSD method according to the pixel’s gradient and texture orientation computed by the Sobel operator. Then, the vanishing-point of the road is voted on by considering the points of the lines and their neighborhood spaces with weighting methods. Our algorithm is simple, fast, and easy to implement with high accuracy. It has been experimentally tested with over hundreds of structured and unstructured road images. The experimental results indicate that the proposed method is effective and can meet the real-time requirements of navigation for autonomous mobile robots and unmanned ground vehicles.
Directory of Open Access Journals (Sweden)
Hongwei Ying
2014-08-01
Full Text Available An extreme point of scale space extraction method for binary multiscale and rotation invariant local feature descriptor is studied in this paper in order to obtain a robust and fast method for local image feature descriptor. Classic local feature description algorithms often select neighborhood information of feature points which are extremes of image scale space, obtained by constructing the image pyramid using certain signal transform method. But build the image pyramid always consumes a large amount of computing and storage resources, is not conducive to the actual applications development. This paper presents a dual multiscale FAST algorithm, it does not need to build the image pyramid, but can extract feature points of scale extreme quickly. Feature points extracted by proposed method have the characteristic of multiscale and rotation Invariant and are fit to construct the local feature descriptor.
Selection of rendezvous points for multi-robot exploration in dynamic environments
de Hoog, J.; Cameron, S.; Visser, A.; Visser, U.; Asadi, S.; Laue, T.; Mayer, N.M.
2010-01-01
For many robotics applications (such as robotic search and rescue), information about the environment must be gathered by a team of robots and returned to a single, specific location. Coordination of robots and sharing of information is vital, and when environments have severe communication
Pei-Jing Rong; Jing-Jun Zhao; Lei Wang; Li-Qun Zhou
2016-01-01
The international standardization of auricular acupuncture points (AAPs) is an important basis for auricular therapy or auricular diagnosis and treatment. The study on the international standardization of AAPs has gone through a long process, in which the location method is one of the key research projects. There are different points of view in the field of AAPs among experts from different countries or regions. By only analyzing the nine representative location methods, this paper tried to o...
Directory of Open Access Journals (Sweden)
Pei-Jing Rong
2016-01-01
Full Text Available The international standardization of auricular acupuncture points (AAPs is an important basis for auricular therapy or auricular diagnosis and treatment. The study on the international standardization of AAPs has gone through a long process, in which the location method is one of the key research projects. There are different points of view in the field of AAPs among experts from different countries or regions. By only analyzing the nine representative location methods, this paper tried to offer a proper location method to locate AAPs. Through analysis of the pros and cons of each location method, the location method applied in the WFAS international standard of AAPs is thoroughly considered as an appropriate method. It is important to keep the right direction during developing an International Organization for Standardization (ISO international standard of auricular acupuncture points and to improve the research quality of international standardization for AAPs.
An effective method based on reference point for glucose sensing at 1100-1600nm
Zheng, Jiaxiang; Xu, Kexin; Yang, Yue
2011-03-01
Non-invasive blood glucose sensing by near-infrared spectroscopy is easily interrupted by the strong background variations compared to the weak glucose signals. In this work, according to the distribution of diffuse reflectance intensity at different source-detector separations, a method based on a reference point and a measuring point, where the diffuse reflectance intensity is insensitive and most sensitive to the variation of glucose concentration, respectively, is applied. And the data processing method based on the information of two points is investigated to improve the precision of glucose sensing. Based on the Monte Carlo simulation in 5% intralipid solution model, the corresponding optical probe is designed which includes two detecting points: a reference point located at 1.3-1.7mm and a measuring point located at 1.7-2.1mm. Using the probe, the in vitro experiment with different glucose concentrations in the intralipid solution is conducted at 1100-1600nm. As a result, compared to the PLS model built by the signal of the measuring point, the root mean square error of prediction (RMSEP) and root mean square error of cross calibration (RMSEC) of the corrected model built by reference point and measuring point reduces by 45.10%, and 32.15% respectively.
Speakman, John R; Levitsky, David A; Allison, David B; Bray, Molly S; de Castro, John M; Clegg, Deborah J; Clapham, John C; Dulloo, Abdul G; Gruer, Laurence; Haw, Sally; Hebebrand, Johannes; Hetherington, Marion M; Higgs, Susanne; Jebb, Susan A; Loos, Ruth J F; Luckman, Simon; Luke, Amy; Mohammed-Ali, Vidya; O'Rahilly, Stephen; Pereira, Mark; Perusse, Louis; Robinson, Tom N; Rolls, Barbara; Symonds, Michael E; Westerterp-Plantenga, Margriet S
2011-11-01
The close correspondence between energy intake and expenditure over prolonged time periods, coupled with an apparent protection of the level of body adiposity in the face of perturbations of energy balance, has led to the idea that body fatness is regulated via mechanisms that control intake and energy expenditure. Two models have dominated the discussion of how this regulation might take place. The set point model is rooted in physiology, genetics and molecular biology, and suggests that there is an active feedback mechanism linking adipose tissue (stored energy) to intake and expenditure via a set point, presumably encoded in the brain. This model is consistent with many of the biological aspects of energy balance, but struggles to explain the many significant environmental and social influences on obesity, food intake and physical activity. More importantly, the set point model does not effectively explain the 'obesity epidemic'--the large increase in body weight and adiposity of a large proportion of individuals in many countries since the 1980s. An alternative model, called the settling point model, is based on the idea that there is passive feedback between the size of the body stores and aspects of expenditure. This model accommodates many of the social and environmental characteristics of energy balance, but struggles to explain some of the biological and genetic aspects. The shortcomings of these two models reflect their failure to address the gene-by-environment interactions that dominate the regulation of body weight. We discuss two additional models--the general intake model and the dual intervention point model--that address this issue and might offer better ways to understand how body fatness is controlled.
Directory of Open Access Journals (Sweden)
John R. Speakman
2011-11-01
Full Text Available The close correspondence between energy intake and expenditure over prolonged time periods, coupled with an apparent protection of the level of body adiposity in the face of perturbations of energy balance, has led to the idea that body fatness is regulated via mechanisms that control intake and energy expenditure. Two models have dominated the discussion of how this regulation might take place. The set point model is rooted in physiology, genetics and molecular biology, and suggests that there is an active feedback mechanism linking adipose tissue (stored energy to intake and expenditure via a set point, presumably encoded in the brain. This model is consistent with many of the biological aspects of energy balance, but struggles to explain the many significant environmental and social influences on obesity, food intake and physical activity. More importantly, the set point model does not effectively explain the ‘obesity epidemic’ – the large increase in body weight and adiposity of a large proportion of individuals in many countries since the 1980s. An alternative model, called the settling point model, is based on the idea that there is passive feedback between the size of the body stores and aspects of expenditure. This model accommodates many of the social and environmental characteristics of energy balance, but struggles to explain some of the biological and genetic aspects. The shortcomings of these two models reflect their failure to address the gene-by-environment interactions that dominate the regulation of body weight. We discuss two additional models – the general intake model and the dual intervention point model – that address this issue and might offer better ways to understand how body fatness is controlled.
A method for improved accuracy in three dimensions for determining wheel/rail contact points
Yang, Xinwen; Gu, Shaojie; Zhou, Shunhua; Zhou, Yu; Lian, Songliang
2015-11-01
Searching for the contact points between wheels and rails is important because these points represent the points of exerted contact forces. In order to obtain an accurate contact point and an in-depth description of the wheel/rail contact behaviours on a curved track or in a turnout, a method with improved accuracy in three dimensions is proposed to determine the contact points and the contact patches between the wheel and the rail when considering the effect of the yaw angle and the roll angle on the motion of the wheel set. The proposed method, with no need of the curve fitting of the wheel and rail profiles, can accurately, directly, and comprehensively determine the contact interface distances between the wheel and the rail. The range iteration algorithm is used to improve the computation efficiency and reduce the calculation required. The present computation method is applied for the analysis of the contact of rails of CHINA (CHN) 75 kg/m and wheel sets of wearing type tread of China's freight cars. In addition, it can be proved that the results of the proposed method are consistent with that of Kalker's program CONTACT, and the maximum deviation from the wheel/rail contact patch area of this two methods is approximately 5%. The proposed method, can also be used to investigate static wheel/rail contact. Some wheel/rail contact points and contact patch distributions are discussed and assessed, wheel and rail non-worn and worn profiles included.
Non-Linear Aeroelastic Analysis Using the Point Transformation Method, Part 1: Freeplay Model
LIU, L.; WONG, Y. S.; LEE, B. H. K.
2002-05-01
A point transformation technique is developed to investigate the non-linear behavior of a two-dimensional aeroelastic system with freeplay models. Two formulations of the point transformation method are presented, which can be applied to accurately predict the frequency and amplitude of limit cycle oscillations. Moreover, it is demonstrated that the developed formulations are capable of detecting complex aeroelastic responses such as periodic motions with harmonics, period doubling, chaotic motions and the coexistence of stable limit cycles. Applications of the point transformation method to several test examples are presented. It is concluded that the formulations developed in this paper are efficient and effective.
A New Obstacle Avoidance Method for Service Robots in Indoor Environments
Budiharto, Widodo; Santoso, Ari; Purwanto, Djoko; Jazidie, Achmad
2012-01-01
The objective of this paper is to propose an obstacle avoidance method for service robots in indoor environments using vision and ultrasonic sensors. For this research, the service robot was programmed to deliver a drinking cup from a specified starting point to the recognized customer. We have developed three main modules: one for face recognition, one for obstacle detection, and one for avoidance maneuvering. The obstacle avoidance system is based on an edg edetection ...
Sensitivity of Coastal Environments and Wildlife to Spilled Oil: Mississippi: NESTS (Nest Points)
National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for gulls and terns in Mississippi. Vector points in this data set represent bird nesting sites. Species...
Sensitivity of Coastal Environments and Wildlife to Spilled Oil: New Hampshire: NESTS (Nest Points)
National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for nesting birds in New Hampshire. Vector points in this data set represent locations of nesting osprey...
National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains human-use resource data for staging sites along the Hudson River. Vector points in this data set represent locations of possible staging areas...
National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for alcids, diving birds, gulls, terns, pelagic birds, and shorebirds in Central California. Vector points...
National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for nesting birds in Northwest Arctic, Alaska. Vector points in this data set represent locations of...
National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains vector points representing locations in Central California that should be highlighted for protection due to the presence of certain highly...
National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for seabirds, diving birds, gulls, terns, and shorebirds in Northern California. Vector points in this data...
National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains human-use resource data for sensitive areas along the Hudson River. Vector points in this data set represent sensitive areas. This data set...
National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for threatened/endangered invertebrate species for the Florida Panhandle. Vector points in this data set...
Creating the data basis for environmental evaluations with the Oil Point Method
DEFF Research Database (Denmark)
Bey, Niki; Lenau, Torben Anker
1999-01-01
with rules-of-thumb. The central idea is that missing indicators can be calculated or estimated by the designers themselves.After discussing energy-related environmental evaluation and arguing for its application in evaluation of concepts, the paper focuses on the basic problem of missing data and describes...... the way in which the problem may be solved by making Oil Point evaluations. Sources of energy data are mentioned. Typical deficits to be aware of - such as the negligence of efficiency factors - are revealed and discussed. Comparative case studies which have shown encouraging results are mentioned as well.......A simple, indicator-based method for environmental evaluations, the Oil Point Method, has been developed. Oil Points are derived from energy data and refer to kilograms of oil, therefore the name. In the Oil Point Method, a certain degree of inaccuracy is explicitly accepted like it is the case...
The Closest Point Method and Multigrid Solvers for Elliptic Equations on Surfaces
Chen, Yujia
2015-01-01
© 2015 Society for Industrial and Applied Mathematics. Elliptic partial differential equations are important from both application and analysis points of view. In this paper we apply the closest point method to solve elliptic equations on general curved surfaces. Based on the closest point representation of the underlying surface, we formulate an embedding equation for the surface elliptic problem, then discretize it using standard finite differences and interpolation schemes on banded but uniform Cartesian grids. We prove the convergence of the difference scheme for the Poisson\\'s equation on a smooth closed curve. In order to solve the resulting large sparse linear systems, we propose a specific geometric multigrid method in the setting of the closest point method. Convergence studies in both the accuracy of the difference scheme and the speed of the multigrid algorithm show that our approaches are effective.
A steady-state target calculation method based on "point" model for integrating processes.
Pang, Qiang; Zou, Tao; Zhang, Yanyan; Cong, Qiumei
2015-05-01
Aiming to eliminate the influences of model uncertainty on the steady-state target calculation for integrating processes, this paper presented an optimization method based on "point" model and a method determining whether or not there is a feasible solution of steady-state target. The optimization method resolves the steady-state optimization problem of integrating processes under the framework of two-stage structure, which builds a simple "point" model for the steady-state prediction, and compensates the error between "point" model and real process in each sampling interval. Simulation results illustrate that the outputs of integrating variables can be restricted within the constraints, and the calculation errors between actual outputs and optimal set-points are small, which indicate that the steady-state prediction model can predict the future outputs of integrating variables accurately. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Quality of business environment in Slovakia from the point of view of small enterprises
Gregova Elena
2014-01-01
Countries with a transition economy more than developed ones are influenced by modern globalization processes which have impact first of all on small sector of economy. Small and medium-sized enterprises in transforming economies are considered as one of the major stabilization factors which promotes the solution of economic and social problems. The favorable environment for business is the main prerequisite for long-term competitiveness and growth of economy of any country. It is the environ...
Milk freezing point determination with infrared spectroscopy and thermistor cryoscopy method
Directory of Open Access Journals (Sweden)
Nataša Pintić Pukec
2009-09-01
Full Text Available Two analytical methods were used for determination of the freezing point on identical test raw milk samples. The aim of this research was to investigate possibility of usage infrared spectrometry method, with MilcoScan FT 6000 milk analyzer for determination of milk freezing point, comparing to results obtained by using a reference thermistor cryoscopy method with Cryoscope 4C3 analyzer. During period of four months, total of 320 milk samples were analyzed. Once a week milk samples were sampled at collection reservoirs from twenty milk producers. Milk freezing point was analyzed with each of investigated methods in three consecutive testing respectively repetition. The results of freezing point were recorded as higher by reference in comparison to infrared spectroscopy method. Mean difference from 1.31 to 5.28 m°C respectively 3.43 m°C was determined between results obtained with infrared spectroscopy and reference method. Mean repeatability results for both investigated methods showed slight difference, sr%=0.194 for the reference method and sr%=0.193 for the infrared spectrometry method. Statistically significant difference between the means of the obtained results with two different investigated methods (P>0.05; P>0,01 was not determined. The results indicate the conclusion that infrared spectroscopy method can be used for detecting adulteration of milk with water addition as screening method. Based upon the obtained results usage of infrared spectrometry method in determination of raw milk freezing point is recommended because it is faster and can be carried out with current analyzers used for determination of other milk quality parameters, for example analyzer MilkoScan FT 6000.
Nanobiotechnology in energy, environment and electronics methods and applications
Nicolini, Claudio
2015-01-01
Introduction: Present Challenges and Future Solutions via Nanotechnology for Electronics, Environment and Energy; Claudio NicoliniPart A: MethodsInfluence of Chromosome Translocation on Yeast Life Span: Implications for Long-Term Industrial Biofermentation; Jason Sims, Dmitri Nikitin, and Carlo V. BruschiPulsed Power Nanotechnologies for Disintegration and Breaking Up of Refractory Precious Metals Ores; Valentin A. Chanturiya and Igor Zh. BuninModeling of Software Sensors in Bioprocess; Luca Belmonte and Claudio NicoliniN
Method for material characterization in a non-anechoic environment
Pometcu, L.; Sharaiha, A.; Benzerga, R.; Tamas, R. D.; Pouliguen, P.
2016-04-01
This paper presents a characterization method for extracting the reflection coefficient of materials and the real part of their permittivity. The characterization is performed in a real environment, as opposed to the classical measurement methods that require an anechoic chamber. In order to reduce the effects of the multipath propagation, a free space bistatic measurement was performed at different distances material-antennas in far field. By using a Teflon sample and a commercial absorbing material sample, measurements have been performed in order to validate the characterization technique.
Directory of Open Access Journals (Sweden)
Sabaghnia Naser
2012-01-01
Full Text Available Lentil (Lens culinaris Medik. is an important source of protein and carbohydrate food for people of developing countries and is popular in some developed countries where they are perceived as a healthy component of the diet. Ten lentil genotypes were tested for grain yield in five different environmental conditions, over two consecutive years to classify thes genotypes for yield stability. Seed yield of lentil genotypes ranged from 989.3 to 1.367 kg ha-1 and the linear regression coefficient ranged from 0.75 to 1.18. The combined analysis of variance showed that the effect of environment (E and genotype by environment (GE interaction were highly significant while the main effect of genotype (G was significant at 0.05 probability level. Four different cluster procedures were used for grouping genotypes and environments. According to dendograms of regression methods for lentil genotypes there were two different genotypic groups based on G plus GE or GE sources. Also, the dendograms of ANOVA methods indicated 5 groups based on G and GE sources and 4 groups based on GE sources. According to dendograms of regression methods for environments there were 5 different groups based on G plus GE sources while the dendograms of ANOVA methods indicated 9 groups based on G and GE sources and 3 groups based on GE sources. The mentioned groups were determined via F-test as an empirical stopping criterion for clustering. The most responsive genotypes with high mean yield genotypes are G2 (1145.3 kg ha-1, G8 (1200.2 kg ha-1 and G9 (1267.9 kg ha-1 and could be recommended as the most favorable genotypes for farmers.
Slope failure with the material point method : An investigation of post-peak material behaviour
Vardon, P.J.; Wang, B.; Hicks, M.A.
2017-01-01
The material point method (MPM) has the potential to simulate the onset, the full evolution and the final condition of a slope failure. It is a variant of the finite element method (FEM), where the material is able to move through the mesh, thereby solving one of the major problems in FEM of mesh
Lee, Jennifer
2012-01-01
The intent of this study was to examine the relationship between media multitasking orientation and grade point average. The study utilized a mixed-methods approach to investigate the research questions. In the quantitative section of the study, the primary method of statistical analyses was multiple regression. The independent variables for the…
Compound material point method (CMPM) to improve stress recovery for quasi-static problems
Gonzalez Acosta, J.L.; Vardon, P.J.; Hicks, M.A.
2017-01-01
Stress oscillations and inaccuracies are commonly reported in the material point method (MPM). This paper investigates the causes and presents a method to reduce them. The oscillations are shown to result from, at least in part, two distinctly different causes, both originating from the shape
On the practical use of the Material Point Method for offshore geotechnical applications
Brinkgreve, R.B.J.; Burg, M; Liim, L.J.; Andreykiv, A
2017-01-01
The Material Point Method (MPM) has been developed as a special finite element-based method for large deformation analysis, material flow and contact problems. When it comes to applications in soil, MPM can provide solutions where conventional FEM faces its limitations. Examples of geotechnical
A Control Method for Maximum Power Point Tracking in Stand-Alone-Type PV Generation Systems
Itako, Kazutaka; Mori, Takeaki
In this paper, a new control method for maximum power point tracking (MPPT) in stand-alone-type PV generaton systems is proposed. In this control method, the operations detecting the maximum power point and tracking its point are alternately carried out by using a step-up DC—DC converter. This method requires neither the measurement of temperature and insolation level nor PV array model. In a stand-alone-type application with a battery load, the design method for the boost inductance L of the step-up DC—DC converter is described, and the experimental results show that the use of the proposed MPPT control increases the PV generated energy by 14.8% compared to the conventional system.
Subspace methods for pattern recognition in intelligent environment
Jain, Lakhmi
2014-01-01
This research book provides a comprehensive overview of the state-of-the-art subspace learning methods for pattern recognition in intelligent environment. With the fast development of internet and computer technologies, the amount of available data is rapidly increasing in our daily life. How to extract core information or useful features is an important issue. Subspace methods are widely used for dimension reduction and feature extraction in pattern recognition. They transform a high-dimensional data to a lower-dimensional space (subspace), where most information is retained. The book covers a broad spectrum of subspace methods including linear, nonlinear and multilinear subspace learning methods and applications. The applications include face alignment, face recognition, medical image analysis, remote sensing image classification, traffic sign recognition, image clustering, super resolution, edge detection, multi-view facial image synthesis.
Evaluation of methods for rapid determination of freezing point of aviation fuels
Mathiprakasam, B.
1982-01-01
Methods for identification of the more promising concepts for the development of a portable instrument to rapidly determine the freezing point of aviation fuels are described. The evaluation process consisted of: (1) collection of information on techniques previously used for the determination of the freezing point, (2) screening and selection of these techniques for further evaluation of their suitability in a portable unit for rapid measurement, and (3) an extensive experimental evaluation of the selected techniques and a final selection of the most promising technique. Test apparatuses employing differential thermal analysis and the change in optical transparency during phase change were evaluated and tested. A technique similar to differential thermal analysis using no reference fuel was investigated. In this method, the freezing point was obtained by digitizing the data and locating the point of inflection. Results obtained using this technique compare well with those obtained elsewhere using different techniques. A conceptual design of a portable instrument incorporating this technique is presented.
DEFF Research Database (Denmark)
Choi, Ui-Min; Blaabjerg, Frede; Lee, Kyo-Beum
2015-01-01
time of small- and medium-voltage vectors. However, if the power factor is lower, there is a limitation to eliminate neutral-point oscillations. In this case, the proposed method can be improved by changing the switching sequence properly. Additionally, a method for neutral-point voltage balancing...
a Data Driven Method for Flat Roof Building Reconstruction from LiDAR Point Clouds
Mahphood, A.; Arefi, H.
2017-09-01
3D building modeling is one of the most important applications in photogrammetry and remote sensing. Airborne LiDAR (Light Detection And Ranging) is one of the primary information sources for building modeling. In this paper, a new data-driven method is proposed for 3D building modeling of flat roofs. First, roof segmentation is implemented using region growing method. The distance between roof points and the height difference of the points are utilized in this step. Next, the building edge points are detected using a new method that employs grid data, and then roof lines are regularized using the straight line approximation. The centroid point and direction for each line are estimated in this step. Finally, 3D model is reconstructed by integrating the roof and wall models. In the end, a qualitative and quantitative assessment of the proposed method is implemented. The results show that the proposed method could successfully model the flat roof buildings using LiDAR point cloud automatically.
A DATA DRIVEN METHOD FOR FLAT ROOF BUILDING RECONSTRUCTION FROM LiDAR POINT CLOUDS
Directory of Open Access Journals (Sweden)
A. Mahphood
2017-09-01
Full Text Available 3D building modeling is one of the most important applications in photogrammetry and remote sensing. Airborne LiDAR (Light Detection And Ranging is one of the primary information sources for building modeling. In this paper, a new data-driven method is proposed for 3D building modeling of flat roofs. First, roof segmentation is implemented using region growing method. The distance between roof points and the height difference of the points are utilized in this step. Next, the building edge points are detected using a new method that employs grid data, and then roof lines are regularized using the straight line approximation. The centroid point and direction for each line are estimated in this step. Finally, 3D model is reconstructed by integrating the roof and wall models. In the end, a qualitative and quantitative assessment of the proposed method is implemented. The results show that the proposed method could successfully model the flat roof buildings using LiDAR point cloud automatically.
A method for automatic feature points extraction of human vertebrae three-dimensional model
Wu, Zhen; Wu, Junsheng
2017-05-01
A method for automatic extraction of the feature points of the human vertebrae three-dimensional model is presented. Firstly, the statistical model of vertebrae feature points is established based on the results of manual vertebrae feature points extraction. Then anatomical axial analysis of the vertebrae model is performed according to the physiological and morphological characteristics of the vertebrae. Using the axial information obtained from the analysis, a projection relationship between the statistical model and the vertebrae model to be extracted is established. According to the projection relationship, the statistical model is matched with the vertebrae model to get the estimated position of the feature point. Finally, by analyzing the curvature in the spherical neighborhood with the estimated position of feature points, the final position of the feature points is obtained. According to the benchmark result on multiple test models, the mean relative errors of feature point positions are less than 5.98%. At more than half of the positions, the error rate is less than 3% and the minimum mean relative error is 0.19%, which verifies the effectiveness of the method.
Study of characteristic point identification and preprocessing method for pulse wave signals.
Sun, Wei; Tang, Ning; Jiang, Guiping
2015-02-01
Characteristics in pulse wave signals (PWSs) include the information of physiology and pathology of human cardiovascular system. Therefore, identification of characteristic points in PWSs plays a significant role in analyzing human cardiovascular system. Particularly, the characteristic points show personal dependent features and are easy to be affected. Acquiring a signal with high signal-to-noise ratio (SNR) and integrity is fundamentally important to precisely identify the characteristic points. Based on the mathematical morphology theory, we design a combined filter, which can effectively suppress the baseline drift and remove the high-frequency noise simultaneously, to preprocess the PWSs. Furthermore, the characteristic points of the preprocessed signal are extracted according to its position relations with the zero-crossing points of wavelet coefficients of the signal. In addition, the differential method is adopted to calibrate the position offset of characteristic points caused by the wavelet transform. We investigated four typical PWSs reconstructed by three Gaussian functions with tunable parameters. The numerical results suggested that the proposed method could identify the characteristic points of PWSs accurately.
Apparatus and method for implementing power saving techniques when processing floating point values
Kim, Young Moon; Park, Sang Phill
2017-10-03
An apparatus and method are described for reducing power when reading and writing graphics data. For example, one embodiment of an apparatus comprises: a graphics processor unit (GPU) to process graphics data including floating point data; a set of registers, at least one of the registers of the set partitioned to store the floating point data; and encode/decode logic to reduce a number of binary 1 values being read from the at least one register by causing a specified set of bit positions within the floating point data to be read out as 0s rather than 1s.
Archaelogy of Arid Environment Points to Management Options for Yucca Mountain
Energy Technology Data Exchange (ETDEWEB)
N. Chapman; A. Dansie; C. McCombie
2006-08-29
As with all planned repositories for spent fuel, the critical period over which Yucca Mountain needs to provide isolation is the first hundreds to thousands of years after the fuel is emplaced, when it is at its most hazardous. Both the original and the proposed new EPA standards highlight the central importance of this performance period by focusing on repository behavior during the first 10,000 years. Archaeology has a lot to tell us about the behavior of materials and structures over this time period. There have been numerous studies of archaeological artifacts in conditions relevant to the groundwater saturated environments that are a feature of most international geological disposal concepts, but relatively few in arid environments like that of the Nevada desert. However, there is much information to be gleaned, not only from classic archaeological areas in the Middle East and around the Mediterranean but also, perhaps surprisingly to some, from Nevada itself. Our recent study evaluated archaeological materials from underground openings and shallow burial in arid environments relevant to Yucca Mountain, drawing conclusions about how their state and their environment of preservation could help to assess design and operational options for the high-level waste repository.
Directory of Open Access Journals (Sweden)
Takashi Fuse
2017-12-01
Full Text Available Three-dimensional (3D road maps have garnered significant attention recently because of applications such as autonomous driving. For 3D road maps to remain accurate and up-to-date, an appropriate updating method is crucial. However, there are currently no updating methods with both satisfactorily high frequency and accuracy. An effective strategy would be to frequently acquire point clouds from regular vehicles, and then take detailed measurements only where necessary. However, there are three challenges when using data from regular vehicles. First, the accuracy and density of the points are comparatively low. Second, the measurement ranges vary for different measurements. Third, tentative changes such as pedestrians must be discriminated from real changes. The method proposed in this paper consists of registration and change detection methods. We first prepare the synthetic data obtained from regular vehicles using mobile mapping system data as a base reference. We then apply our proposed change detection method, in which the occupancy grid method is integrated with Dempster–Shafer theory to deal with occlusions and tentative changes. The results show that the proposed method can detect road environment changes, and it is easy to find changed parts through visualization. The work contributes towards sustainable updates and applications of 3D road maps.
Comparison of Optimization and Two-point Methods in Estimation of Soil Water Retention Curve
Ghanbarian-Alavijeh, B.; Liaghat, A. M.; Huang, G.
2009-04-01
Soil water retention curve (SWRC) is one of the soil hydraulic properties in which its direct measurement is time consuming and expensive. Since, its measurement is unavoidable in study of environmental sciences i.e. investigation of unsaturated hydraulic conductivity and solute transport, in this study the attempt is to predict soil water retention curve from two measured points. By using Cresswell and Paydar (1996) method (two-point method) and an optimization method developed in this study on the basis of two points of SWRC, parameters of Tyler and Wheatcraft (1990) model (fractal dimension and air entry value) were estimated and then water content at different matric potentials were estimated and compared with their measured values (n=180). For each method, we used both 3 and 1500 kPa (case 1) and 33 and 1500 kPa (case 2) as two points of SWRC. The calculated RMSE values showed that in the Creswell and Paydar (1996) method, there exists no significant difference between case 1 and case 2. However, the calculated RMSE value in case 2 (2.35) was slightly less than case 1 (2.37). The results also showed that the developed optimization method in this study had significantly less RMSE values for cases 1 (1.63) and 2 (1.33) rather than Cresswell and Paydar (1996) method.
A Novel Gaze Tracking Method Based on the Generation of Virtual Calibration Points
Directory of Open Access Journals (Sweden)
Hwan Heo
2013-08-01
Full Text Available Most conventional gaze-tracking systems require that users look at many points during the initial calibration stage, which is inconvenient for them. To avoid this requirement, we propose a new gaze-tracking method with four important characteristics. First, our gaze-tracking system uses a large screen located at a distance from the user, who wears a lightweight device. Second, our system requires that users look at only four calibration points during the initial calibration stage, during which four pupil centers are noted. Third, five additional points (virtual pupil centers are generated with a multilayer perceptron using the four actual points (detected pupil centers as inputs. Fourth, when a user gazes at a large screen, the shape defined by the positions of the four pupil centers is a distorted quadrangle because of the nonlinear movement of the human eyeball. The gaze-detection accuracy is reduced if we map the pupil movement area onto the screen area using a single transform function. We overcame this problem by calculating the gaze position based on multi-geometric transforms using the five virtual points and the four actual points. Experiment results show that the accuracy of the proposed method is better than that of other methods.
Methods for Process Evaluation of Work Environment Interventions
DEFF Research Database (Denmark)
Fredslund, Hanne; Strandgaard Pedersen, Jesper
2004-01-01
In recent years, intervention studies have become increasingly popular within occupational health psychology. The vast majority of such studies have focused on interventions themselves and their effects on the working environment and employee health and well-being. Few studies have focused on how......). This paper describes how organisation theory can be used to develop a method for identifying and analysing processes in relation to the implementation of work environment interventions. The reason for using organisation theory is twofold: 1) interventions are never implemented in a vacuum but in a specific...... organisational context (workplace) with certain characteristics, that the organisation theory can capture, 2) within the organisational sociological field there is a long tradition for studying organisational changes such as workplace interventions. In this paper process is defined as `individual, collective...
Learning in Non-Stationary Environments Methods and Applications
Lughofer, Edwin
2012-01-01
Recent decades have seen rapid advances in automatization processes, supported by modern machines and computers. The result is significant increases in system complexity and state changes, information sources, the need for faster data handling and the integration of environmental influences. Intelligent systems, equipped with a taxonomy of data-driven system identification and machine learning algorithms, can handle these problems partially. Conventional learning algorithms in a batch off-line setting fail whenever dynamic changes of the process appear due to non-stationary environments and external influences. Learning in Non-Stationary Environments: Methods and Applications offers a wide-ranging, comprehensive review of recent developments and important methodologies in the field. The coverage focuses on dynamic learning in unsupervised problems, dynamic learning in supervised classification and dynamic learning in supervised regression problems. A later section is dedicated to applications in which dyna...
Alternative Methods for Estimating Plane Parameters Based on a Point Cloud
Stryczek, Roman
2017-12-01
Non-contact measurement techniques carried out using triangulation optical sensors are increasingly popular in measurements with the use of industrial robots directly on production lines. The result of such measurements is often a cloud of measurement points that is characterized by considerable measuring noise, presence of a number of points that differ from the reference model, and excessive errors that must be eliminated from the analysis. To obtain vector information points contained in the cloud that describe reference models, the data obtained during a measurement should be subjected to appropriate processing operations. The present paperwork presents an analysis of suitability of methods known as RANdom Sample Consensus (RANSAC), Monte Carlo Method (MCM), and Particle Swarm Optimization (PSO) for the extraction of the reference model. The effectiveness of the tested methods is illustrated by examples of measurement of the height of an object and the angle of a plane, which were made on the basis of experiments carried out at workshop conditions.
Directory of Open Access Journals (Sweden)
Goutaudier C.
2013-07-01
Full Text Available In many cases of miscibility gap in ternary systems, one critical point at least, stable or metastable, can be observed under isobaric and isothermal conditions. The experimental determination of this invariant point is difficult but its knowledge is essential. The authors propose a method for calculating the composition of the invariant solution starting from the composition of the liquid phases in equilibrium. The computing method is based on the barycentric properties of the conjugate solutions (binodal points and an extension of the straight diameter method. A systematic study was carried out on a large number of ternary systems involving diverse constituents (230 sets ternary systems at various temperatures. Thus the results are presented and analyzed by means of consistency tests.
Fang, W.; Quan, S. H.; Xie, C. J.; Tang, X. F.; Wang, L. L.; Huang, L.
2016-03-01
In this study, a direct-current/direct-current (DC/DC) converter with maximum power point tracking (MPPT) is developed to down-convert the high voltage DC output from a thermoelectric generator to the lower voltage required to charge batteries. To improve the tracking accuracy and speed of the converter, a novel MPPT control scheme characterized by an aggregated dichotomy and gradient (ADG) method is proposed. In the first stage, the dichotomy algorithm is used as a fast search method to find the approximate region of the maximum power point. The gradient method is then applied for rapid and accurate tracking of the maximum power point. To validate the proposed MPPT method, a test bench composed of an automobile exhaust thermoelectric generator was constructed for harvesting the automotive exhaust heat energy. Steady-state and transient tracking experiments under five different load conditions were carried out using a DC/DC converter with the proposed ADG and with three traditional methods. The experimental results show that the ADG method can track the maximum power within 140 ms with a 1.1% error rate when the engine operates at 3300 rpm@71 NM, which is superior to the performance of the single dichotomy method, the single gradient method and the perturbation and observation method from the viewpoint of improved tracking accuracy and speed.
APPLICATION OF POINT-CENTERED QUARTER METHOD FOR MEASUREMENT THE BEACH CRAB (OCYPODE SPP DENSITY
Directory of Open Access Journals (Sweden)
Hanifa Marisa
2015-10-01
Full Text Available Point Quarter Method is a plant community structure measurement procedure. The technique is base on measurement of distance of four plants or trees in every quarter that is made by four space in the cross line sampling field studies. In forest sampling, point centered quarter methods is considered as the efficient, reliable and accurate data, not only for mean distance and density, but for frequency and dominance of species. So it is important to try wether these method ould be applicated to animal, especially crab. These method was applicated for crab population in Padang Beach at December 22nd, 2014. Ten points quartered were made and the distance of every Ocypode sp crab burrow was counted by ruler. Mean distance of crabs burrow gained by divided of total number of quarter (20 with mean distance of every burrow to the point. Density per hectare is 10,000 m divided by quadratic of mean distance. Mean distance of burrow to points were counted and prediction of population per hectare could be found. In these case, mean distance was: 0.41 m and crab population is :59,488.34 individu per hectare. Compared to other species , eg Scylla serrata, its population is bigger, eventhough the condition of beach is polluted and wasted
Methods of Evaluation of the State and Efficiency of the Urban Environment
Patrakeyev, I.; Ziborov, V.; Lazorenko-Hevel, N.
2017-12-01
Today, humanity is experiencing an "urban age", and therefore issues of good management of energy consumption and energy spent on utilization of waste in cities are becoming particularly acute. In this regard, the working group of the World Energy Council proposed a concept of the "energy balance" of the urban environment. This concept was that the energy produced should cover the energy consumed. Metabolism of the urban environment is so hot and so rarely studied by urban planners. This condition is linked first with the fact that metabolism is nothing more than a network of exchange of physical, energy resources and information. This is the real point of meeting the natural, technological, social, economic processes and their transformation into one another. Metabolism is the most important tool for knowing the real mechanics of the movement of resources in such a complex system as the urban environment. The content of the article is an analysis of significant energy and material flows characterizing the metabolism of the urban environment. We considered in the article a new energy paradigm. This paradigm will help in carrying out research in such areas as reducing the burden on the state of the environment, reducing environmental problems and reducing dependence on fossil fuels. Methods and models of metabolic processes in the urban environment will allow to implement in practice the concept of sustainable development of the urban environment, which is the development of the teaching V. Vernadsky about the noosphere.
METHODS OF EVALUATION OF THE STATE AND EFFICIENCY OF THE URBAN ENVIRONMENT
Directory of Open Access Journals (Sweden)
I. Patrakeyev
2017-12-01
Full Text Available Today, humanity is experiencing an "urban age", and therefore issues of good management of energy consumption and energy spent on utilization of waste in cities are becoming particularly acute. In this regard, the working group of the World Energy Council proposed a concept of the "energy balance" of the urban environment. This concept was that the energy produced should cover the energy consumed. Metabolism of the urban environment is so hot and so rarely studied by urban planners. This condition is linked first with the fact that metabolism is nothing more than a network of exchange of physical, energy resources and information. This is the real point of meeting the natural, technological, social, economic processes and their transformation into one another. Metabolism is the most important tool for knowing the real mechanics of the movement of resources in such a complex system as the urban environment. The content of the article is an analysis of significant energy and material flows characterizing the metabolism of the urban environment. We considered in the article a new energy paradigm. This paradigm will help in carrying out research in such areas as reducing the burden on the state of the environment, reducing environmental problems and reducing dependence on fossil fuels. Methods and models of metabolic processes in the urban environment will allow to implement in practice the concept of sustainable development of the urban environment, which is the development of the teaching V. Vernadsky about the noosphere.
Energy Technology Data Exchange (ETDEWEB)
Liu, Youshan, E-mail: ysliu@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); Teng, Jiwen, E-mail: jwteng@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); Xu, Tao, E-mail: xutao@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); CAS Center for Excellence in Tibetan Plateau Earth Sciences, Beijing, 100101 (China); Badal, José, E-mail: badal@unizar.es [Physics of the Earth, Sciences B, University of Zaragoza, Pedro Cerbuna 12, 50009 Zaragoza (Spain)
2017-05-01
The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant–Friedrichs–Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational
A Study of Impact Point Detecting Method Based on Seismic Signal
Huo, Pengju; Zhang, Yu; Xu, Lina; Huang, Yong
The projectile landing position has to be determined for its recovery and range in the targeting test. In this paper, a global search method based on the velocity variance is proposed. In order to verify the applicability of this method, simulation analysis within the scope of four million square meters has been conducted in the same array structure of the commonly used linear positioning method, and MATLAB was used to compare and analyze the two methods. The compared simulation results show that the global search method based on the speed of variance has high positioning accuracy and stability, which can meet the needs of impact point location.
Multiscale Modeling using Molecular Dynamics and Dual Domain Material Point Method
Energy Technology Data Exchange (ETDEWEB)
Dhakal, Tilak Raj [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Theoretical Division. Fluid Dynamics and Solid Mechanics Group, T-3; Rice Univ., Houston, TX (United States)
2016-07-07
For problems involving large material deformation rate, the material deformation time scale can be shorter than the material takes to reach a thermodynamical equilibrium. For such problems, it is difficult to obtain a constitutive relation. History dependency become important because of thermodynamic non-equilibrium. Our goal is to build a multi-scale numerical method which can bypass the need for a constitutive relation. In conclusion, multi-scale simulation method is developed based on the dual domain material point (DDMP). Molecular dynamics (MD) simulation is performed to calculate stress. Since the communication among material points is not necessary, the computation can be done embarrassingly parallel in CPU-GPU platform.
DEFF Research Database (Denmark)
Skajaa, Anders; Andersen, Erling D.; Ye, Yinyu
2013-01-01
We present two strategies for warmstarting primal-dual interior point methods for the homogeneous self-dual model when applied to mixed linear and quadratic conic optimization problems. Common to both strategies is their use of only the final (optimal) iterate of the initial problem and their neg......We present two strategies for warmstarting primal-dual interior point methods for the homogeneous self-dual model when applied to mixed linear and quadratic conic optimization problems. Common to both strategies is their use of only the final (optimal) iterate of the initial problem...
A primal-dual interior point method for large-scale free material optimization
DEFF Research Database (Denmark)
Weldeyesus, Alemseged Gebrehiwot; Stolpe, Mathias
2015-01-01
Free Material Optimization (FMO) is a branch of structural optimization in which the design variable is the elastic material tensor that is allowed to vary over the design domain. The requirements are that the material tensor is symmetric positive semidefinite with bounded trace. The resulting...... optimization problem is a nonlinear semidefinite program with many small matrix inequalities for which a special-purpose optimization method should be developed. The objective of this article is to propose an efficient primal-dual interior point method for FMO that can robustly and accurately solve large...... of iterations the interior point method requires is modest and increases only marginally with problem size. The computed optimal solutions obtain a higher precision than other available special-purpose methods for FMO. The efficiency and robustness of the method is demonstrated by numerical experiments on a set...
Use of Finite Point Method for Wave Propagation in Nonhomogeneous Unbounded Domains
Directory of Open Access Journals (Sweden)
S. Moazam
2015-01-01
Full Text Available Wave propagation in an unbounded domain surrounding the stimulation resource is one of the important issues for engineers. Past literature is mainly concentrated on the modelling and estimation of the wave propagation in partially layered, homogeneous, and unbounded domains with harmonic properties. In this study, a new approach based on the Finite Point Method (FPM has been introduced to analyze and solve the problems of wave propagation in any nonhomogeneous unbounded domain. The proposed method has the ability to use the domain properties by coordinate as an input. Therefore, there is no restriction in the form of the domain properties, such as being periodical as in the case of existing similar numerical methods. The proposed method can model the boundary points between phases with trace of errors and the results of this method satisfy both conditions of decay and radiation.
The Three-Point Sinuosity Method for Calculating the Fractal Dimension of Machined Surface Profile
Zhou, Yuankai; Li, Yan; Zhu, Hua; Zuo, Xue; Yang, Jianhua
2015-04-01
The three-point sinuosity (TPS) method is proposed to calculate the fractal dimension of surface profile accurately. In this method, a new measure, TPS is defined to present the structural complexity of fractal curves, and has been proved to follow the power law. Thus, the fractal dimension can be calculated through the slope of the fitted line in the log-log plot. The Weierstrass-Mandelbrot (W-M) fractal curves, as well as the real surface profiles obtained by grinding, sand blasting and turning, are used to validate the effectiveness of the proposed method. The calculation values are compared to those obtained from root-mean-square (RMS) method, box-counting (BC) method and variation method. The results show that the TPS method has the widest scaling region, the least fit error and the highest accuracy among the methods examined, which demonstrates that the fractal characteristics of the fractal curves can be well revealed by the proposed method.
The Curvature-Augmented Closest Point method with vesicle inextensibility application
Vogl, Christopher J.
2017-09-01
The Closest Point method, initially developed by Ruuth and Merriman, allows for the numerical solution of surface partial differential equations without the need for a parameterization of the surface itself. Surface quantities are embedded into the surrounding domain by assigning each value at a given spatial location to the corresponding value at the closest point on the surface. This embedding allows for surface derivatives to be replaced by their Cartesian counterparts (e.g. ∇s = ∇). This equivalence is only valid on the surface, and thus, interpolation is used to enforce what is known as the side condition away from the surface. To improve upon the method, this work derives an operator embedding that incorporates curvature information, making it valid in a neighborhood of the surface. With this, direct enforcement of the side condition is no longer needed. Comparisons in R2 and R3 show that the resulting Curvature-Augmented Closest Point method has better accuracy and requires less memory, through increased matrix sparsity, than the Closest Point method, while maintaining similar matrix condition numbers. To demonstrate the utility of the method in a physical application, simulations of inextensible, bi-lipid vesicles evolving toward equilibrium shapes are also included.
The Curvature-Augmented Closest Point method with vesicle inextensibility application
Energy Technology Data Exchange (ETDEWEB)
Vogl, Christopher J.
2017-09-01
The Closest Point method, initially developed by Ruuth and Merriman, allows for the numerical solution of surface partial differential equations without the need for a parameterization of the surface itself. Surface quantities are embedded into the surrounding domain by assigning each value at a given spatial location to the corresponding value at the closest point on the surface. This embedding allows for surface derivatives to be replaced by their Cartesian counterparts (e.g. ∇s=∇). This equivalence is only valid on the surface, and thus, interpolation is used to enforce what is known as the side condition away from the surface. To improve upon the method, this work derives an operator embedding that incorporates curvature information, making it valid in a neighborhood of the surface. With this, direct enforcement of the side condition is no longer needed. Comparisons in R2 and R3 show that the resulting Curvature-Augmented Closest Point method has better accuracy and requires less memory, through increased matrix sparsity, than the Closest Point method, while maintaining similar matrix condition numbers. To demonstrate the utility of the method in a physical application, simulations of inextensible, bi-lipid vesicles evolving toward equilibrium shapes are also included.
Nosikov, I. A.; Klimenko, M. V.; Bessarab, P. F.; Zhbankov, G. A.
2017-07-01
Point-to-point ray tracing is an important problem in many fields of science. While direct variational methods where some trajectory is transformed to an optimal one are routinely used in calculations of pathways of seismic waves, chemical reactions, diffusion processes, etc., this approach is not widely known in ionospheric point-to-point ray tracing. We apply the Nudged Elastic Band (NEB) method to a radio wave propagation problem. In the NEB method, a chain of points which gives a discrete representation of the radio wave ray is adjusted iteratively to an optimal configuration satisfying the Fermat's principle, while the endpoints of the trajectory are kept fixed according to the boundary conditions. Transverse displacements define the radio ray trajectory, while springs between the points control their distribution along the ray. The method is applied to a study of point-to-point ionospheric ray tracing, where the propagation medium is obtained with the International Reference Ionosphere model taking into account traveling ionospheric disturbances. A 2-dimensional representation of the optical path functional is developed and used to gain insight into the fundamental difference between high and low rays. We conclude that high and low rays are minima and saddle points of the optical path functional, respectively.
On the maximum likelihood method for estimating molecular trees: uniqueness of the likelihood point.
Fukami, K; Tateno, Y
1989-05-01
Studies are carried out on the uniqueness of the stationary point on the likelihood function for estimating molecular phylogenetic trees, yielding proof that there exists at most one stationary point, i.e., the maximum point, in the parameter range for the one parameter model of nucleotide substitution. The proof is simple yet applicable to any type of tree topology with an arbitrary number of operational taxonomic units (OTUs). The proof ensures that any valid approximation algorithm be able to reach the unique maximum point under the conditions mentioned above. An algorithm developed incorporating Newton's approximation method is then compared with the conventional one by means of computer simulation. The results show that the newly developed algorithm always requires less CPU time than the conventional one, whereas both algorithms lead to identical molecular phylogenetic trees in accordance with the proof.
Deng, Kai-Wen; He, Fu-Yuan
2013-05-01
To analyze the status of reaching meridian research for the Chinese Matria Medica and to raise point-medicine method. To review and analyze the studied situation of the corresponding relationships between "materials", as the constituents in the Chinese materia medica (CMM) as reaching meridian material foundation, and "image", as the function states of the zang-fu viscera, to investigate the problems and the measures to solve it. There are imprinting relationships among "materials", as the constituents alike metabolic pathway in the CMM as reaching meridian material foundation, and "image", as the function of the zang-fu viscera related with meridians, and "symptom", the states of them, retroacted, represented and explored by the corresponding meridianed constituents in the CMM as quantitative pharmacologic parameters,also modified by special acupuncture points, finally to establish the new method of reaching meridian according to meridian point-medicine action and also to investigate the relations between the constituents in the CMM and network targets of disease as to kill two birds with one arrow. There are imprinting relationships among "materials", "image", "symptom" versus CMM, zang-fu viscera function related with meridians, their function status respectively, which are modified by acupuncture merisian points. The point-medicine method for assuring reaching meridian is the most simple way to investigate reaching meridian for CMM, is also a important way to investigate visceral and meridianal manifestations.
Kholeif, S A
2001-06-01
A new method that belongs to the differential category for determining the end points from potentiometric titration curves is presented. It uses a preprocess to find first derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locate the end point, usually as a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method using linear least-squares method validation and multifactor data analysis are covered. The new method is generally applied to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method in almost all factors levels and gives accurate results comparable to the true or estimated true end points. Calculated end points from selected experimental titration curves compatible with the equivalence point category of methods, such as Gran or Fortuin, are also compared with the new method.
1982-12-01
academically justifiable, there remained the co.cern that to stray t3o far fros the realities of the Naval Supply System woula -esult i an academic...lesire to produce a recommendation for the near future, and there- fore embraces the reality of the SPLICE environment mor - than the generic TM model...modlal presented in Chapter iV. F. REINHART MID ARANA rN In Reference I (p. 55), th- iathors proposed a TI approach based upon a " Vitual terminal" manag
Sigma-Point Particle Filter for Parameter Estimation in a Multiplicative Noise Environment
Directory of Open Access Journals (Sweden)
Youmin Tang
2011-12-01
Full Text Available A pre-requisite for the “optimal estimate” by the ensemble-based Kalman filter (EnKF is the Gaussian assumption for background and observation errors, which is often violated when the errors are multiplicative, even for a linear system. This study first explores the challenge of the multiplicative noise to the current EnKF schemes. Then, a Sigma Point Kalman Filter based Particle Filter (SPPF is presented as an alternative to solve the issues associated with multiplicative noise. The classic Lorenz '63 model and a higher dimensional Lorenz '96 model are used as test beds for the data assimilation experiments. Performance of the SPPF algorithm is compared against a standard EnKF as well as an advanced square-root Sigma-Point Kalman Filters (SPKF. The results show that the SPPF outperforms the EnKF and the square-root SPKF in the presence of multiplicative noise. The super ensemble structure of the SPPF makes it computationally attractive compared to the standard Particle Filter (PF.
Diagnosis of solid breast lesions by elastography 5-point score and strain ratio method
Energy Technology Data Exchange (ETDEWEB)
Zhao, Qiao Ling, E-mail: imagingzhaoql@126.com [Department of Ultrasound, the First Affiliated Hospital, Medical College of Xi' an Jiaotong University, Xi' an Yanta West Road No. 277, Shaanxi 710061 (China); Ruan, Li Tao, E-mail: ruanlitao@163.com [Department of Ultrasound, the First Affiliated Hospital, Medical College of Xi' an Jiaotong University, Xi' an Yanta West Road No. 277, Shaanxi 710061 (China); Zhang, Hua, E-mail: Zhanghua54322@163.com [Department of Ultrasound, the First Affiliated Hospital, Medical College of Xi' an Jiaotong University, Xi' an Yanta West Road No. 277, Shaanxi 710061 (China); Yin, Yi Min, E-mail: yymxbh@yahoo.cn [Department of Ultrasound, the First Affiliated Hospital, Medical College of Xi' an Jiaotong University, Xi' an Yanta West Road No. 277, Shaanxi 710061 (China); Duan, Shao Xue, E-mail: doujiaoyueer@163.com [Department of Ultrasound, the First Affiliated Hospital, Medical College of Xi' an Jiaotong University, Xi' an Yanta West Road No. 277, Shaanxi 710061 (China)
2012-11-15
Purpose: To compare the diagnostic performance of 5-point scoring system and strain ratio by sonoelastography in the assessment of solid breast lesions. Material and methods: One hundred and eighty-seven solid masses in 155 patients were scanned by two-dimensional ultrasonography and sonoelastography. Elasticity scores were determined with a 5-point scoring method, and the strain ratio was based on the comparison of the average strain measured in the lesion with the adjacent breast tissue in the same depth. Pathological results were taken as gold standards to compare the diagnostic efficacy of two methods with clinical diagnostic test and receiver operating characteristic (ROC) curves. Results: Among 187 lesions, 130 were benign and 57 were malignant. The mean scores (1.62 {+-} 0.69 vs 4.07 {+-} 0.26, P < 0.05) and strain ratios (2.06 {+-} 1.27 vs 6.66 {+-} 4.62, P < 0.05) were significantly higher of malignant than benign lesions. The area under the curve for the 5-point scoring system and for strain ratio-based elastographic analysis was 0.892 and 0.909, respectively (P > 0.05). For 5-point scoring, sonoelastography had 84.2% sensitivity, 84.6% specificity, 84.5% accuracy, 70.6% positive predictive value and 92.4% negative predictive value. When a cutoff point of 3.06 was used, sensitivity, specificity, accuracy, positive and negative predictive values were 87.7%, 88.5%, 88.2%, 76.9% and 94.3%, respectively for the strain ratio (P > 0.05). Conclusions: The 5-point scoring system and strain ratio has similar diagnostic performance, and the strain ratio could be more objective to differentiate the masses when those masses were difficult to be judged by using 5-point scoring system in sonoelastographic images.
Energy Technology Data Exchange (ETDEWEB)
Liu, Wenyang [Department of Bioengineering, University of California, Los Angeles, California 90095 (United States); Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J.; Sawant, Amit [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas 75390 (United States); Ruan, Dan, E-mail: druan@mednet.ucla.edu [Department of Bioengineering, University of California, Los Angeles, California 90095 and Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)
2015-11-15
Purpose: To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). Methods: The authors have developed a level-set based surface reconstruction method on point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. By contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking by eliminating the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiment, the authors generated a series of surfaces each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched area had different degree of noise and missing levels, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated reconstructed surfaces by comparing against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiment, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground-truth, the authors qualitatively validated reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. Results: On phantom point clouds, their method
Mathematical Method for Predicting Nickel Deposit Based on Data from Drilling Points
Directory of Open Access Journals (Sweden)
Edi Cahyono
2011-01-01
Full Text Available In this article we discuss several methods for predicting nickel ore content inside the soil under a given area/region. The prediction is the main objective of the exploration activity which is very important for conducting the exploitation activity from economic point of view. The prediction methods are based on the data obtained from the drilling activity at several ‘points’. The data yields information on the nickel density at those points. Nickel density over the region is approximated (with an approximate function by applying interpolation and/or extrapolation based on the data from those points. The nickel content is predicted by applying integral of the approximate function over the given region
Rachakonda, Prem; Muralikrishnan, Bala; Cournoyer, Luc; Cheok, Geraldine; Lee, Vincent; Shilling, Meghan; Sawyer, Daniel
2017-10-01
The Dimensional Metrology Group at the National Institute of Standards and Technology is performing research to support the development of documentary standards within the ASTM E57 committee. This committee is addressing the point-to-point performance evaluation of a subclass of 3D imaging systems called terrestrial laser scanners (TLSs), which are laser-based and use a spherical coordinate system. This paper discusses the usage of sphere targets for this effort, and methods to minimize the errors due to the determination of their centers. The key contributions of this paper include methods to segment sphere data from a TLS point cloud, and the study of some of the factors that influence the determination of sphere centers.
Interior Point Methods on GPU with application to Model Predictive Control
DEFF Research Database (Denmark)
Gade-Nielsen, Nicolai Fog
The goal of this thesis is to investigate the application of interior point methods to solve dynamical optimization problems, using a graphical processing unit (GPU) with a focus on problems arising in Model Predictice Control (MPC). Multi-core processors have been available for over ten years now......, and manycore processors, such as GPUs, have also become a standard component in any consumer computer. The GPU offers faster floating point operations and higher memory bandwidth than the CPU, but requires algorithms to be redesigned and implemented, to match the underlying architecture. A large number...... software package called GPUOPT, available under the non-restrictive MIT license. GPUOPT includes includes a primal-dual interior-point method, which supports both the CPU and the GPU. It is implemented as multiple components, where the matrix operations and solver for the Newton directions is separated...
Using MACBETH method for supplier selection in manufacturing environment
Directory of Open Access Journals (Sweden)
Prasad Karande
2013-04-01
Full Text Available Supplier selection is always found to be a complex decision-making problem in manufacturing environment. The presence of several independent and conflicting evaluation criteria, either qualitative or quantitative, makes the supplier selection problem a candidate to be solved by multi-criteria decision-making (MCDM methods. Even several MCDM methods have already been proposed for solving the supplier selection problems, the need for an efficient method that can deal with qualitative judgments related to supplier selection still persists. In this paper, the applicability and usefulness of measuring attractiveness by a categorical-based evaluation technique (MACBETH is demonstrated to act as a decision support tool while solving two real time supplier selection problems having qualitative performance measures. The ability of MACBETH method to quantify the qualitative performance measures helps to provide a numerical judgment scale for ranking the alternative suppliers and selecting the best one. The results obtained from MACBETH method exactly corroborate with those derived by the past researchers employing different mathematical approaches.
Alternate Location Method of a Robot Team in Unknown Environment
Institute of Scientific and Technical Information of China (English)
WANG Jian-zhong; LIU Jing-jing
2008-01-01
The alternate location method of a robot team is proposed. Three of the robots are kept still as beacon robots, not always the same ones, while the others are regarded as mobile robots. The mobile robots alternatively measure the distance between one of them and three beacon robots with ultrasonic measurement module. The distance data are combined with its dead-reckoning information using iterated extended Kalman filter(IEKF) to realize the optimal estimate of its position. According to the condition the future beacon robots positions should be desired ones, the target function and the nonlinear constrain equations are set up which are used by nonlinear optimization algorithm to estimate the position of the future beacon robots. By alternately changing the robots roles as active beacon, the alternate location in unknown environment can be realized. Process and result of the simulation test are given and the position estimation error is within±10mm, which proves the validity of this method.
Multiple Break-Points Detection in Array CGH Data via the Cross-Entropy Method.
Priyadarshana, W J R M; Sofronov, Georgy
2015-01-01
Array comparative genome hybridization (aCGH) is a widely used methodology to detect copy number variations of a genome in high resolution. Knowing the number of break-points and their corresponding locations in genomic sequences serves different biological needs. Primarily, it helps to identify disease-causing genes that have functional importance in characterizing genome wide diseases. For human autosomes the normal copy number is two, whereas at the sites of oncogenes it increases (gain of DNA) and at the tumour suppressor genes it decreases (loss of DNA). The majority of the current detection methods are deterministic in their set-up and use dynamic programming or different smoothing techniques to obtain the estimates of copy number variations. These approaches limit the search space of the problem due to different assumptions considered in the methods and do not represent the true nature of the uncertainty associated with the unknown break-points in genomic sequences. We propose the Cross-Entropy method, which is a model-based stochastic optimization technique as an exact search method, to estimate both the number and locations of the break-points in aCGH data. We model the continuous scale log-ratio data obtained by the aCGH technique as a multiple break-point problem. The proposed methodology is compared with well established publicly available methods using both artificially generated data and real data. Results show that the proposed procedure is an effective way of estimating number and especially the locations of break-points with high level of precision. Availability: The methods described in this article are implemented in the new R package breakpoint and it is available from the Comprehensive R Archive Network at http://CRAN.R-project.org/package=breakpoint.
A novel method of measuring the melting point of animal fats.
Lloyd, S S; Dawkins, S T; Dawkins, R L
2014-10-01
The melting point (TM) of fat is relevant to health, but available methods of determining TM are cumbersome. One of the standard methods of measuring TM for animal and vegetable fats is the slip point, also known as the open capillary method. This method is imprecise and not amenable to automation or mass testing. We have developed a technique for measuring TM of animal fat using the Rotor-Gene Q (Qiagen, Hilden, Germany). The assay has an intra-assay SD of 0.08°C. A single operator can extract and assay up to 250 samples of animal fat in 24 h, including the time to extract the fat from the adipose tissue. This technique will improve the quality of research into genetic and environmental contributions to fat composition of meat.
Experimental methods for studying microbial survival in extraterrestrial environments.
Olsson-Francis, Karen; Cockell, Charles S
2010-01-01
Microorganisms can be used as model systems for studying biological responses to extraterrestrial conditions; however, the methods for studying their response are extremely challenging. Since the first high altitude microbiological experiment in 1935 a large number of facilities have been developed for short- and long-term microbial exposure experiments. Examples are the BIOPAN facility, used for short-term exposure, and the EXPOSE facility aboard the International Space Station, used for long-term exposure. Furthermore, simulation facilities have been developed to conduct microbiological experiments in the laboratory environment. A large number of microorganisms have been used for exposure experiments; these include pure cultures and microbial communities. Analyses of these experiments have involved both culture-dependent and independent methods. This review highlights and discusses the facilities available for microbiology experiments, both in space and in simulation environments. A description of the microorganisms and the techniques used to analyse survival is included. Finally we discuss the implications of microbiological studies for future missions and for space applications. Copyright 2009 Elsevier B.V. All rights reserved.
DEFF Research Database (Denmark)
an overview of existing triangulation methods with emphasis on performance versus optimality, and will suggest a fast triangulation algorithm based on linear constraints. The structure and camera motion estimation in a SFM system is based on the minimization of some norm of the reprojection error between...... the 3D points and their images in the cameras. Most classical methods are based on minimizing the sum of squared errors, the L2 norm, after initializing the structure by an algebraic method ([2]). It has been shown (in [4] amongst others) that first, the algebraic method can produce initial estimates...
Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J; Sawant, Amit; Ruan, Dan
2015-11-01
To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). The authors have developed a level-set based surface reconstruction method on point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. By contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking by eliminating the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiment, the authors generated a series of surfaces each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched area had different degree of noise and missing levels, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated reconstructed surfaces by comparing against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiment, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground-truth, the authors qualitatively validated reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. On phantom point clouds, their method achieved submillimeter
Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José
2017-05-01
The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant-Friedrichs-Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational
Raykov, Tenko; Marcoulides, George A.
2015-01-01
A direct approach to point and interval estimation of Cronbach's coefficient alpha for multiple component measuring instruments is outlined. The procedure is based on a latent variable modeling application with widely circulated software. As a by-product, using sample data the method permits ascertaining whether the population discrepancy…
William J. Zielinski; Fredrick V. Schlexer; T. Luke George; Kristine L. Pilgrim; Michael K. Schwartz
2013-01-01
The Point Arena mountain beaver (Aplodontia rufa nigra) is federally listed as an endangered subspecies that is restricted to a small geographic range in coastal Mendocino County, California. Management of this imperiled taxon requires accurate information on its demography and vital rates. We developed noninvasive survey methods, using hair snares to sample DNA and to...
Novel methods for point-of-care diagnosis of nerve agent exposure (Abstract)
Noort, D.; Schans, M.J. van der; Fidder, A.; Verstappen, D.R.W.; Hulst, A.G.; Mars-Groenendijk, R.
2012-01-01
Methods to unequivocally and rapidly assess exposure to nerve agents are highly valuable from a military and security perspective. Within this framework we currently follow two different approaches towards rapid point-of-care diagnosis. Regarding the first approach we hypothesized that proteins in
Infeasible Interior-Point Methods for Linear Optimization Based on Large Neighborhood
Asadi, A.R.; Roos, C.
2015-01-01
In this paper, we design a class of infeasible interior-point methods for linear optimization based on large neighborhood. The algorithm is inspired by a full-Newton step infeasible algorithm with a linear convergence rate in problem dimension that was recently proposed by the second author.
Exploring the potential of the descending-point method to measure ...
African Journals Online (AJOL)
The descending-point method of vegetation survey proved effective in measuring meaningful plant cover changes during a grazing period. No significant changes in basal cover or plant height were detected. Changes in canopy spread and canopy cover could only be used to detect changes in utilization at levels lighter ...
DEFF Research Database (Denmark)
Sørensen, Chris Khadgi; Thach, Tine; Hovmøller, Mogens Støvring
2016-01-01
The fungus Puccinia striiformis causes yellow (stripe) rust on wheat worldwide. In the present article, new methods utilizing an engineered fluid (Novec 7100) as a carrier of urediniospores were compared with commonly used inoculation methods. In general, Novec 7100 facilitated a faster and more...... for the assessment of quantitative epidemiological parameters. New protocols for spray and point inoculation of P. striiformis on wheat are presented, along with the prospect for applying these in rust research and resistance breeding activities....
A Point-Set-Based Footprint Model and Spatial Ranking Method for Geographic Information Retrieval
Directory of Open Access Journals (Sweden)
Yong Gao
2016-07-01
Full Text Available In the recent big data era, massive spatial related data are continuously generated and scrambled from various sources. Acquiring accurate geographic information is also urgently demanded. How to accurately retrieve desired geographic information has become the prominent issue, needing to be resolved in high priority. The key technologies in geographic information retrieval are modeling document footprints and ranking documents based on their similarity evaluation. The traditional spatial similarity evaluation methods are mainly performed using a MBR (Minimum Bounding Rectangle footprint model. However, due to its nature of simplification and roughness, the results of traditional methods tend to be isotropic and space-redundant. In this paper, a new model that constructs the footprints in the form of point-sets is presented. The point-set-based footprint coincides the nature of place names in web pages, so it is redundancy-free, consistent, accurate, and anisotropic to describe the spatial extents of documents, and can handle multi-scale geographic information. The corresponding spatial ranking method is also presented based on the point-set-based model. The new similarity evaluation algorithm of this method firstly measures multiple distances for the spatial proximity across different scales, and then combines the frequency of place names to improve the accuracy and precision. The experimental results show that the proposed method outperforms the traditional methods with higher accuracies under different searching scenarios.
A point-value enhanced finite volume method based on approximate delta functions
Xuan, Li-Jun; Majdalani, Joseph
2018-02-01
We revisit the concept of an approximate delta function (ADF), introduced by Huynh (2011) [1], in the form of a finite-order polynomial that holds identical integral properties to the Dirac delta function when used in conjunction with a finite-order polynomial integrand over a finite domain. We show that the use of generic ADF polynomials can be effective at recovering and generalizing several high-order methods, including Taylor-based and nodal-based Discontinuous Galerkin methods, as well as the Correction Procedure via Reconstruction. Based on the ADF concept, we then proceed to formulate a Point-value enhanced Finite Volume (PFV) method, which stores and updates the cell-averaged values inside each element as well as the unknown quantities and, if needed, their derivatives on nodal points. The sharing of nodal information with surrounding elements saves the number of degrees of freedom compared to other compact methods at the same order. To ensure conservation, cell-averaged values are updated using an identical approach to that adopted in the finite volume method. Here, the updating of nodal values and their derivatives is achieved through an ADF concept that leverages all of the elements within the domain of integration that share the same nodal point. The resulting scheme is shown to be very stable at successively increasing orders. Both accuracy and stability of the PFV method are verified using a Fourier analysis and through applications to the linear wave and nonlinear Burgers' equations in one-dimensional space.
A Two-Point Newton Method Suitable for Nonconvergent Cases and with Super-Quadratic Convergence
Directory of Open Access Journals (Sweden)
Ababu Teklemariam Tiruneh
2013-01-01
Full Text Available An iterative formula based on Newton’s method alone is presented for the iterative solutions of equations that ensures convergence in cases where the traditional Newton Method may fail to converge to the desired root. In addition, the method has super-quadratic convergence of order 2.414 (i.e., . Newton method is said to fail in certain cases leading to oscillation, divergence to increasingly large number, or offshooting away to another root further from the desired domain or offshooting to an invalid domain where the function may not be defined. In addition when the derivative at the iteration point is zero, Newton method stalls. In most of these cases, hybrids of several methods such as Newton, bisection, and secant methods are suggested as substitute methods and Newton method is essentially blended with other methods or altogether abandoned. This paper argues that a solution is still possible in most of these cases by the application of Newton method alone without resorting to other methods and with the same computational effort (two functional evaluations per iteration like the traditional Newton method. In addition, the proposed modified formula based on Newton method has better convergence characteristics than the traditional Newton method.
DEFF Research Database (Denmark)
Choi, Uimin; Lee, Kyo-Beum; Blaabjerg, Frede
2013-01-01
This paper proposes a method to reduce the low-frequency neutral-point voltage oscillations. The neutral-point voltage oscillations are considerably reduced by adding a time-offset to the three phase turn-on times. The proper time-offset is simply calculated considering the phase currents and dwe...
Precision nutrition - review of methods for point-of-care assessment of nutritional status.
Srinivasan, Balaji; Lee, Seoho; Erickson, David; Mehta, Saurabh
2017-04-01
Precision nutrition encompasses prevention and treatment strategies for optimizing health that consider individual variability in diet, lifestyle, environment and genes by accurately determining an individual's nutritional status. This is particularly important as malnutrition now affects a third of the global population, with most of those affected or their care providers having limited means of determining their nutritional status. Similarly, program implementers often have no way of determining the impact or success of their interventions, thus hindering their scale-up. Exciting new developments in the area of point-of-care diagnostics promise to provide improved access to nutritional status assessment, as a first step towards enabling precision nutrition and tailored interventions at both the individual and community levels. In this review, we focus on the current advances in developing portable diagnostics for assessment of nutritional status at point-of-care, along with the numerous design challenges in this process and potential solutions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Takahiro Yamaguchi
2015-05-01
Full Text Available As smartphones become widespread, a variety of smartphone applications are being developed. This paper proposes a method for indoor localization (i.e., positioning that uses only smartphones, which are general-purpose mobile terminals, as reference point devices. This method has the following features: (a the localization system is built with smartphones whose movements are confined to respective limited areas. No fixed reference point devices are used; (b the method does not depend on the wireless performance of smartphones and does not require information about the propagation characteristics of the radio waves sent from reference point devices, and (c the method determines the location at the application layer, at which location information can be easily incorporated into high-level services. We have evaluated the level of localization accuracy of the proposed method by building a software emulator that modeled an underground shopping mall. We have confirmed that the determined location is within a small area in which the user can find target objects visually.
A new method to extract stable feature points based on self-generated simulation images
Long, Fei; Zhou, Bin; Ming, Delie; Tian, Jinwen
2015-10-01
Recently, image processing has got a lot of attention in the field of photogrammetry, medical image processing, etc. Matching two or more images of the same scene taken at different times, by different cameras, or from different viewpoints, is a popular and important problem. Feature extraction plays an important part in image matching. Traditional SIFT detectors reject the unstable points by eliminating the low contrast and edge response points. The disadvantage is the need to set the threshold manually. The main idea of this paper is to get the stable extremums by machine learning algorithm. Firstly we use ASIFT approach coupled with the light changes and blur to generate multi-view simulated images, which make up the set of the simulated images of the original image. According to the way of generating simulated images set, affine transformation of each generated image is also known. Instead of the traditional matching process which contain the unstable RANSAC method to get the affine transformation, this approach is more stable and accurate. Secondly we calculate the stability value of the feature points by the set of image with its affine transformation. Then we get the different feature properties of the feature point, such as DOG features, scales, edge point density, etc. Those two form the training set while stability value is the dependent variable and feature property is the independent variable. At last, a process of training by Rank-SVM is taken. We will get a weight vector. In use, based on the feature properties of each points and weight vector calculated by training, we get the sort value of each feature point which refers to the stability value, then we sort the feature points. In conclusion, we applied our algorithm and the original SIFT detectors to test as a comparison. While in different view changes, blurs, illuminations, it comes as no surprise that experimental results show that our algorithm is more efficient.
Human health and climate change: leverage points for adaptation in urban environments.
Proust, Katrina; Newell, Barry; Brown, Helen; Capon, Anthony; Browne, Chris; Burton, Anthony; Dixon, Jane; Mu, Lisa; Zarafu, Monica
2012-06-01
The design of adaptation strategies that promote urban health and well-being in the face of climate change requires an understanding of the feedback interactions that take place between the dynamical state of a city, the health of its people, and the state of the planet. Complexity, contingency and uncertainty combine to impede the growth of such systemic understandings. In this paper we suggest that the collaborative development of conceptual models can help a group to identify potential leverage points for effective adaptation. We describe a three-step procedure that leads from the development of a high-level system template, through the selection of a problem space that contains one or more of the group's adaptive challenges, to a specific conceptual model of a sub-system of importance to the group. This procedure is illustrated by a case study of urban dwellers' maladaptive dependence on private motor vehicles. We conclude that a system dynamics approach, revolving around the collaborative construction of a set of conceptual models, can help communities to improve their adaptive capacity, and so better meet the challenge of maintaining, and even improving, urban health in the face of climate change.
Human Health and Climate Change: Leverage Points for Adaptation in Urban Environments
Directory of Open Access Journals (Sweden)
Monica Zarafu
2012-06-01
Full Text Available The design of adaptation strategies that promote urban health and well-being in the face of climate change requires an understanding of the feedback interactions that take place between the dynamical state of a city, the health of its people, and the state of the planet. Complexity, contingency and uncertainty combine to impede the growth of such systemic understandings. In this paper we suggest that the collaborative development of conceptual models can help a group to identify potential leverage points for effective adaptation. We describe a three-step procedure that leads from the development of a high-level system template, through the selection of a problem space that contains one or more of the group’s adaptive challenges, to a specific conceptual model of a sub-system of importance to the group. This procedure is illustrated by a case study of urban dwellers’ maladaptive dependence on private motor vehicles. We conclude that a system dynamics approach, revolving around the collaborative construction of a set of conceptual models, can help communities to improve their adaptive capacity, and so better meet the challenge of maintaining, and even improving, urban health in the face of climate change.
A Novel Machine Learning Based Method of Combined Dynamic Environment Prediction
Directory of Open Access Journals (Sweden)
Wentao Mao
2013-01-01
Full Text Available In practical engineerings, structures are often excited by different kinds of loads at the same time. How to effectively analyze and simulate this kind of dynamic environment of structure, named combined dynamic environment, is one of the key issues. In this paper, a novel prediction method of combined dynamic environment is proposed from the perspective of data analysis. First, the existence of dynamic similarity between vibration responses of the same structure under different boundary conditions is theoretically proven. It is further proven that this similarity can be established by a multiple-input multiple-output regression model. Second, two machine learning algorithms, multiple-dimensional support vector machine and extreme learning machine, are introduced to establish this model. To test the effectiveness of this method, shock and stochastic white noise excitations are acted on a cylindrical shell with two clamps to simulate different dynamic environments. The prediction errors on various measuring points are all less than ±3 dB, which shows that the proposed method can predict the structural vibration response under one boundary condition by means of the response under another condition in terms of precision and numerical stability.
A Survey on Methods for Reconstructing Surfaces from Unorganized Point Sets
Directory of Open Access Journals (Sweden)
Vilius Matiukas
2011-08-01
Full Text Available This paper addresses the issue of reconstructing and visualizing surfaces from unorganized point sets. These can be acquired using different techniques, such as 3D-laser scanning, computerized tomography, magnetic resonance imaging and multi-camera imaging. The problem of reconstructing surfaces from their unorganized point sets is common for many diverse areas, including computer graphics, computer vision, computational geometry or reverse engineering. The paper presents three alternative methods that all use variations in complementary cones to triangulate and reconstruct the tested 3D surfaces. The article evaluates and contrasts three alternatives.Article in English
A DATA DRIVEN METHOD FOR BUILDING RECONSTRUCTION FROM LiDAR POINT CLOUDS
Directory of Open Access Journals (Sweden)
M. Sajadian
2014-10-01
Full Text Available Airborne laser scanning, commonly referred to as LiDAR, is a superior technology for three-dimensional data acquisition from Earth's surface with high speed and density. Building reconstruction is one of the main applications of LiDAR system which is considered in this study. For a 3D reconstruction of the buildings, the buildings points should be first separated from the other points such as; ground and vegetation. In this paper, a multi-agent strategy has been proposed for simultaneous extraction and segmentation of buildings from LiDAR point clouds. Height values, number of returned pulse, length of triangles, direction of normal vectors, and area are five criteria which have been utilized in this step. Next, the building edge points are detected using a new method named "Grid Erosion". A RANSAC based technique has been employed for edge line extraction. Regularization constraints are performed to achieve the final lines. Finally, by modelling of the roofs and walls, 3D building model is reconstructed. The results indicate that the proposed method could successfully extract the building from LiDAR data and generate the building models automatically. A qualitative and quantitative assessment of the proposed method is then provided.
[Method to Calculate the Yield Load of Bone Plate in Four-point Bending Test].
Jia, Xiaohang; Zhou, Jun; Ma, Jun; Wen, Yan
2015-09-01
This paper developed a calculation method to acquire the yield load P of bone plate during four-point bending test. This method is based on the displacement--force (δ-F) curve function f(M)(δ) obtained from the test, each slope of the curve was calculated using piecewise smooth function and the line segment in f(M)(δ) elastic deformation area was searched by setting the minimum slope T. Slope S was obtained through linear fit so as to build parallel displacement function f(L)(δ). Then, approximating intersection point of f(M)(δ) and f(L)(δ) was obtained through linear interpolation. Thus, yield load P was acquired. The method in the paper was loyal to YY/T 0342-2002 regulation and was liable to program calculation. The calculating process was nothing to do with whether the initial point during the test was preloaded or unloaded, and there was no need to correct the original point. In addition, T was set in an ideal fitting level guaranteed by the fitting coefficient of determination R2, and thus S was very close to the real value, and P was with a high accuracy.
An Improved Method for Power-Line Reconstruction from Point Cloud Data
Directory of Open Access Journals (Sweden)
Bo Guo
2016-01-01
Full Text Available This paper presents a robust algorithm to reconstruct power-lines using ALS technology. Point cloud data are automatically classified into five target classes before reconstruction. In order to improve upon the defaults of only using the local shape properties of a single power-line span in traditional methods, the distribution properties of power-line group between two neighbor pylons and contextual information of related pylon objects are used to improve the reconstruction results. First, the distribution properties of power-line sets are detected using a similarity detection method. Based on the probability of neighbor points belonging to the same span, a RANSAC rule based algorithm is then introduced to reconstruct power-lines through two important advancements: reliable initial parameters fitting and efficient candidate sample detection. Our experiments indicate that the proposed method is effective for reconstruction of power-lines from complex scenarios.
Directory of Open Access Journals (Sweden)
Wilson Rodríguez Calderón
2015-04-01
Full Text Available When we need to determine the solution of a nonlinear equation there are two options: closed-methods which use intervals that contain the root and during the iterative process reduce the size of natural way, and, open-methods that represent an attractive option as they do not require an initial interval enclosure. In general, we know open-methods are more efficient computationally though they do not always converge. In this paper we are presenting a divergence case analysis when we use the method of fixed point iteration to find the normal height in a rectangular channel using the Manning equation. To solve this problem, we propose applying two strategies (developed by authors that allow to modifying the iteration function making additional formulations of the traditional method and its convergence theorem. Although Manning equation is solved with other methods like Newton when we use the iteration method of fixed-point an interesting divergence situation is presented which can be solved with a convergence higher than quadratic over the initial iterations. The proposed strategies have been tested in two cases; a study of divergence of square root of real numbers was made previously by authors for testing. Results in both cases have been successful. We present comparisons because are important for seeing the advantage of proposed strategies versus the most representative open-methods.
Ghane, Alireza; Mazaheri, Mehdi; Mohammad Vali Samani, Jamal
2016-09-15
The pollution of rivers due to accidental spills is a major threat to environment and human health. To protect river systems from accidental spills, it is essential to introduce a reliable tool for identification process. Backward Probability Method (BPM) is one of the most recommended tools that is able to introduce information related to the prior location and the release time of the pollution. This method was originally developed and employed in groundwater pollution source identification problems. One of the objectives of this study is to apply this method in identifying the pollution source location and release time in surface waters, mainly in rivers. To accomplish this task, a numerical model is developed based on the adjoint analysis. Then the developed model is verified using analytical solution and some real data. The second objective of this study is to extend the method to pollution source identification in river networks. In this regard, a hypothetical test case is considered. In the later simulations, all of the suspected points are identified, using only one backward simulation. The results demonstrated that all suspected points, determined by the BPM could be a possible pollution source. The proposed approach is accurate and computationally efficient and does not need any simplification in river geometry and flow. Due to this simplicity, it is highly recommended for practical purposes. Copyright © 2016. Published by Elsevier Ltd.
[Online soft sensing method for freezing point of diesel fuel based on NIR spectrometry].
Wu, De-Hui
2008-07-01
To solve the problems of real-time online measurement for the freezing point of diesel fuel products, a soft sensing method by near-infrared (NIR) spectrometry was proposed. Firstly, the information of diesel fuel samples in the spectral region of 750-1 550 nm was extracted by spectrum analyzer, and the polynomial convolution algorithm was also applied in spectrogram smoothness, baseline correction and standardization. Principal component analysis (PCA) was then used to extract the features of NIR spectrum data sets, which not only reduced the number of input dimension, but increased their sensitivity to output. Finally the soft sensing model for freezing point was built using SVR algorithm. One hundred fifty diesel fuel samples were used as experimental materials, 100 of which were used as training (calibrating) samples and the others as testing samples. Four hundred and one dimensional original NIR absorption spectrum data sets, through PCA, were reduced to 6 dimensions. To investigate the measuring effect, the freezing points of the testing samples were estimated by four different soft sensing models, BP, SVR, PCA-BP and PCA+SVR. Experimental results show that (1) the soft sensing models using PCA to extract features are generally better than those used directly in spectrum wavelength domain; (2) SVR based model outperforms its main competitors-BP model in the limited training data, the error of which is only half of the latter; (3) The MSE between the estimated values by the presented method and the standard chemical values of freezing point by condensing method are less than 4.2. The research suggests that the proposed method can be used in fast measurement of the freezing point of diesel fuel products by NIRS.
da Silva, Rodrigo; Pearce, Jonathan V.; Machin, Graham
2017-06-01
The fixed points of the International Temperature Scale of 1990 (ITS-90) are the basis of the calibration of standard platinum resistance thermometers (SPRTs). Impurities in the fixed point material at the level of parts per million can give rise to an elevation or depression of the fixed point temperature of order of millikelvins, which often represents the most significant contribution to the uncertainty of SPRT calibrations. A number of methods for correcting for the effect of impurities have been advocated, but it is becoming increasingly evident that no single method can be used in isolation. In this investigation, a suite of five aluminium fixed point cells (defined ITS-90 freezing temperature 660.323 °C) have been constructed, each cell using metal sourced from a different supplier. The five cells have very different levels and types of impurities. For each cell, chemical assays based on the glow discharge mass spectroscopy (GDMS) technique have been obtained from three separate laboratories. In addition a series of high quality, long duration freezing curves have been obtained for each cell, using three different high quality SPRTs, all measured under nominally identical conditions. The set of GDMS analyses and freezing curves were then used to compare the different proposed impurity correction methods. It was found that the most consistent corrections were obtained with a hybrid correction method based on the sum of individual estimates (SIE) and overall maximum estimate (OME), namely the SIE/Modified-OME method. Also highly consistent was the correction technique based on fitting a Scheil solidification model to the measured freezing curves, provided certain well defined constraints are applied. Importantly, the most consistent methods are those which do not depend significantly on the chemical assay.
Directory of Open Access Journals (Sweden)
Agostinho Linhares
2014-01-01
Full Text Available A base station (BS antenna operates in accordance with the established exposure limits if the values of electromagnetic fields (EMF measured in points of maximum exposure are below these limits. In the case of BS in open areas, the maximum exposure to EMF probably occurs in the antenna’s boresight direction, from a few tens to a few hundred meters away. This is not a typical scenery for urban environments. However, in the line of sight (LOS situation, the region of maximum exposure can still be analytically estimated with good results. This paper presents a methodology for the choice of measurement points in urban areas in order to assess compliance with the limits for exposure to EMF.
Multi-point probe for testing electrical properties and a method of producing a multi-point probe
DEFF Research Database (Denmark)
2011-01-01
A multi-point probe for testing electrical properties of a number of specific locations of a test sample comprises a supporting body defining a first surface, a first multitude of conductive probe arms (101-101'''), each of the probe arms defining a proximal end and a distal end. The probe arms...... are connected to the supporting body (105) at the proximal ends, and the distal ends are freely extending from the supporting body, giving individually flexible motion to the probe arms. Each of the probe arms defines a maximum width perpendicular to its perpendicular bisector and parallel with its line...... of contact with the supporting body, and a maximum thickness perpendicular to its perpendicular bisector and its line of contact with the supporting body. Each of the probe arms has a specific area or point of contact (111-111''') at its distal end for contacting a specific location among the number...
Comparison of four nonstationary hydrologic design methods for changing environment
Yan, Lei; Xiong, Lihua; Guo, Shenglian; Xu, Chong-Yu; Xia, Jun; Du, Tao
2017-08-01
The hydrologic design of nonstationary flood extremes is an emerging field that is essential for water resources management and hydrologic engineering design to cope with changing environment. This paper aims to investigate and compare the capability of four nonstationary hydrologic design strategies, including the expected number of exceedances (ENE), design life level (DLL), equivalent reliability (ER), and average design life level (ADLL), with the last three methods taking into consideration the design life of the project. The confidence intervals of the calculated design floods were also estimated using the nonstationary bootstrap approach. A comparison of these four methods was performed using the annual maximum flood series (AMFS) of the Weihe River basin, Jinghe River basin, and Assunpink Creek basin. The results indicated that ENE, ER and ADLL yielded the same or very similar design values and confidence intervals for both increasing and decreasing trends of AMFS considered. DLL also yields similar design values if the relationship between DLL and ER/ADLL return periods is considered. Both ER and ADLL are recommended for practical use as they have associated design floods with the design life period of projects and yield reasonable design quantiles and confidence intervals. Furthermore, by assuming that the design results using either a stationary or nonstationary hydrologic design strategy should have the same reliability, the ER method enables us to solve the nonstationary hydrologic design problems by adopting the stationary design reliability, thus bridging the gap between stationary and nonstationary design criteria.
Authentication Method for Privacy Protection in Smart Grid Environment
Directory of Open Access Journals (Sweden)
Do-Eun Cho
2014-01-01
Full Text Available Recently, the interest in green energy is increasing as a means to resolve problems including the exhaustion of the energy source and, effective management of energy through the convergence of various fields. Therefore, the projects of smart grid which is called intelligent electrical grid for the accomplishment of low carbon green growth are being carried out in a rush. However, as the IT is centered upon the electrical grid, the shortage of IT also appears in smart grid and the complexity of convergence is aggravating the problem. Also, various personal information and payment information within the smart grid are gradually becoming big data and target for external invasion and attack; thus, there is increase in concerns for this matter. The purpose of this study is to analyze the security vulnerabilities and security requirement within smart grid and the authentication and access control method for privacy protection within home network. Therefore, we propose a secure access authentication and remote control method for user’s home device within home network environment, and we present their security analysis. The proposed access authentication method blocks the unauthorized external access and enables secure remote access to home network and its devices with a secure message authentication protocol.
SINGLE TREE DETECTION FROM AIRBORNE LASER SCANNING DATA USING A MARKED POINT PROCESS BASED METHOD
Directory of Open Access Journals (Sweden)
J. Zhang
2013-05-01
Full Text Available Tree detection and reconstruction is of great interest in large-scale city modelling. In this paper, we present a marked point process model to detect single trees from airborne laser scanning (ALS data. We consider single trees in ALS recovered canopy height model (CHM as a realization of point process of circles. Unlike traditional marked point process, we sample the model in a constraint configuration space by making use of image process techniques. A Gibbs energy is defined on the model, containing a data term which judge the fitness of the model with respect to the data, and prior term which incorporate the prior knowledge of object layouts. We search the optimal configuration through a steepest gradient descent algorithm. The presented hybrid framework was test on three forest plots and experiments show the effectiveness of the proposed method.
A novel point-of-use water treatment method by antimicrobial nanosilver textile material.
Liu, Hongjun; Tang, Xiaosheng; Liu, Qishan
2014-12-01
Pathogenic bacteria are one of the main reasons for worldwide water-borne disease causing a big threat to public health, hence there is an urgent need to develop cost-effective water treatment technologies. Nano-materials in point-of-use systems have recently attracted considerable research and commercial interests as they can overcome the drawbacks of traditional water treatment techniques. We have developed a new point-of-use water disinfection kit with nanosilver textile material. The silver nanoparticles were in-situ generated and immobilized onto cotton textile, followed by fixing to a plastic tube to make a water disinfection kit. By soaking and stirring the kit in water, pathogenic bacteria have been killed within minutes. The silver leaching from the kit was insignificant, with values water. Herein, the nanosilver textile water disinfection kit could be a new, efficient and cost-effective point-of-use water treatment method for rural areas and emergency preparedness.
Estes, S.; Haynes, J.; Hamdan, M. Al; Estes, M.; Sprigg, W.
2009-01-01
Health providers/researchers need environmental data to study and understand the geographic, environmental, and meteorological differences in disease. Satellite remote sensing of the environment offers a unique vantage point that can fill in the gaps of environmental, spatial, and temporal data for tracking disease. The field of geospatial health remains in its infancy, and this program will demonstrate the need for collaborations between multi-disciplinary research groups to develop the full potential. NASA will discuss the Public Health Projects developed to work with Grantees and the CDC while providing them with information on opportunities for future collaborations with NASA for future research.
The Well Organised Working Environment: A mixed methods study.
Bradley, Dominique Kim Frances; Griffin, Murray
2016-03-01
The English National Health Service Institute for Innovation and Improvement designed a series of programmes called The Productive Series. These are innovations designed to help healthcare staff reduce inefficiency and improve quality, and have been implemented in healthcare organisations in at least 14 different countries. This paper examines an implementation of the first module of the Productive Community Services programme called 'The Well Organised Working Environment'. The quantitative component aims to identify the quantitative outcomes and impact of the implementation of the Well Organised Working Environment module. The qualitative component aims to describe the contexts, mechanisms and outcomes evident during the implementation, and to consider the implication of these findings for healthcare staff, commissioners and implementation teams. Mixed methods explanatory sequential design. Community Healthcare Organisation in East Anglia, England. For the quantitative data, participants were 73 staff members that completed End of Module Assessments. Data from 25 services that carried out an inventory of stock items stored were also analysed. For the qualitative element, participants were 45 staff members working in the organisation during the implementation, and four members of the Productive Community Services Implementation Team. Staff completed assessments at the end of the module implementation, and the value of items stored by clinical services was recorded. After the programme concluded, semi-structured interviews with staff and a focus group with members of the Productive Community Services implementation team were analysed using Framework Analysis employing the principles of Realist Evaluation. 62.5% respondents (n=45) to the module assessment reported an improvement in their working environment, 37.5% (n=27) reported that their working environment stayed the same or deteriorated. The reduction of the value of items stored by services ranged from £4 to
Energy Technology Data Exchange (ETDEWEB)
Becquart, C.S. [Laboratoire de Metalurgie Physique et Genie des Materiaux, UMR 8517, Universite Lille-1, F-59655 Villeneuve d' Ascq Cedex (France)]. E-mail: charlotte.becquart@univ-lille1.fr; Domain, C. [Laboratoire de Metalurgie Physique et Genie des Materiaux, UMR 8517, Universite Lille-1, F-59655 Villeneuve d' Ascq Cedex (France); EDF-R and D Dpt Materiaux et Mecanique des Composants, Les renardieres, F-77818 Moret sur Loing Cedex (France); Malerba, L. [SCK.CEN, Reactor Materials Research Unit, B-2400 Mol (Belgium); Hou, M. [Physique des Solides Irradies et des Nanostructures CP234, Universite Libre de Bruxelles, Bd du Triomphe, B-1050 Brussels (Belgium)
2005-01-01
Displacement cascades obtained by full molecular dynamics and in its binary collision approximation, as well as random point defect distributions, all having similar overall morphologies, are used as input for long-term radiation damage simulation by an object kinetic Monte Carlo method in {alpha}-iron. This model treats naturally point defect fluxes on cascades regions, resulting from cascades generated in other regions, in a realistic way. This study shows the significant effect of the internal structure of displacement cascades and, in particular, SIA agglomeration on the long-term evolution of defect cluster growth under irradiation.
Estimation Methods of the Point Spread Function Axial Position: A Comparative Computational Study
Directory of Open Access Journals (Sweden)
Javier Eduardo Diaz Zamboni
2017-01-01
Full Text Available The precise knowledge of the point spread function is central for any imaging system characterization. In fluorescence microscopy, point spread function (PSF determination has become a common and obligatory task for each new experimental device, mainly due to its strong dependence on acquisition conditions. During the last decade, algorithms have been developed for the precise calculation of the PSF, which fit model parameters that describe image formation on the microscope to experimental data. In order to contribute to this subject, a comparative study of three parameter estimation methods is reported, namely: I-divergence minimization (MIDIV, maximum likelihood (ML and non-linear least square (LSQR. They were applied to the estimation of the point source position on the optical axis, using a physical model. Methods’ performance was evaluated under different conditions and noise levels using synthetic images and considering success percentage, iteration number, computation time, accuracy and precision. The main results showed that the axial position estimation requires a high SNR to achieve an acceptable success level and higher still to be close to the estimation error lower bound. ML achieved a higher success percentage at lower SNR compared to MIDIV and LSQR with an intrinsic noise source. Only the ML and MIDIV methods achieved the error lower bound, but only with data belonging to the optical axis and high SNR. Extrinsic noise sources worsened the success percentage, but no difference was found between noise sources for the same method for all methods studied.
A Novel Complementary Method for the Point-Scan Nondestructive Tests Based on Lamb Waves
Directory of Open Access Journals (Sweden)
Rahim Gorgin
2014-01-01
Full Text Available This study presents a novel area-scan damage identification method based on Lamb waves which can be used as a complementary method for point-scan nondestructive techniques. The proposed technique is able to identify the most probable locations of damages prior to point-scan test which lead to decreasing the time and cost of inspection. The test-piece surface was partitioned with some smaller areas and the damage probability presence of each area was evaluated. A0 mode of Lamb wave was generated and collected using a mobile handmade transducer set at each area. Subsequently, a damage presence probability index (DPPI based on the energy of captured responses was defined for each area. The area with the highest DPPI value highlights the most probable locations of damages in test-piece. Point-scan nondestructive methods can then be used once these areas are found to identify the damage in detail. The approach was validated by predicting the most probable locations of representative damages including through-thickness hole and crack in aluminum plates. The obtained experimental results demonstrated the high potential of developed method in defining the most probable locations of damages in structures.
Comparison of point-of-care-compatible lysis methods for bacteria and viruses.
Heiniger, Erin K; Buser, Joshua R; Mireles, Lillian; Zhang, Xiaohong; Ladd, Paula D; Lutz, Barry R; Yager, Paul
2016-09-01
Nucleic acid sample preparation has been an especially challenging barrier to point-of-care nucleic acid amplification tests in low-resource settings. Here we provide a head-to-head comparison of methods for lysis of, and nucleic acid release from, several pathogenic bacteria and viruses-methods that are adaptable to point-of-care usage in low-resource settings. Digestion with achromopeptidase, a mixture of proteases and peptidoglycan-specific hydrolases, followed by thermal deactivation in a boiling water bath, effectively released amplifiable nucleic acid from Staphylococcus aureus, Bordetella pertussis, respiratory syncytial virus, and influenza virus. Achromopeptidase was functional after dehydration and reconstitution, even after eleven months of dry storage without refrigeration. Mechanical lysis methods proved to be effective against a hard-to-lyse Mycobacterium species, and a miniature bead-mill, the AudioLyse, is shown to be capable of releasing amplifiable DNA and RNA from this species. We conclude that point-of-care-compatible sample preparation methods for nucleic acid tests need not introduce amplification inhibitors, and can provide amplification-ready lysates from a wide range of bacterial and viral pathogens. Copyright © 2016. Published by Elsevier B.V.
Point-source localization in blurred images by a frequency-domain eigenvector-based method.
Gunsay, M; Jeffs, B D
1995-01-01
We address the problem of resolving and localizing blurred point sources in intensity images. Telescopic star-field images blurred by atmospheric turbulence or optical aberrations are typical examples of this class of images, a new approach to image restoration is introduced, which is a generalization of 2-D sensor array processing techniques originating from the field of direction of arrival estimation (DOA). It is shown that in the frequency domain, blurred point source images can be modeled with a structure analogous to the response of linear sensor arrays to coherent signal sources. Thus, the problem may be cast into the form of DOA estimation, and eigenvector based subspace decomposition algorithms, such as MUSIC, may be adapted to search for these point sources. For deterministic point images the signal subspace is degenerate, with rank one, so rank enhancement techniques are required before MUSIC or related algorithms may be used. The presence of blur prohibits the use of existing rank enhancement methods. A generalized array smoothing method is introduced for rank enhancement in the presence of blur, and to regularize the ill posed nature of the image restoration. The new algorithm achieves inter-pixel super-resolution and is computationally efficient. Examples of star image deblurring using the algorithm are presented.
A simple method for determining the critical point of the soil water retention curve
DEFF Research Database (Denmark)
Chen, Chong; Hu, Kelin; Ren, Tusheng
2017-01-01
he transition point between capillary water and adsorbed water, which is the critical point Pc [defined by the critical matric potential (ψc) and the critical water content (θc)] of the soil water retention curve (SWRC), demarcates the energy and water content region where flow is dominated...... by capillarity or liquid film flow. Accurate estimation of Pc is crucial for modeling water movement in the vadose zone. By modeling the dry-end (matric potential –104.2 cm H2O) sections of the SWRC using the models of Campbell and Shiozawa, and of van Genuchten......, a fixed tangent line method was developed to estimate Pc as an alternative to the commonly used flexible tangent line method. The relationships between Pc, and particle-size distribution and specific surface area (SSA) were analyzed. For 27 soils with various textures, the mean RMSE of water content from...
Lee, Joseph G L; Henriksen, Lisa; Myers, Allison E; Dauphinee, Amanda L; Ribisl, Kurt M
2014-03-01
Over four-fifths of reported expenditures for marketing tobacco products occur at the retail point of sale (POS). To date, no systematic review has synthesised the methods used for surveillance of POS marketing. This review sought to describe the audit objectives, methods and measures used to study retail tobacco environments. We systematically searched 11 academic databases for papers indexed on or before 14 March 2012, identifying 2906 papers. Two coders independently reviewed each abstract or full text to identify papers with the following criteria: (1) data collectors visited and assessed (2) retail environments using (3) a data collection instrument for (4) tobacco products or marketing. We excluded papers where limited measures of products and/or marketing were incidental. Two abstractors independently coded included papers for research aims, locale, methods, measures used and measurement properties. We calculated descriptive statistics regarding the use of four P's of marketing (product, price, placement, promotion) and for measures of study design, sampling strategy and sample size. We identified 88 store audit studies. Most studies focus on enumerating the number of signs or other promotions. Several strengths, particularly in sampling, are noted, but substantial improvements are indicated in the reporting of reliability, validity and audit procedures. Audits of POS tobacco marketing have made important contributions to understanding industry behaviour, the uses of marketing and resulting health behaviours. Increased emphasis on standardisation and the use of theory are needed in the field. We propose key components of audit methodology that should be routinely reported.
An ECL-PCR method for quantitative detection of point mutation
Zhu, Debin; Xing, Da; Shen, Xingyan; Chen, Qun; Liu, Jinfeng
2005-04-01
A new method for identification of point mutations was proposed. Polymerase chain reaction (PCR) amplification of a sequence from genomic DNA was followed by digestion with a kind of restriction enzyme, which only cut the wild-type amplicon containing its recognition site. Reaction products were detected by electrochemiluminescence (ECL) assay after adsorption of the resulting DNA duplexes to the solid phase. One strand of PCR products carries biotin to be bound on a streptavidin-coated microbead for sample selection. Another strand carries Ru(bpy)32+ (TBR) to react with tripropylamine (TPA) to emit light for ECL detection. The method was applied to detect a specific point mutation in H-ras oncogene in T24 cell line. The results show that the detection limit for H-ras amplicon is 100 fmol and the linear range is more than 3 orders of magnitude, thus, make quantitative analysis possible. The genotype can be clearly discriminated. Results of the study suggest that ECL-PCR is a feasible quantitative method for safe, sensitive and rapid detection of point mutation in human genes.
Energy Technology Data Exchange (ETDEWEB)
Xia, Donghui [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, 430074 Wuhan (China); Huang, Mei [Southwestern Institute of Physics, 610041 Chengdu (China); Wang, Zhijiang, E-mail: wangzj@hust.edu.cn [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, 430074 Wuhan (China); Zhang, Feng [Southwestern Institute of Physics, 610041 Chengdu (China); Zhuang, Ge [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, 430074 Wuhan (China)
2016-10-15
Highlights: • The integral staggered point-matching method for design of polarizers on the ECH systems is presented. • The availability of the integral staggered point-matching method is checked by numerical calculations. • Two polarizers are designed with the integral staggered point-matching method and the experimental results are given. - Abstract: The reflective diffraction gratings are widely used in the high power electron cyclotron heating systems for polarization strategy. This paper presents a method which we call “the integral staggered point-matching method” for design of reflective diffraction gratings. This method is based on the integral point-matching method. However, it effectively removes the convergence problems and tedious calculations of the integral point-matching method, making it easier to be used for a beginner. A code is developed based on this method. The calculation results of the integral staggered point-matching method are compared with the integral point-matching method, the coordinate transformation method and the low power measurement results. It indicates that the integral staggered point-matching method can be used as an optional method for the design of reflective diffraction gratings in electron cyclotron heating systems.
Leuko, S; Goh, F; Ibáñez-Peral, R; Burns, B P; Walter, M R; Neilan, B A
2008-03-01
The extraction of nucleic acids from a given environment marks a crucial and essential starting point in any molecular investigation. Members of Halococcus spp. are known for their rigid cell walls, and are thus difficult to lyse and could potentially be overlooked in an environment. Furthermore, the lack of a suitable lysis method hinders subsequent molecular analysis. The effects of six different DNA extraction methods were tested on Halococcus hamelinensis, Halococcus saccharolyticus and Halobacterium salinarum NRC-1 as well as on an organic rich, highly carbonated sediment from stromatolites spiked with Halococcus hamelinensis. The methods tested were based on physical disruption (boiling and freeze/thawing), chemical lysis (Triton X-100, potassium ethyl xanthogenate (XS) buffer and CTAB) and on enzymatic lysis (lysozyme). Results showed that boiling and freeze/thawing had little effect on the lysis of both Halococcus strains. Methods based on chemical lysis (Triton X-100, XS-buffer, and CTAB) showed the best results, however, Triton X-100 treatment failed to produce visible DNA fragments. Using a combination of bead beating, chemical lysis with lysozyme, and thermal shock, lysis of cells was achieved however DNA was badly sheared. Lysis of cells and DNA extraction of samples from spiked sediment proved to be difficult, with the XS-buffer method indicating the best results. This study provides an evaluation of six commonly used methods of cell lysis and DNA extraction of Halococcus spp., and the suitability of the resulting DNA for molecular analysis.
A Direct Maximum Power Point Tracking Method for Single-Phase Grid Connected PV Inverters
DEFF Research Database (Denmark)
EL Aamri, Faicel; Maker, Hattab; Sera, Dezso
2018-01-01
in dynamic conditions, especially in low irradiance when the measurement of signals becomes more sensitive to noise. The proposed MPPT is designed for single-phase single-stage grid-connected PV inverters, and is based on estimating the instantaneous PV power and voltage ripples, using second......A direct maximum power point tracking (MPPT) method for PV systems has been proposed in this work. This method solves two of the main drawbacks of the Perturb and Observe (P&O) MPPT, namely: i) the tradeoff between the speed and the oscillations in steady-state, ii) and the poor effectiveness...
Energy Technology Data Exchange (ETDEWEB)
Laurie, M.; Vlahovic, L.; Rondinella, V.V. [European Commission, Joint Research Centre, Institute for Transuranium Elements, P.O. Box 2340, D-76125 Karlsruhe, (Germany); Sadli, M.; Failleau, G. [Laboratoire Commun de Metrologie, LNE-Cnam, Saint-Denis, (France); Fuetterer, M.; Lapetite, J.M. [European Commission, Joint Research Centre, Institute for Energy and Transport, P.O. Box 2, NL-1755 ZG Petten, (Netherlands); Fourrez, S. [Thermocoax, 8 rue du pre neuf, F-61100 St Georges des Groseillers, (France)
2015-07-01
Temperature measurements in the nuclear field require a high degree of reliability and accuracy. Despite their sheathed form, thermocouples subjected to nuclear radiations undergo changes due to radiation damage and transmutation that lead to significant EMF drift during long-term fuel irradiation experiment. For the purpose of a High Temperature Reactor fuel irradiation to take place in the High Flux Reactor Petten, a dedicated fixed-point cell was jointly developed by LNE-Cnam and JRC-IET. The developed cell to be housed in the irradiation rig was tailor made to quantify the thermocouple drift during the irradiation (about two year duration) and withstand high temperature (in the range 950 deg. C - 1100 deg. C) in the presence of contaminated helium in a graphite environment. Considering the different levels of temperature achieved in the irradiation facility and the large palette of thermocouple types aimed at surveying the HTR fuel pebble during the qualification test both copper (1084.62 deg. C) and gold (1064.18 deg. C) fixed-point materials were considered. The aim of this paper is to first describe the fixed-point mini-cell designed to be embedded in the reactor rig and to discuss the preliminary results achieved during some out of pile tests as much as some robustness tests representative of the reactor scram scenarios. (authors)
Loizou, Nicolas
2017-12-27
In this paper we study several classes of stochastic optimization algorithms enriched with heavy ball momentum. Among the methods studied are: stochastic gradient descent, stochastic Newton, stochastic proximal point and stochastic dual subspace ascent. This is the first time momentum variants of several of these methods are studied. We choose to perform our analysis in a setting in which all of the above methods are equivalent. We prove global nonassymptotic linear convergence rates for all methods and various measures of success, including primal function values, primal iterates (in L2 sense), and dual function values. We also show that the primal iterates converge at an accelerated linear rate in the L1 sense. This is the first time a linear rate is shown for the stochastic heavy ball method (i.e., stochastic gradient descent method with momentum). Under somewhat weaker conditions, we establish a sublinear convergence rate for Cesaro averages of primal iterates. Moreover, we propose a novel concept, which we call stochastic momentum, aimed at decreasing the cost of performing the momentum step. We prove linear convergence of several stochastic methods with stochastic momentum, and show that in some sparse data regimes and for sufficiently small momentum parameters, these methods enjoy better overall complexity than methods with deterministic momentum. Finally, we perform extensive numerical testing on artificial and real datasets, including data coming from average consensus problems.
The research of motion in a neighborhood of collinear libration point by conservative methods
Shmyrov, A.; Shmyrov, V.; Shymanchuk, D.
2017-10-01
In this paper we research the orbital motion described by equations in hamiltonian form. The shift mapping along a trajectory of motion is canonical one and it makes possible to apply conservative methods. The examples of application of such methods in problems of celestial mechanics are given. The first order approximation of generating function of shift mapping along the trajectory is constructed for uncontrolled motion in a neighborhood of collinear libration point of Sun-Earth system. Also this approach is applied to controllable motion with special kind of control, which ensuring the preservation of hamiltonian form of the equations of motion. The form of iterative schemes for numerical modeling of motion is given. For fixed number of iterations the accuracy of presented numerical method is estimated in comparison with Runge-Kutta method of the fourth order. The analytical representation of the generating function up to second-order terms with respect to time increment is given.
Optimization of Control Points Number at Coordinate Measurements based on the Monte-Carlo Method
Korolev, A. A.; Kochetkov, A. V.; Zakharov, O. V.
2018-01-01
Improving the quality of products causes an increase in the requirements for the accuracy of the dimensions and shape of the surfaces of the workpieces. This, in turn, raises the requirements for accuracy and productivity of measuring of the workpieces. The use of coordinate measuring machines is currently the most effective measuring tool for solving similar problems. The article proposes a method for optimizing the number of control points using Monte Carlo simulation. Based on the measurement of a small sample from batches of workpieces, statistical modeling is performed, which allows one to obtain interval estimates of the measurement error. This approach is demonstrated by examples of applications for flatness, cylindricity and sphericity. Four options of uniform and uneven arrangement of control points are considered and their comparison is given. It is revealed that when the number of control points decreases, the arithmetic mean decreases, the standard deviation of the measurement error increases and the probability of the measurement α-error increases. In general, it has been established that it is possible to repeatedly reduce the number of control points while maintaining the required measurement accuracy.
Generalized four-point characterization method using capacitive and ohmic contacts.
Kim, Brian S; Zhou, Wang; Shah, Yash D; Zhou, Chuanle; Işık, N; Grayson, M
2012-02-01
In this paper, a four-point characterization method is developed for samples that have either capacitive or ohmic contacts. When capacitive contacts are used, capacitive current- and voltage-dividers result in a capacitive scaling factor not present in four-point measurements with only ohmic contacts. From a circuit equivalent of the complete measurement system, one can determine both the measurement frequency band and capacitive scaling factor for various four-point characterization configurations. This technique is first demonstrated with a discrete element four-point test device and then with a capacitively and ohmically contacted Hall bar sample over a wide frequency range (1 Hz-100 kHz) using lock-in measurement techniques. In all the cases, data fit well to a circuit simulation of the entire measurement system, and best results are achieved with large area capacitive contacts and a high input-impedance preamplifier stage. An undesirable asymmetry offset in the measurement signal is described which can arise due to asymmetric voltage contacts.
King, Nathan D.; Ruuth, Steven J.
2017-05-01
Maps from a source manifold M to a target manifold N appear in liquid crystals, color image enhancement, texture mapping, brain mapping, and many other areas. A numerical framework to solve variational problems and partial differential equations (PDEs) that map between manifolds is introduced within this paper. Our approach, the closest point method for manifold mapping, reduces the problem of solving a constrained PDE between manifolds M and N to the simpler problems of solving a PDE on M and projecting to the closest points on N. In our approach, an embedding PDE is formulated in the embedding space using closest point representations of M and N. This enables the use of standard Cartesian numerics for general manifolds that are open or closed, with or without orientation, and of any codimension. An algorithm is presented for the important example of harmonic maps and generalized to a broader class of PDEs, which includes p-harmonic maps. Improved efficiency and robustness are observed in convergence studies relative to the level set embedding methods. Harmonic and p-harmonic maps are computed for a variety of numerical examples. In these examples, we denoise texture maps, diffuse random maps between general manifolds, and enhance color images.
Analysis of tree stand horizontal structure using random point field methods
Directory of Open Access Journals (Sweden)
O. P. Sekretenko
2015-06-01
Full Text Available This paper uses the model approach to analyze the horizontal structure of forest stands. The main types of models of random point fields and statistical procedures that can be used to analyze spatial patterns of trees of uneven and even-aged stands are described. We show how modern methods of spatial statistics can be used to address one of the objectives of forestry – to clarify the laws of natural thinning of forest stand and the corresponding changes in its spatial structure over time. Studying natural forest thinning, we describe the consecutive stages of modeling: selection of the appropriate parametric model, parameter estimation and generation of point patterns in accordance with the selected model, the selection of statistical functions to describe the horizontal structure of forest stands and testing of statistical hypotheses. We show the possibilities of a specialized software package, spatstat, which is designed to meet the challenges of spatial statistics and provides software support for modern methods of analysis of spatial data. We show that a model of stand thinning that does not consider inter-tree interaction can project the size distribution of the trees properly, but the spatial pattern of the modeled stand is not quite consistent with observed data. Using data of three even-aged pine forest stands of 25, 55, and 90-years old, we demonstrate that the spatial point process models are useful for combining measurements in the forest stands of different ages to study the forest stand natural thinning.
3D registration method based on scattered point cloud from B-model ultrasound image
Hu, Lei; Xu, Xiaojun; Wang, Lifeng; Guo, Na; Xie, Feng
2017-01-01
The paper proposes a registration method on 3D point cloud of the bone tissue surface extracted by B-mode ultrasound image and the CT model . The B-mode ultrasound is used to get two-dimensional images of the femur tissue . The binocular stereo vision tracker is used to obtain spatial position and orientation of the optical positioning device fixed on the ultrasound probe. The combining of the two kind of data generates 3D point cloud of the bone tissue surface. The pixel coordinates of the bone surface are automatically obtained from ultrasound image using an improved local phase symmetry (phase symmetry, PS) . The mapping of the pixel coordinates on the ultrasound image and 3D space is obtained through a series of calibration methods. In order to detect the effect of registration, six markers are implanted on a complete fresh pig femoral .The actual coordinates of the marks are measured with two methods. The first method is to get the coordinates with measuring tools under a coordinate system. The second is to measure the coordinates of the markers in the CT model registered with 3D point cloud using the ICP registration algorithm under the same coordinate system. Ten registration experiments are carried out in the same way. Error results are obtained by comparing the two sets of mark point coordinates obtained by two different methods. The results is that a minimum error is 1.34mm, the maximum error is 3.22mm,and the average error of 2.52mm; ICP registration algorithm calculates the average error of 0.89mm and a standard deviation of 0.62mm.This evaluation standards of registration accuracy is different from the average error obtained by the ICP registration algorithm. It can be intuitive to show the error caused by the operation of clinical doctors. Reference to the accuracy requirements of different operation in the Department of orthopedics, the method can be apply to the bone reduction and the anterior cruciate ligament surgery.
Energy Technology Data Exchange (ETDEWEB)
Tumelero, Fernanda, E-mail: fernanda.tumelero@yahoo.com.br [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Petersen, Claudio Z.; Goncalves, Glenio A.; Lazzari, Luana, E-mail: claudiopeteren@yahoo.com.br, E-mail: gleniogoncalves@yahoo.com.br, E-mail: luana-lazzari@hotmail.com [Universidade Federal de Pelotas (DME/UFPEL), Capao do Leao, RS (Brazil). Instituto de Fisica e Matematica
2015-07-01
In this work, we present a solution of the Neutron Point Kinetics Equations with temperature feedback effects applying the Polynomial Approach Method. For the solution, we consider one and six groups of delayed neutrons precursors with temperature feedback effects and constant reactivity. The main idea is to expand the neutron density, delayed neutron precursors and temperature as a power series considering the reactivity as an arbitrary function of the time in a relatively short time interval around an ordinary point. In the first interval one applies the initial conditions of the problem and the analytical continuation is used to determine the solutions of the next intervals. With the application of the Polynomial Approximation Method it is possible to overcome the stiffness problem of the equations. In such a way, one varies the time step size of the Polynomial Approach Method and performs an analysis about the precision and computational time. Moreover, we compare the method with different types of approaches (linear, quadratic and cubic) of the power series. The answer of neutron density and temperature obtained by numerical simulations with linear approximation are compared with results in the literature. (author)
Beam-pointing error compensation method of phased array radar seeker with phantom-bit technology
Directory of Open Access Journals (Sweden)
Qiuqiu WEN
2017-06-01
Full Text Available A phased array radar seeker (PARS must be able to effectively decouple body motion and accurately extract the line-of-sight (LOS rate for target missile tracking. In this study, the real-time two-channel beam pointing error (BPE compensation method of PARS for LOS rate extraction is designed. The PARS discrete beam motion principium is analyzed, and the mathematical model of beam scanning control is finished. According to the principle of the antenna element shift phase, both the antenna element shift phase law and the causes of beam-pointing error under phantom-bit conditions are analyzed, and the effect of BPE caused by phantom-bit technology (PBT on the extraction accuracy of the LOS rate is examined. A compensation method is given, which includes coordinate transforms, beam angle margin compensation, and detector dislocation angle calculation. When the method is used, the beam angle margin in the pitch and yaw directions is calculated to reduce the effect of the missile body disturbance and to improve LOS rate extraction precision by compensating for the detector dislocation angle. The simulation results validate the proposed method.
Iterative method to compute the Fermat points and Fermat distances of multiquarks
Energy Technology Data Exchange (ETDEWEB)
Bicudo, P. [CFTP, Departamento de Fisica, Instituto Superior Tecnico, Av. Rovisco Pais, 1049-001 Lisboa (Portugal)], E-mail: bicudo@ist.utl.pt; Cardoso, M. [CFTP, Departamento de Fisica, Instituto Superior Tecnico, Av. Rovisco Pais, 1049-001 Lisboa (Portugal)
2009-04-13
The multiquark confining potential is proportional to the total distance of the fundamental strings linking the quarks and antiquarks. We address the computation of the total string distance and of the Fermat points where the different strings meet. For a meson the distance is trivially the quark-antiquark distance. For a baryon the problem was solved geometrically from the onset by Fermat and by Torricelli, it can be determined just with a rule and a compass, and we briefly review it. However we also show that for tetraquarks, pentaquarks, hexaquarks, etc., the geometrical solution is much more complicated. Here we provide an iterative method, converging fast to the correct Fermat points and the total distances, relevant for the multiquark potentials.
Energy Technology Data Exchange (ETDEWEB)
Tumelero, Fernanda; Petersen, Claudio Zen; Goncalves, Glenio Aguiar [Universidade Federal de Pelotas, Capao do Leao, RS (Brazil). Programa de Pos Graduacao em Modelagem Matematica; Schramm, Marcelo [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica
2016-12-15
In this work, we report a solution to solve the Neutron Point Kinetics Equations applying the Polynomial Approach Method. The main idea is to expand the neutron density and delayed neutron precursors as a power series considering the reactivity as an arbitrary function of the time in a relatively short time interval around an ordinary point. In the first interval one applies the initial conditions and the analytical continuation is used to determine the solutions of the next intervals. A genuine error control is developed based on an analogy with the Rest Theorem. For illustration, we also report simulations for different approaches types (linear, quadratic and cubic). The results obtained by numerical simulations for linear approximation are compared with results in the literature.
A new method for retrieving silver points and separated instruments from root canals.
Suter, B
1998-06-01
A new method for the removal of metallic canal obstructions is presented. After gaining access to the coronal end of the separated instrument or silver point, a circular groove is prepared around it using ultrasonic tips. A short piece of fine stainless-steel tubing can now be pushed over the exposed end of the object. A Hedström file is pushed in a clockwise turning motion through the tube to wedge between the tube and end of the object. This produces a good interlocking between the separated instrument or silver point, the tube, and Hedström file. The three connected objects can now be removed coronally using relatively high forces. This technique may be more efficient than the endo extractor technique which is using a tube and cyanoacrylate.
Directory of Open Access Journals (Sweden)
Urriza I
2010-01-01
Full Text Available Abstract This paper presents a word length selection method for the implementation of digital controllers in both fixed-point and floating-point hardware on FPGAs. This method uses the new types defined in the VHDL-2008 fixed-point and floating-point packages. These packages allow customizing the word length of fixed and floating point representations and shorten the design cycle simplifying the design of arithmetic operations. The method performs bit-true simulations in order to determine the word length to represent the constant coefficients and the internal signals of the digital controller while maintaining the control system specifications. A mixed-signal simulation tool is used to simulate the closed loop system as a whole in order to analyze the impact of the quantization effects and loop delays on the control system performance. The method is applied to implement a digital controller for a switching power converter. The digital circuit is implemented on an FPGA, and the simulations are experimentally verified.
Lague, D.
2014-12-01
High Resolution Topographic (HRT) datasets are predominantly stored and analyzed as 2D raster grids of elevations (i.e., Digital Elevation Models). Raster grid processing is common in GIS software and benefits from a large library of fast algorithms dedicated to geometrical analysis, drainage network computation and topographic change measurement. Yet, all instruments or methods currently generating HRT datasets (e.g., ALS, TLS, SFM, stereo satellite imagery) output natively 3D unstructured point clouds that are (i) non-regularly sampled, (ii) incomplete (e.g., submerged parts of river channels are rarely measured), and (iii) include 3D elements (e.g., vegetation, vertical features such as river banks or cliffs) that cannot be accurately described in a DEM. Interpolating the raw point cloud onto a 2D grid generally results in a loss of position accuracy, spatial resolution and in more or less controlled interpolation. Here I demonstrate how studying earth surface topography and processes directly on native 3D point cloud datasets offers several advantages over raster based methods: point cloud methods preserve the accuracy of the original data, can better handle the evaluation of uncertainty associated to topographic change measurements and are more suitable to study vegetation characteristics and steep features of the landscape. In this presentation, I will illustrate and compare Point Cloud based and Raster based workflows with various examples involving ALS, TLS and SFM for the analysis of bank erosion processes in bedrock and alluvial rivers, rockfall statistics (including rockfall volume estimate directly from point clouds) and the interaction of vegetation/hydraulics and sedimentation in salt marshes. These workflows use 2 recently published algorithms for point cloud classification (CANUPO) and point cloud comparison (M3C2) now implemented in the open source software CloudCompare.
Optimization Research on Ampacity of Underground High Voltage Cable Based on Interior Point Method
Huang, Feng; Li, Jing
2017-12-01
The conservative operation method which takes unified current-carrying capacity as maximum load current can’t make full use of the overall power transmission capacity of the cable. It’s not the optimal operation state for the cable cluster. In order to improve the transmission capacity of underground cables in cluster, this paper regards the maximum overall load current as the objective function and the temperature of any cables lower than maximum permissible temperature as constraint condition. The interior point method which is very effective for nonlinear problem is put forward to solve the extreme value of the problem and determine the optimal operating current of each loop. The results show that the optimal solutions obtained with the purposed method is able to increase the total load current about 5%. It greatly improves the economic performance of the cable cluster.
Matsuki, Keisuke; Kenmoku, Tomonori; Ochiai, Nobuyasu; Sugaya, Hiroyuki; Banks, Scott A
2016-06-14
Several published articles have reported 3-dimensional glenohumeral kinematics using model-image registration techniques. However, different methods to compute the translations were used in these articles. The purpose of this study was to compare glenohumeral translations calculated with three different methods. Fifteen healthy males with a mean age of 31 years (range, 27-36 years old) were enrolled in this study. Fluoroscopic images during scapular plane elevation were recorded at 30 frames per second for the right shoulder in each subject, and CT-derived models of the humerus and the scapula were matched with the silhouette of the bones in the fluoroscopic images using model-image registration techniques. Glenohumeral translations were computed with three methods: relative position of the origins of the humeral and scapular models, contact points of the two models, and relative positions based upon the calculated glenohumeral center of rotation (CoR). In the supero-inferior direction, translations calculated with the three methods were roughly parallel, with the maximum difference of 1.6mm (Ptranslations with the origins and CoR were parallel; however, translations computed with the origins and contact point describe arcs that differ by almost 2mm at low humeral elevation angles and converge at higher degrees of humeral elevation (Ptranslations calculated using three methods showed statistically significant differences that may be important when comparing detailed results of different studies. However, these relatively small differences are likely subclinical, so that all three methods can reasonably be used for description of glenohumeral translations. Copyright © 2016 Elsevier Ltd. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Carey, P.
1977-07-01
Racial, sexual, and ethnic discrimination, it is contended, creates as great a crisis in the environment as the threat of nuclear war since it also threatens social survival. Individual freedom, human dignity and socio-political equality are resources vital for the survival of Americans; White racism deprives Blacks and other minorities' members of these essentials for humane living. Survival today depends on Renewal, for which nothing is more decisive than mobility of talent. Much has been accomplished recently in bringing about the participation of minorities' members in higher education but data are presented which indicate that, in terms of income, minorities' members tend to be discriminated against greatest as they increase their education. An 8-point program is presented to achieve equity and equality in and through education.
Curvature computation in volume-of-fluid method based on point-cloud sampling
Kassar, Bruno B. M.; Carneiro, João N. E.; Nieckele, Angela O.
2018-01-01
This work proposes a novel approach to compute interface curvature in multiphase flow simulation based on Volume of Fluid (VOF) method. It is well documented in the literature that curvature and normal vector computation in VOF may lack accuracy mainly due to abrupt changes in the volume fraction field across the interfaces. This may cause deterioration on the interface tension forces estimates, often resulting in inaccurate results for interface tension dominated flows. Many techniques have been presented over the last years in order to enhance accuracy in normal vectors and curvature estimates including height functions, parabolic fitting of the volume fraction, reconstructing distance functions, coupling Level Set method with VOF, convolving the volume fraction field with smoothing kernels among others. We propose a novel technique based on a representation of the interface by a cloud of points. The curvatures and the interface normal vectors are computed geometrically at each point of the cloud and projected onto the Eulerian grid in a Front-Tracking manner. Results are compared to benchmark data and significant reduction on spurious currents as well as improvement in the pressure jump are observed. The method was developed in the open source suite OpenFOAM® extending its standard VOF implementation, the interFoam solver.
Directory of Open Access Journals (Sweden)
L. Gézero
2017-05-01
Full Text Available In the last few years, LiDAR sensors installed in terrestrial vehicles have been revealed as an efficient method to collect very dense 3D georeferenced information. The possibility of creating very dense point clouds representing the surface surrounding the sensor, at a given moment, in a very fast, detailed and easy way, shows the potential of this technology to be used for cartography and digital terrain models production in large scale. However, there are still some limitations associated with the use of this technology. When several acquisitions of the same area with the same device, are made, differences between the clouds can be observed. The range of that differences can go from few centimetres to some several tens of centimetres, mainly in urban and high vegetation areas where the occultation of the GNSS system introduces a degradation of the georeferenced trajectory. Along this article a different method point cloud registration is proposed. In addition to the efficiency and speed of execution, the main advantages of the method are related to the fact that the adjustment is continuously made over the trajectory, based on the GPS time. The process is fully automatic and only information recorded in the standard LAS files is used, without the need for any auxiliary information, in particular regarding the trajectory.
Gézero, L.; Antunes, C.
2017-05-01
In the last few years, LiDAR sensors installed in terrestrial vehicles have been revealed as an efficient method to collect very dense 3D georeferenced information. The possibility of creating very dense point clouds representing the surface surrounding the sensor, at a given moment, in a very fast, detailed and easy way, shows the potential of this technology to be used for cartography and digital terrain models production in large scale. However, there are still some limitations associated with the use of this technology. When several acquisitions of the same area with the same device, are made, differences between the clouds can be observed. The range of that differences can go from few centimetres to some several tens of centimetres, mainly in urban and high vegetation areas where the occultation of the GNSS system introduces a degradation of the georeferenced trajectory. Along this article a different method point cloud registration is proposed. In addition to the efficiency and speed of execution, the main advantages of the method are related to the fact that the adjustment is continuously made over the trajectory, based on the GPS time. The process is fully automatic and only information recorded in the standard LAS files is used, without the need for any auxiliary information, in particular regarding the trajectory.
METHOD OF GREEN FUNCTIONS IN MATHEMATICAL MODELLING FOR TWO-POINT BOUNDARY-VALUE PROBLEMS
Directory of Open Access Journals (Sweden)
E. V. Dikareva
2015-01-01
Full Text Available Summary. In many applied problems of control, optimization, system theory, theoretical and construction mechanics, for problems with strings and nods structures, oscillation theory, theory of elasticity and plasticity, mechanical problems connected with fracture dynamics and shock waves, the main instrument for study these problems is a theory of high order ordinary differential equations. This methodology is also applied for studying mathematical models in graph theory with different partitioning based on differential equations. Such equations are used for theoretical foundation of mathematical models but also for constructing numerical methods and computer algorithms. These models are studied with use of Green function method. In the paper first necessary theoretical information is included on Green function method for multi point boundary-value problems. The main equation is discussed, notions of multi-point boundary conditions, boundary functionals, degenerate and non-degenerate problems, fundamental matrix of solutions are introduced. In the main part the problem to study is formulated in terms of shocks and deformations in boundary conditions. After that the main results are formulated. In theorem 1 conditions for existence and uniqueness of solutions are proved. In theorem 2 conditions are proved for strict positivity and equal measureness for a pair of solutions. In theorem 3 existence and estimates are proved for the least eigenvalue, spectral properties and positivity of eigenfunctions. In theorem 4 the weighted positivity is proved for the Green function. Some possible applications are considered for a signal theory and transmutation operators.
Approximate Dual Averaging Method for Multiagent Saddle-Point Problems with Stochastic Subgradients
Directory of Open Access Journals (Sweden)
Deming Yuan
2014-01-01
Full Text Available This paper considers the problem of solving the saddle-point problem over a network, which consists of multiple interacting agents. The global objective function of the problem is a combination of local convex-concave functions, each of which is only available to one agent. Our main focus is on the case where the projection steps are calculated approximately and the subgradients are corrupted by some stochastic noises. We propose an approximate version of the standard dual averaging method and show that the standard convergence rate is preserved, provided that the projection errors decrease at some appropriate rate and the noises are zero-mean and have bounded variance.
A Riccati-Based Interior Point Method for Efficient Model Predictive Control of SISO Systems
DEFF Research Database (Denmark)
Hagdrup, Morten; Johansson, Rolf; Bagterp Jørgensen, John
2017-01-01
model parts separate. The controller is designed based on the deterministic model, while the Kalman filter results from the stochastic part. The controller is implemented as a primal-dual interior point (IP) method using Riccati recursion and the computational savings possible for SISO systems......This paper presents an algorithm for Model Predictive Control of SISO systems. Based on a quadratic objective in addition to (hard) input constraints it features soft upper as well as lower constraints on the output and an input rate-of-change penalty term. It keeps the deterministic and stochastic...
New spatial upscaling methods for multi-point measurements: From normal to p-normal
Liu, Feng; Li, Xin
2017-12-01
Careful attention must be given to determining whether the geophysical variables of interest are normally distributed, since the assumption of a normal distribution may not accurately reflect the probability distribution of some variables. As a generalization of the normal distribution, the p-normal distribution and its corresponding maximum likelihood estimation (the least power estimation, LPE) were introduced in upscaling methods for multi-point measurements. Six methods, including three normal-based methods, i.e., arithmetic average, least square estimation, block kriging, and three p-normal-based methods, i.e., LPE, geostatistics LPE and inverse distance weighted LPE are compared in two types of experiments: a synthetic experiment to evaluate the performance of the upscaling methods in terms of accuracy, stability and robustness, and a real-world experiment to produce real-world upscaling estimates using soil moisture data obtained from multi-scale observations. The results show that the p-normal-based methods produced lower mean absolute errors and outperformed the other techniques due to their universality and robustness. We conclude that introducing appropriate statistical parameters into an upscaling strategy can substantially improve the estimation, especially if the raw measurements are disorganized; however, further investigation is required to determine which parameter is the most effective among variance, spatial correlation information and parameter p.
A Semantic Modelling Framework-Based Method for Building Reconstruction from Point Clouds
Directory of Open Access Journals (Sweden)
Qingdong Wang
2016-09-01
Full Text Available Over the past few years, there has been an increasing need for semantic information in automatic city modelling. However, due to the complexity of building structure, the semantic reconstruction of buildings is still a challenging task because it is difficult to extract architectural rules and semantic information from the data. To improve the insufficiencies, we present a semantic modelling framework-based approach for automated building reconstruction using the semantic information extracted from point clouds or images. In this approach, a semantic modelling framework is designed to describe and generate the building model, and a workflow is established for extracting the semantic information of buildings from an unorganized point cloud and converting the semantic information into the semantic modelling framework. The technical feasibility of our method is validated using three airborne laser scanning datasets, and the results are compared with other related works comprehensively, which indicate that our approach can simplify the reconstruction process from a point cloud and generate 3D building models with high accuracy and rich semantic information.
Feature extraction from 3D lidar point clouds using image processing methods
Zhu, Ling; Shortridge, Ashton; Lusch, David; Shi, Ruoming
2011-10-01
Airborne LiDAR data have become cost-effective to produce at local and regional scales across the United States and internationally. These data are typically collected and processed into surface data products by contractors for state and local communities. Current algorithms for advanced processing of LiDAR point cloud data are normally implemented in specialized, expensive software that is not available for many users, and these users are therefore unable to experiment with the LiDAR point cloud data directly for extracting desired feature classes. The objective of this research is to identify and assess automated, readily implementable GIS procedures to extract features like buildings, vegetated areas, parking lots and roads from LiDAR data using standard image processing tools, as such tools are relatively mature with many effective classification methods. The final procedure adopted employs four distinct stages. First, interpolation is used to transfer the 3D points to a high-resolution raster. Raster grids of both height and intensity are generated. Second, multiple raster maps - a normalized surface model (nDSM), difference of returns, slope, and the LiDAR intensity map - are conflated to generate a multi-channel image. Third, a feature space of this image is created. Finally, supervised classification on the feature space is implemented. The approach is demonstrated in both a conceptual model and on a complex real-world case study, and its strengths and limitations are addressed.
An unconventional GIS-based method to assess landslide susceptibility using point data features
Adami, S.; Bresolin, M.; Carraretto, M.; Castelletti, P.; Corò, D.; Di Mario, F.; Fiaschi, S.; Frasson, T.; Gandolfo, L.; Mazzalai, L.; Padovan, T.; Sartori, F.; Viganò, A.; Zulian, A.; De Agostini, A.; Pajola, M.; Floris, M.
2012-04-01
In this work are reported the results of a project performed by the students attending the course "GIS techniques in Applied Geology", in the master level of the Geological Sciences degree from the Department of Geosciences, University of Padua. The project concerns the evaluation of landslide susceptibility in the Val d'Agno basin, located in the North-Eastern Italian Alps and included in the Vicenza Province (Veneto Region, NE Italy). As well known, most of the models proposed to assess landslide susceptibility are based on the availability of spatial information on landslides and related predisposing environmental factors. Landslides and related factors are spatially combined in GIS systems to weight the influence of each predisposing factor and produce landslide susceptibility maps. The first and most important input factor is the layer landslide, which has to contain as minimum information shape and type of landslides, so it must be a polygon feature. In Italy, as well as in many countries all around the world, location and type of landslides are available in the main spatial databases (AVI project and IFFI project), but in few cases mass movements are delimited, thus they are spatially represented by point features. As an example, in the Vicenza Province, the IFFI database contains 1692 landslides stored in a point feature, but only 383 were delimited and stored in a polygon feature. In order to provide a method that allows to use all the information available and make an effective spatial prediction also in areas where mass movements are mainly stored in point features, punctual data representing landslide in the Val d'Agno basin have been buffered obtaining polygon features, which have been combined with morphometric (elevation, slope, aspect and curvature) and non-morphometric (land use, distance of roads and distance of river) factors. Two buffers have been created: the first has a radius of 10 meters, the minimum required for the analysis, and the second
Directory of Open Access Journals (Sweden)
Ilaria Iaconeta
2017-09-01
Full Text Available The simulation of large deformation problems, involving complex history-dependent constitutive laws, is of paramount importance in several engineering fields. Particular attention has to be paid to the choice of a suitable numerical technique such that reliable results can be obtained. In this paper, a Material Point Method (MPM and a Galerkin Meshfree Method (GMM are presented and verified against classical benchmarks in solid mechanics. The aim is to demonstrate the good behavior of the methods in the simulation of cohesive-frictional materials, both in static and dynamic regimes and in problems dealing with large deformations. The vast majority of MPM techniques in the literatrue are based on some sort of explicit time integration. The techniques proposed in the current work, on the contrary, are based on implicit approaches, which can also be easily adapted to the simulation of static cases. The two methods are presented so as to highlight the similarities to rather than the differences from “standard” Updated Lagrangian (UL approaches commonly employed by the Finite Elements (FE community. Although both methods are able to give a good prediction, it is observed that, under very large deformation of the medium, GMM lacks robustness due to its meshfree natrue, which makes the definition of the meshless shape functions more difficult and expensive than in MPM. On the other hand, the mesh-based MPM is demonstrated to be more robust and reliable for extremely large deformation cases.
Chen, Yong; Chen, Chang
2014-08-01
In optical pressure measurement of wind-tunnel test, triangle mesh is usually built to rectify the images that are distorted in geometry. In this paper, a novel method of control points selection of triangle mesh is proposed by combining the artificial points and margin control points. For the problem that in the condition of wind the margin control point is difficult to extract due to model distortion and grey variation, an improved Smallest Univalue Segment Assimilating Nucleus algorithm based on region selection and adaptive threshold is designed. The connection method is employed to verify the availability of points, which avoids that the noisy points are mistakenly regarded as the angular points. The distorted images of aircraft model are rectified and the results are analyzed. Experiments demonstrate that the proposed method greatly improves the rectification effect.
Data and methods in the environment-migration nexus
DEFF Research Database (Denmark)
Eklund, Lina; Romankiewicz, Clemens; Brandt, Martin Stefan
2016-01-01
assessments. This review article introduces the concept of scale to environment-migration research as an important methodological issue for the reliability of conclusions drawn. The review of case studies shows that scale issues are highly present in environment-migration research but rarely discussed....... Several case studies base their results on data at very coarse resolutions that have undergone strong modifications and generalizations. We argue that scale-related shortcomings must be considered in all stages of environment-migration research.......The relationship between environment and migration has gained increased attention since the 1990s when the Intergovernmental Panel on Climate Change projected climate change to become a major driver of human migration. Evaluations of this relationship include both quantitative and qualitative...
Creating the Data Basis for Environmental Evaluations with the Oil Point Method
DEFF Research Database (Denmark)
Bey, Niki; Lenau, Torben Anker
1999-01-01
it is the case with rules-of-thumb. The central idea is that missing indicators can be calculated or estimated by the designers themselves.After discussing energy-related environmental evaluation and arguing for its application in evaluation of concepts, the paper focuses on the basic problem of missing data...... and describes the way in which the problem may be solved by making Oil Point evaluations. Sources of energy data are mentioned. Typical deficits to be aware of - such as the negligence of efficiency factors - are revealed and discussed. Comparative case studies which have shown encouraging results are mentioned......In order to support designers in decision-making, some methods have been developed which are based on environmental indicators. These methods, however, can only be used, if indicators for the specific product concept exist and are readily available.Based on this situation, the authors developed...
Distance-based microfluidic quantitative detection methods for point-of-care testing.
Tian, Tian; Li, Jiuxing; Song, Yanling; Zhou, Leiji; Zhu, Zhi; Yang, Chaoyong James
2016-04-07
Equipment-free devices with quantitative readout are of great significance to point-of-care testing (POCT), which provides real-time readout to users and is especially important in low-resource settings. Among various equipment-free approaches, distance-based visual quantitative detection methods rely on reading the visual signal length for corresponding target concentrations, thus eliminating the need for sophisticated instruments. The distance-based methods are low-cost, user-friendly and can be integrated into portable analytical devices. Moreover, such methods enable quantitative detection of various targets by the naked eye. In this review, we first introduce the concept and history of distance-based visual quantitative detection methods. Then, we summarize the main methods for translation of molecular signals to distance-based readout and discuss different microfluidic platforms (glass, PDMS, paper and thread) in terms of applications in biomedical diagnostics, food safety monitoring, and environmental analysis. Finally, the potential and future perspectives are discussed.
A node-based smoothed point interpolation method for dynamic analysis of rotating flexible beams
Du, C. F.; Zhang, D. G.; Li, L.; Liu, G. R.
2017-10-01
We proposed a mesh-free method, the called node-based smoothed point interpolation method (NS-PIM), for dynamic analysis of rotating beams. A gradient smoothing technique is used, and the requirements on the consistence of the displacement functions are further weakened. In static problems, the beams with three types of boundary conditions are analyzed, and the results are compared with the exact solution, which shows the effectiveness of this method and can provide an upper bound solution for the deflection. This means that the NS-PIM makes the system soften. The NS-PIM is then further extended for solving a rigid-flexible coupled system dynamics problem, considering a rotating flexible cantilever beam. In this case, the rotating flexible cantilever beam considers not only the transverse deformations, but also the longitudinal deformations. The rigid-flexible coupled dynamic equations of the system are derived via employing Lagrange's equations of the second type. Simulation results of the NS-PIM are compared with those obtained using finite element method (FEM) and assumed mode method. It is found that compared with FEM, the NS-PIM has anti-ill solving ability under the same calculation conditions.
A novel method for fast Change-Point detection on simulated time series and electrocardiogram data.
Directory of Open Access Journals (Sweden)
Jin-Peng Qi
Full Text Available Although Kolmogorov-Smirnov (KS statistic is a widely used method, some weaknesses exist in investigating abrupt Change Point (CP problems, e.g. it is time-consuming and invalid sometimes. To detect abrupt change from time series fast, a novel method is proposed based on Haar Wavelet (HW and KS statistic (HWKS. First, the two Binary Search Trees (BSTs, termed TcA and TcD, are constructed by multi-level HW from a diagnosed time series; the framework of HWKS method is implemented by introducing a modified KS statistic and two search rules based on the two BSTs; and then fast CP detection is implemented by two HWKS-based algorithms. Second, the performance of HWKS is evaluated by simulated time series dataset. The simulations show that HWKS is faster, more sensitive and efficient than KS, HW, and T methods. Last, HWKS is applied to analyze the electrocardiogram (ECG time series, the experiment results show that the proposed method can find abrupt change from ECG segment with maximal data fluctuation more quickly and efficiently, and it is very helpful to inspect and diagnose the different state of health from a patient's ECG signal.
A method of 3D object recognition and localization in a cloud of points
Bielicki, Jerzy; Sitnik, Robert
2013-12-01
The proposed method given in this article is prepared for analysis of data in the form of cloud of points directly from 3D measurements. It is designed for use in the end-user applications that can directly be integrated with 3D scanning software. The method utilizes locally calculated feature vectors (FVs) in point cloud data. Recognition is based on comparison of the analyzed scene with reference object library. A global descriptor in the form of a set of spatially distributed FVs is created for each reference model. During the detection process, correlation of subsets of reference FVs with FVs calculated in the scene is computed. Features utilized in the algorithm are based on parameters, which qualitatively estimate mean and Gaussian curvatures. Replacement of differentiation with averaging in the curvatures estimation makes the algorithm more resistant to discontinuities and poor quality of the input data. Utilization of the FV subsets allows to detect partially occluded and cluttered objects in the scene, while additional spatial information maintains false positive rate at a reasonably low level.
N. Somaratne; K. R. J. Smettem
2014-01-01
Application of the conventional chloride mass balance (CMB) method to point recharge dominant groundwater basins can substantially under-estimate long-term average annual recharge by not accounting for the effects of localized surface water inputs. This is because the conventional CMB method ignores the duality of infiltration and recharge found in karstic systems, where point recharge can be a contributing factor. When point recharge is present in groundwater basins,...
Directory of Open Access Journals (Sweden)
Jae Joon Hwang
Full Text Available Superimposition has been used as a method to evaluate the changes of orthodontic or orthopedic treatment in the dental field. With the introduction of cone beam CT (CBCT, evaluating 3 dimensional changes after treatment became possible by superimposition. 4 point plane orientation is one of the simplest ways to achieve superimposition of 3 dimensional images. To find factors influencing superimposition error of cephalometric landmarks by 4 point plane orientation method and to evaluate the reproducibility of cephalometric landmarks for analyzing superimposition error, 20 patients were analyzed who had normal skeletal and occlusal relationship and took CBCT for diagnosis of temporomandibular disorder. The nasion, sella turcica, basion and midpoint between the left and the right most posterior point of the lesser wing of sphenoidal bone were used to define a three-dimensional (3D anatomical reference co-ordinate system. Another 15 reference cephalometric points were also determined three times in the same image. Reorientation error of each landmark could be explained substantially (23% by linear regression model, which consists of 3 factors describing position of each landmark towards reference axes and locating error. 4 point plane orientation system may produce an amount of reorientation error that may vary according to the perpendicular distance between the landmark and the x-axis; the reorientation error also increases as the locating error and shift of reference axes viewed from each landmark increases. Therefore, in order to reduce the reorientation error, accuracy of all landmarks including the reference points is important. Construction of the regression model using reference points of greater precision is required for the clinical application of this model.
Hwang, Jae Joon; Kim, Kee-Deog; Park, Hyok; Park, Chang Seo; Jeong, Ho-Gul
2014-01-01
Superimposition has been used as a method to evaluate the changes of orthodontic or orthopedic treatment in the dental field. With the introduction of cone beam CT (CBCT), evaluating 3 dimensional changes after treatment became possible by superimposition. 4 point plane orientation is one of the simplest ways to achieve superimposition of 3 dimensional images. To find factors influencing superimposition error of cephalometric landmarks by 4 point plane orientation method and to evaluate the reproducibility of cephalometric landmarks for analyzing superimposition error, 20 patients were analyzed who had normal skeletal and occlusal relationship and took CBCT for diagnosis of temporomandibular disorder. The nasion, sella turcica, basion and midpoint between the left and the right most posterior point of the lesser wing of sphenoidal bone were used to define a three-dimensional (3D) anatomical reference co-ordinate system. Another 15 reference cephalometric points were also determined three times in the same image. Reorientation error of each landmark could be explained substantially (23%) by linear regression model, which consists of 3 factors describing position of each landmark towards reference axes and locating error. 4 point plane orientation system may produce an amount of reorientation error that may vary according to the perpendicular distance between the landmark and the x-axis; the reorientation error also increases as the locating error and shift of reference axes viewed from each landmark increases. Therefore, in order to reduce the reorientation error, accuracy of all landmarks including the reference points is important. Construction of the regression model using reference points of greater precision is required for the clinical application of this model.
Yang, Fengping; Xiao, Fangfei
2017-03-01
Current control methods include hardware control and software control corresponding to the inherent unbalance problem of neutral point voltage in three level NPC inverter. The hardware control is rarely used due to its high cost. In this paper, a new compound control method has been presented based on the vector method of virtual space and traditional hysteresis control of neutral point voltage, which can make up the shortcoming of the virtual control without the feedback control system of neutral point voltage and the blind area of hysteresis control and control the deviation and wave of neutral point voltage. The accuracy of this method has been demonstrated by simulation.
Nikazad, Touraj; Abbasi, Mokhtar
2017-04-01
In this paper, we introduce a subclass of strictly quasi-nonexpansive operators which consists of well-known operators as paracontracting operators (e.g., strictly nonexpansive operators, metric projections, Newton and gradient operators), subgradient projections, a useful part of cutter operators, strictly relaxed cutter operators and locally strongly Féjer operators. The members of this subclass, which can be discontinuous, may be employed by fixed point iteration methods; in particular, iterative methods used in convex feasibility problems. The closedness of this subclass, with respect to composition and convex combination of operators, makes it useful and remarkable. Another advantage with members of this subclass is the possibility to adapt them to handle convex constraints. We give convergence result, under mild conditions, for a perturbation resilient iterative method which is based on an infinite pool of operators in this subclass. The perturbation resilient iterative methods are relevant and important for their possible use in the framework of the recently developed superiorization methodology for constrained minimization problems. To assess the convergence result, the class of operators and the assumed conditions, we illustrate some extensions of existence research works and some new results.
Hartig, Dave; Waluga, Thomas; Scholl, Stephan
2015-09-25
The elution by characteristic point (ECP) method provides a rapid approach to determine whole isotherm data with small material usage. It is especially desired wherever the adsorbent or the adsorbate is expensive, toxic or only available in small amounts. However, the ECP method is limited to adsorbents that are well optimized for chromatographic use and therefore provide a high number of theoretical plates when packed into columns (2000 or more for Langmuir type isotherms are suggested). Here we present a novel approach that uses a new profile correction to apply the ECP method to poorly optimized adsorbents with less than 200 theoretical plates. Non-ideality effects are determined using a dead volume marker injection and the resulting marker profile is used to compensate the named effects considering their dependency from the actual concentration instead of assuming rectangular profiles. Experimental and literature data are used to compare the new ECP approach with batch method results. Copyright © 2015 Elsevier B.V. All rights reserved.
Graphical programming interface: A development environment for MRI methods.
Zwart, Nicholas R; Pipe, James G
2015-11-01
To introduce a multiplatform, Python language-based, development environment called graphical programming interface for prototyping MRI techniques. The interface allows developers to interact with their scientific algorithm prototypes visually in an event-driven environment making tasks such as parameterization, algorithm testing, data manipulation, and visualization an integrated part of the work-flow. Algorithm developers extend the built-in functionality through simple code interfaces designed to facilitate rapid implementation. This article shows several examples of algorithms developed in graphical programming interface including the non-Cartesian MR reconstruction algorithms for PROPELLER and spiral as well as spin simulation and trajectory visualization of a FLORET example. The graphical programming interface framework is shown to be a versatile prototyping environment for developing numeric algorithms used in the latest MR techniques. © 2014 Wiley Periodicals, Inc.
Directory of Open Access Journals (Sweden)
YAN Li
2016-04-01
Full Text Available This paper proposes a rigorous registration method of multi-view point clouds constrained by closed-loop conditions for the problems of existing algorithms. In our approach, the point-to-tangent-plane iterative closest point algorithm is used firstly to calculate coordinate transformation parameters of all adjacent point clouds respectively. Then the single-site point cloud is regarded as registration unit and the transformation parameters are considered as random observations to construct conditional equations, and then the transformation parameters can be corrected by conditional adjustments to achieve global optimum. Two practical experiments of point clouds acquired by a terrestrial laser scanner are shown for demonstrating the feasibility and validity of our methods. Experimental results show that the registration accuracy and reliability of the point clouds with sampling interval of millimeter or centimeter level can be improved by increasing the scanning overlap.
Adu-Brimpong, Joel; Coffey, Nathan; Ayers, Colby; Berrigan, David; Yingling, Leah R; Thomas, Samantha; Mitchell, Valerie; Ahuja, Chaarushi; Rivers, Joshua; Hartz, Jacob; Powell-Wiley, Tiffany M
2017-03-08
Optimization of existing measurement tools is necessary to explore links between aspects of the neighborhood built environment and health behaviors or outcomes. We evaluate a scoring method for virtual neighborhood audits utilizing the Active Neighborhood Checklist (the Checklist), a neighborhood audit measure, and assess street segment representativeness in low-income neighborhoods. Eighty-two home neighborhoods of Washington, D.C. Cardiovascular Health/Needs Assessment (NCT01927783) participants were audited using Google Street View imagery and the Checklist (five sections with 89 total questions). Twelve street segments per home address were assessed for (1) Land-Use Type; (2) Public Transportation Availability; (3) Street Characteristics; (4) Environment Quality and (5) Sidewalks/Walking/Biking features. Checklist items were scored 0-2 points/question. A combinations algorithm was developed to assess street segments' representativeness. Spearman correlations were calculated between built environment quality scores and Walk Score®, a validated neighborhood walkability measure. Street segment quality scores ranged 10-47 (Mean = 29.4 ± 6.9) and overall neighborhood quality scores, 172-475 (Mean = 352.3 ± 63.6). Walk scores® ranged 0-91 (Mean = 46.7 ± 26.3). Street segment combinations' correlation coefficients ranged 0.75-1.0. Significant positive correlations were found between overall neighborhood quality scores, four of the five Checklist subsection scores, and Walk Scores® (r = 0.62, p < 0.001). This scoring method adequately captures neighborhood features in low-income, residential areas and may aid in delineating impact of specific built environment features on health behaviors and outcomes.
Directory of Open Access Journals (Sweden)
Joel Adu-Brimpong
2017-03-01
Full Text Available Optimization of existing measurement tools is necessary to explore links between aspects of the neighborhood built environment and health behaviors or outcomes. We evaluate a scoring method for virtual neighborhood audits utilizing the Active Neighborhood Checklist (the Checklist, a neighborhood audit measure, and assess street segment representativeness in low-income neighborhoods. Eighty-two home neighborhoods of Washington, D.C. Cardiovascular Health/Needs Assessment (NCT01927783 participants were audited using Google Street View imagery and the Checklist (five sections with 89 total questions. Twelve street segments per home address were assessed for (1 Land-Use Type; (2 Public Transportation Availability; (3 Street Characteristics; (4 Environment Quality and (5 Sidewalks/Walking/Biking features. Checklist items were scored 0–2 points/question. A combinations algorithm was developed to assess street segments’ representativeness. Spearman correlations were calculated between built environment quality scores and Walk Score®, a validated neighborhood walkability measure. Street segment quality scores ranged 10–47 (Mean = 29.4 ± 6.9 and overall neighborhood quality scores, 172–475 (Mean = 352.3 ± 63.6. Walk scores® ranged 0–91 (Mean = 46.7 ± 26.3. Street segment combinations’ correlation coefficients ranged 0.75–1.0. Significant positive correlations were found between overall neighborhood quality scores, four of the five Checklist subsection scores, and Walk Scores® (r = 0.62, p < 0.001. This scoring method adequately captures neighborhood features in low-income, residential areas and may aid in delineating impact of specific built environment features on health behaviors and outcomes.
Robust numerical method for integration of point-vortex trajectories in two dimensions
Smith, Spencer A.; Boghosian, Bruce M.
2011-05-01
The venerable two-dimensional (2D) point-vortex model plays an important role as a simplified version of many disparate physical systems, including superfluids, Bose-Einstein condensates, certain plasma configurations, and inviscid turbulence. This system is also a veritable mathematical playground, touching upon many different disciplines from topology to dynamic systems theory. Point-vortex dynamics are described by a relatively simple system of nonlinear ordinary differential equations which can easily be integrated numerically using an appropriate adaptive time stepping method. As the separation between a pair of vortices relative to all other intervortex length scales decreases, however, the computational time required diverges. Accuracy is usually the most discouraging casualty when trying to account for such vortex motion, though the varying energy of this ostensibly Hamiltonian system is a potentially more serious problem. We solve these problems by a series of coordinate transformations: We first transform to action-angle coordinates, which, to lowest order, treat the close pair as a single vortex amongst all others with an internal degree of freedom. We next, and most importantly, apply Lie transform perturbation theory to remove the higher-order correction terms in succession. The overall transformation drastically increases the numerical efficiency and ensures that the total energy remains constant to high accuracy.
Field sampling method for quantifying odorants in humid environments
Most air quality studies in agricultural environments typically use thermal desorption analysis for quantifying volatile organic compounds (VOC) associated with odor. Carbon molecular sieves (CMS) are popular sorbent materials used in these studies. However, there is little information on the effe...
Synthetic Environments as visualization method for product design
Meijer, F.; van den Broek, Egon; Schouten, Theo E.; Damgrave, Roy Gerhardus Johannes; Damgrave, Roy G.J.; de Ridder, Huib; Rogowitz, Bernice E.; Pappas, Thrasyvoulos N.
2010-01-01
In this paper, we explored the use of low fidelity Synthetic Environments (SE; i.e., a combination of simulation techniques) for product design. We explored the usefulness of low fidelity SE to make design problems explicit. In particular, we were interested in the influence of interactivity on user
PROTONEGOCIATIONS - SALES FORECAST AND COMPETITIVE ENVIRONMENT ANALYSIS METHOD
Directory of Open Access Journals (Sweden)
Lupu Adrian Gelu
2009-05-01
Full Text Available Protonegotiation management, as part of successful negotiations of the organizations, is an issue for analysis extremely important for today’s managers in the confrontations generated by the changes of the environments in the period of transition to marke
Wen, Xiaodong; He, Lei; Shi, Chunsheng; Deng, Qingwen; Wang, Jiwei; Zhao, Xia
2013-11-01
In this work, the analytical performance of conventional spectrophotometer was improved through the coupling of effective preconcentration method with spectrophotometric determination. Rapidly synergistic cloud point extraction (RS-CPE) was used to pre-concentrate ultra trace cobalt and firstly coupled with spectrophotometric determination. The developed coupling was simple, rapid and efficient. The factors influencing RS-CPE and spectrophotometer were optimized. Under the optimal conditions, the limit of detection (LOD) was 0.6 μg L-1, with sensitivity enhancement factor of 23. The relative standard deviation (RSD) for seven replicate measurements of 50 μg L-1 of cobalt was 4.3%. The recoveries for the spiked samples were in the acceptable range of 93.8-105%.
The Methods of Hilbert Spaces and Structure of the Fixed-Point Set of Lipschitzian Mapping
Directory of Open Access Journals (Sweden)
Jarosław Górnicki
2009-01-01
Full Text Available The purpose of this paper is to prove, by asymptotic center techniques and the methods of Hilbert spaces, the following theorem. Let H be a Hilbert space, let C be a nonempty bounded closed convex subset of H, and let M=[an,k]n,k≥1 be a strongly ergodic matrix. If T:C→C is a lipschitzian mapping such that liminfn→∞infm=0,1,...∑k=1∞an,k·‖Tk+m‖2<2, then the set of fixed points Fix T={x∈C:Tx=x} is a retract of C. This result extends and improves the corresponding results of [7, Corollary 9] and [8, Corollary 1].
Improved incremental conductance method for maximum power point tracking using cuk converter
Directory of Open Access Journals (Sweden)
M. Saad Saoud
2014-03-01
Full Text Available The Algerian government relies on a strategy focused on the development of inexhaustible resources such as solar and uses to diversify energy sources and prepare the Algeria of tomorrow: about 40% of the production of electricity for domestic consumption will be from renewable sources by 2030, Therefore it is necessary to concentrate our forces in order to reduce the application costs and to increment their performances, Their performance is evaluated and compared through theoretical analysis and digital simulation. This paper presents simulation of improved incremental conductance method for maximum power point tracking (MPPT using DC-DC cuk converter. This improved algorithm is used to track MPPs because it performs precise control under rapidly changing Atmospheric conditions, Matlab/ Simulink were employed for simulation studies.
Solving eigenvalue problems on curved surfaces using the Closest Point Method
Macdonald, Colin B.
2011-06-01
Eigenvalue problems are fundamental to mathematics and science. We present a simple algorithm for determining eigenvalues and eigenfunctions of the Laplace-Beltrami operator on rather general curved surfaces. Our algorithm, which is based on the Closest Point Method, relies on an embedding of the surface in a higher-dimensional space, where standard Cartesian finite difference and interpolation schemes can be easily applied. We show that there is a one-to-one correspondence between a problem defined in the embedding space and the original surface problem. For open surfaces, we present a simple way to impose Dirichlet and Neumann boundary conditions while maintaining second-order accuracy. Convergence studies and a series of examples demonstrate the effectiveness and generality of our approach. © 2011 Elsevier Inc.
Combining Accuracy and Efficiency: An Incremental Focal-Point Method Based on Pair Natural Orbitals.
Fiedler, Benjamin; Schmitz, Gunnar; Hättig, Christof; Friedrich, Joachim
2017-12-12
In this work, we present a new pair natural orbitals (PNO)-based incremental scheme to calculate CCSD(T) and CCSD(T0) reaction, interaction, and binding energies. We perform an extensive analysis, which shows small incremental errors similar to previous non-PNO calculations. Furthermore, slight PNO errors are obtained by using TPNO = TTNO with appropriate values of 10-7 to 10-8 for reactions and 10-8 for interaction or binding energies. The combination with the efficient MP2 focal-point approach yields chemical accuracy relative to the complete basis-set (CBS) limit. In this method, small basis sets (cc-pVDZ, def2-TZVP) for the CCSD(T) part are sufficient in case of reactions or interactions, while some larger ones (e.g., (aug)-cc-pVTZ) are necessary for molecular clusters. For these larger basis sets, we show the very high efficiency of our scheme. We obtain not only tremendous decreases of the wall times (i.e., factors >102) due to the parallelization of the increment calculations as well as of the total times due to the application of PNOs (i.e., compared to the normal incremental scheme) but also smaller total times with respect to the standard PNO method. That way, our new method features a perfect applicability by combining an excellent accuracy with a very high efficiency as well as the accessibility to larger systems due to the separation of the full computation into several small increments.
Lenton, T M; Livina, V N; Dakos, V; van Nes, E H; Scheffer, M
2012-03-13
We address whether robust early warning signals can, in principle, be provided before a climate tipping point is reached, focusing on methods that seek to detect critical slowing down as a precursor of bifurcation. As a test bed, six previously analysed datasets are reconsidered, three palaeoclimate records approaching abrupt transitions at the end of the last ice age and three models of varying complexity forced through a collapse of the Atlantic thermohaline circulation. Approaches based on examining the lag-1 autocorrelation function or on detrended fluctuation analysis are applied together and compared. The effects of aggregating the data, detrending method, sliding window length and filtering bandwidth are examined. Robust indicators of critical slowing down are found prior to the abrupt warming event at the end of the Younger Dryas, but the indicators are less clear prior to the Bølling-Allerød warming, or glacial termination in Antarctica. Early warnings of thermohaline circulation collapse can be masked by inter-annual variability driven by atmospheric dynamics. However, rapidly decaying modes can be successfully filtered out by using a long bandwidth or by aggregating data. The two methods have complementary strengths and weaknesses and we recommend applying them together to improve the robustness of early warnings.
Directory of Open Access Journals (Sweden)
Mohammed F. Hadi
2012-01-01
Full Text Available It is argued here that more accurate though more compute-intensive alternate algorithms to certain computational methods which are deemed too inefficient and wasteful when implemented within serial codes can be more efficient and cost-effective when implemented in parallel codes designed to run on today's multicore and many-core environments. This argument is most germane to methods that involve large data sets with relatively limited computational density—in other words, algorithms with small ratios of floating point operations to memory accesses. The examples chosen here to support this argument represent a variety of high-order finite-difference time-domain algorithms. It will be demonstrated that a three- to eightfold increase in floating-point operations due to higher-order finite-differences will translate to only two- to threefold increases in actual run times using either graphical or central processing units of today. It is hoped that this argument will convince researchers to revisit certain numerical techniques that have long been shelved and reevaluate them for multicore usability.
Methods for geothermal reservoir detection emphasizing submerged environments
Energy Technology Data Exchange (ETDEWEB)
Case, C.W.; Wilde, P.
1976-05-21
This report has been prepared for the California State Lands Commission to aid them in evaluating exploration programs for geothermal reservoirs, particularly in submerged land environments. Three charts show: (1) a logical progression of specific geologic, geochemical, and geophysical exploration techniques for detecting geothermal reservoirs in various geologic environments with emphasis on submerged lands, (2) various exploration techniques which can be used to develop specific information in geothermal areas, and (3) if various techniques will apply to geothermal exploration according to a detailed geologic classification. A narrative in semi-outline form supplements these charts, providing for each technique; a brief description, advantages, disadvantages, special geologic considerations, and specific references. The specific geologic situation will control the exploration criterion to be used for reservoir detection. General guidelines are established which may be of use in evaluating such a program, but the optimum approach will vary with each situation.
Wireless Communication Enhancement Methods for Mobile Robots in Radiation Environments
Nattanmai Parasuraman, Ramviyas; Ferre, Manuel
In hostile environments such as in scientific facilities where ionising radiation is a dominant hazard, reducing human interventions by increasing robotic operations are desirable. CERN, the European Organization for Nuclear Research, has around 50 km of underground scientific facilities, where wireless mobile robots could help in the operation of the accelerator complex, e.g. in conducting remote inspections and radiation surveys in different areas. The main challenges to be considered here are not only that the robots should be able to go over long distances and operate for relatively long periods, but also the underground tunnel environment, the possible presence of electromagnetic fields, radiation effects, and the fact that the robots shall in no way interrupt the operation of the accelerators. Having a reliable and robust wireless communication system is essential for successful execution of such robotic missions and to avoid situations of manual recovery of the robots in the event that the robot runs ...
Food Environments around American Indian Reservations: A Mixed Methods Study.
Directory of Open Access Journals (Sweden)
Gwen M Chodur
Full Text Available To describe the food environments experienced by American Indians living on tribal lands in California.Geocoded statewide food business data were used to define and categorize existing food vendors into healthy, unhealthy, and intermediate composite categories. Distance to and density of each of the composite food vendor categories for tribal lands and nontribal lands were compared using multivariate linear regression. Quantitative results were concurrently triangulated with qualitative data from in-depth interviews with tribal members (n = 24.After adjusting for census tract-level urbanicity and per capita income, results indicate there were significantly fewer healthy food outlets per square mile for tribal areas compared to non-tribal areas. Density of unhealthy outlets was not significantly different for tribal versus non-tribal areas. Tribal members perceived their food environment negatively and reported barriers to the acquisition of healthy food.Urbanicity and per capita income do not completely account for disparities in food environments among American Indians tribal lands compared to nontribal lands. This disparity in access to healthy food may present a barrier to acting on the intention to consume healthy food.
Food Environments around American Indian Reservations: A Mixed Methods Study.
Chodur, Gwen M; Shen, Ye; Kodish, Stephen; Oddo, Vanessa M; Antiporta, Daniel A; Jock, Brittany; Jones-Smith, Jessica C
2016-01-01
To describe the food environments experienced by American Indians living on tribal lands in California. Geocoded statewide food business data were used to define and categorize existing food vendors into healthy, unhealthy, and intermediate composite categories. Distance to and density of each of the composite food vendor categories for tribal lands and nontribal lands were compared using multivariate linear regression. Quantitative results were concurrently triangulated with qualitative data from in-depth interviews with tribal members (n = 24). After adjusting for census tract-level urbanicity and per capita income, results indicate there were significantly fewer healthy food outlets per square mile for tribal areas compared to non-tribal areas. Density of unhealthy outlets was not significantly different for tribal versus non-tribal areas. Tribal members perceived their food environment negatively and reported barriers to the acquisition of healthy food. Urbanicity and per capita income do not completely account for disparities in food environments among American Indians tribal lands compared to nontribal lands. This disparity in access to healthy food may present a barrier to acting on the intention to consume healthy food.
Application of distributed point source method (DPSM) to wave propagation in anisotropic media
Fooladi, Samaneh; Kundu, Tribikram
2017-04-01
Distributed Point Source Method (DPSM) was developed by Placko and Kundu1, as a technique for modeling electromagnetic and elastic wave propagation problems. DPSM has been used for modeling ultrasonic, electrostatic and electromagnetic fields scattered by defects and anomalies in a structure. The modeling of such scattered field helps to extract valuable information about the location and type of defects. Therefore, DPSM can be used as an effective tool for Non-Destructive Testing (NDT). Anisotropy adds to the complexity of the problem, both mathematically and computationally. Computation of the Green's function which is used as the fundamental solution in DPSM is considerably more challenging for anisotropic media, and it cannot be reduced to a closed-form solution as is done for isotropic materials. The purpose of this study is to investigate and implement DPSM for an anisotropic medium. While the mathematical formulation and the numerical algorithm will be considered for general anisotropic media, more emphasis will be placed on transversely isotropic materials in the numerical example presented in this paper. The unidirectional fiber-reinforced composites which are widely used in today's industry are good examples of transversely isotropic materials. Development of an effective and accurate NDT method based on these modeling results can be of paramount importance for in-service monitoring of damage in composite structures.
LSHSIM: A Locality Sensitive Hashing based method for multiple-point geostatistics
Moura, Pedro; Laber, Eduardo; Lopes, Hélio; Mesejo, Daniel; Pavanelli, Lucas; Jardim, João; Thiesen, Francisco; Pujol, Gabriel
2017-10-01
Reservoir modeling is a very important task that permits the representation of a geological region of interest, so as to generate a considerable number of possible scenarios. Since its inception, many methodologies have been proposed and, in the last two decades, multiple-point geostatistics (MPS) has been the dominant one. This methodology is strongly based on the concept of training image (TI) and the use of its characteristics, which are called patterns. In this paper, we propose a new MPS method that combines the application of a technique called Locality Sensitive Hashing (LSH), which permits to accelerate the search for patterns similar to a target one, with a Run-Length Encoding (RLE) compression technique that speeds up the calculation of the Hamming similarity. Experiments with both categorical and continuous images show that LSHSIM is computationally efficient and produce good quality realizations. In particular, for categorical data, the results suggest that LSHSIM is faster than MS-CCSIM, one of the state-of-the-art methods.
Numerical simulation of electromagnetic acoustic transducers using distributed point source method.
Eskandarzade, M; Kundu, T; Liebeaux, N; Placko, D; Mobadersani, F
2010-05-01
In spite of many advances in analytical and numerical modeling techniques for solving different engineering problems, an efficient solution technique for wave propagation modeling of an electromagnetic acoustic transducer (EMAT) system is still missing. Distributed point source method (DPSM) is a newly developed semi-analytical technique developed since 2000 by Placko and Kundu (2007) [12] that is very powerful and straightforward for solving various engineering problems, including acoustic and electromagnetic modeling problems. In this study DPSM has been employed to model the Lorentz type EMAT with a meander line and flat spiral type coil. The problem of wave propagation has been solved and eddy currents and Lorentz forces have been calculated. The displacement field has been obtained as well. While modeling the Lorentz force the effect of dynamic magnetic field has been considered that most current analyses ignore. Results from this analysis have been compared with the finite element method (FEM) based predictions. It should be noted that with the current state of knowledge this problem can be solved only by FEM. Copyright 2009 Elsevier B.V. All rights reserved.
Arahman, Nasrul; Maimun, Teuku; Mukramah, Syawaliah
2017-01-01
The composition of polymer solution and the methods of membrane preparation determine the solidification process of membrane. The formation of membrane structure prepared via non-solvent induced phase separation (NIPS) method is mostly determined by phase separation process between polymer, solvent, and non-solvent. This paper discusses the phase separation process of polymer solution containing Polyethersulfone (PES), N-methylpirrolidone (NMP), and surfactant Tetronic 1307 (Tet). Cloud point experiment is conducted to determine the amount of non-solvent needed on induced phase separation. Amount of water required as a non-solvent decreases by the addition of surfactant Tet. Kinetics of phase separation for such system is studied by the light scattering measurement. With the addition of Tet., the delayed phase separation is observed and the structure growth rate decreases. Moreover, the morphology of fabricated membrane from those polymer systems is analyzed by scanning electron microscopy (SEM). The images of both systems show the formation of finger-like macrovoids through the cross-section.
Directory of Open Access Journals (Sweden)
Ibrahim Karahan
2016-04-01
Full Text Available Let C be a nonempty closed convex subset of a real Hilbert space H. Let {T_{n}}:C›H be a sequence of nearly nonexpansive mappings such that F:=?_{i=1}^{?}F(T_{i}?Ø. Let V:C›H be a ?-Lipschitzian mapping and F:C›H be a L-Lipschitzian and ?-strongly monotone operator. This paper deals with a modified iterative projection method for approximating a solution of the hierarchical fixed point problem. It is shown that under certain approximate assumptions on the operators and parameters, the modified iterative sequence {x_{n}} converges strongly to x^{*}?F which is also the unique solution of the following variational inequality: ?0, ?x?F. As a special case, this projection method can be used to find the minimum norm solution of above variational inequality; namely, the unique solution x^{*} to the quadratic minimization problem: x^{*}=argmin_{x?F}?x?². The results here improve and extend some recent corresponding results of other authors.
Fernández-Peña, Rosario; Fuentes-Pumarola, Concepció; Malagón-Aguilera, M Carme; Bonmatí-Tomàs, Anna; Bosch-Farré, Cristina; Ballester-Ferrando, David
2016-09-01
Adapting university programmes to European Higher Education Area criteria has required substantial changes in curricula and teaching methodologies. Reflective learning (RL) has attracted growing interest and occupies an important place in the scientific literature on theoretical and methodological aspects of university instruction. However, fewer studies have focused on evaluating the RL methodology from the point of view of nursing students. To assess nursing students' perceptions of the usefulness and challenges of RL methodology. Mixed method design, using a cross-sectional questionnaire and focus group discussion. The research was conducted via self-reported reflective learning questionnaire complemented by focus group discussion. Students provided a positive overall evaluation of RL, highlighting the method's capacity to help them better understand themselves, engage in self-reflection about the learning process, optimize their strengths and discover additional training needs, along with searching for continuous improvement. Nonetheless, RL does not help them as much to plan their learning or identify areas of weakness or needed improvement in knowledge, skills and attitudes. Among the difficulties or challenges, students reported low motivation and lack of familiarity with this type of learning, along with concerns about the privacy of their reflective journals and about the grading criteria. In general, students evaluated RL positively. The results suggest areas of needed improvement related to unfamiliarity with the methodology, ethical aspects of developing a reflective journal and the need for clear evaluation criteria. Copyright © 2016 Elsevier Ltd. All rights reserved.
New encapsulation method using low-melting-point alloy for sealing micro heat pipes
Energy Technology Data Exchange (ETDEWEB)
Li, Congming; Wang, Xiaodong; Zhou, Chuanpeng; Luo, Yi; Li, Zhixin; Li, Sidi [Dalian University of Technology, Dalian (China)
2017-06-15
This study proposed a method using Low-melting-point alloy (LMPA) to seal Micro heat pipes (MHPs), which were made of Si substrates and glass covers. Corresponding MHP structures with charging and sealing channels were designed. Three different auxiliary structures were investigated to study the sealability of MHPs with LMPA. One structure is rectangular and the others are triangular with corner angles of 30° and 45°, respectively. Each auxiliary channel for LMPA is 0.5 mm wide and 135 μm deep. LMPA was heated to molten state, injected to channels, and then cooled to room temperature. According to the material characteristic of LMPA, the alloy should swell in the following 12 hours to form strong interaction force between LMPA and Si walls. Experimental results show that the flow speed of liquid LMPA in channels plays an important role in sealing MHPs, and the sealing performance of triangular structures is always better than that of rectangular structure. Therefore, triangular structures are more suitable in sealing MHPs than rectangular ones. LMPA sealing is a plane packaging method that can be applied in the thermal management of high-power IC device and LEDs. Meanwhile, implanting in commercialized fabrication of MHP is easy.
Fixed point theorems in locally convex spacesÃ¢Â€Â”the Schauder mapping method
Directory of Open Access Journals (Sweden)
S. Cobzaş
2006-03-01
Full Text Available In the appendix to the book by F. F. Bonsal, Lectures on Some Fixed Point Theorems of Functional Analysis (Tata Institute, Bombay, 1962 a proof by Singbal of the Schauder-Tychonoff fixed point theorem, based on a locally convex variant of Schauder mapping method, is included. The aim of this note is to show that this method can be adapted to yield a proof of Kakutani fixed point theorem in the locally convex case. For the sake of completeness we include also the proof of Schauder-Tychonoff theorem based on this method. As applications, one proves a theorem of von Neumann and a minimax result in game theory.
CAD ACTIVE MODELS: AN INNOVATIVE METHOD IN ASSEMBLY ENVIRONMENT
Directory of Open Access Journals (Sweden)
NADDEO Alessandro
2010-07-01
Full Text Available The aim of this work is to show the use and the versatility of the active models in different applications. It has been realized an active model of a cylindrical spring and it has been applied in two mechanisms, different for typology and for backlash loads. The first example is a dynamometer in which nthe cylindrical spring is loaded by traction forces, while the second example is made up from a pressure valve in which the cylindrical-conic spring works under compression. The imposition of the loads in both cases, has allowed us to evaluate the model of the mechanism in different working conditions, also in assembly environment.
Xu, Ying; Zhou, Hongde
2017-09-01
Soluble microbial products, consisting of protein, carbohydrate and humics, are generally considered as the main membrane foulants during the performance of membrane bioreactors. Nitrate and nitrite have been proved to affect the determination of carbohydrate when anthrone-sulfuric acid photometric method is used. In this study, three chemical analytical methods based on photometric assay, including the standard curve method, conventional standard addition method and H-point standard addition method, were assessed for the quantification of carbohydrate in order to reduce the interference. Three methods were carried out for both artificial and real wastewater sample analysis. The results indicated a significant amount of matrix interference, which could be eliminated through the use of H-point standard addition. This study proposed the H-point standard addition method as a more accurate and convenient option for carbohydrate determination.
A novel all-fiber optic flow cytometer technology for Point-of Care and Remote Environments
Mermut, Ozzy
Traditional flow cytometry designs tend to be bulky systems with a complex optical-fluidic sub-system and often require trained personnel for operation. This makes them difficult to readily translate to remote site testing applications. A new compact and portable fiber-optic flow cell (FOFC) technology has been developed at INO. We designed and engineered a specialty optical fiber through which a square hole is transversally bored by laser micromachining. A capillary is fitted into that hole to flow analyte within the fiber square cross-section for detection and counting. With demonstrated performance benchmarks potentially comparable to commercial flow cytometers, our FOFC provides several advantages compared to classic free-space con-figurations, e.g., sheathless flow, low cost, reduced number of optical components, no need for alignment (occurring in the fabrication process only), ease-of-use, miniaturization, portability, and robustness. This sheathless configuration, based on a fiber optic flow module, renders this cytometer amenable to space-grade microgravity environments. We present our recent results for an all-fiber approach to achieve a miniature FOFC to translate flow cytometry from bench to a portable, point-of-care device for deployment in remote settings. Our unique fiber approach provides the capability to illuminate a large surface with a uniform intensity distri-bution, independently of the initial shape originating from the light source, and without loss of optical power. The CVs and sensitivities are measured and compared to industry benchmarks. Finally, integration of LEDs enable several advantages in cost, compactness, and wavelength availability.
Guang, Yang; Ge, Song; Han, Liu
2016-01-01
The harmonious development in society, economy and environment are crucial to regional sustained boom. However, the society, economy and environment are not respectively independent, but both mutually promotes one which, or restrict mutually complex to have the long-enduring overall process. The present study is an attempt to investigate the relationship and interaction of society, economy and environment in China based on the data from 2004 to 2013. The principal component analysis (PCA) model was employed to identify the main factors effecting the society, economy and environment subsystems, and SD (system dynamics) method used to carry out dynamic assessment for future state of sustainability from society, economy and environment perspective with future indicator values. Sustainable development in China was divided in the study into three phase from 2004 to 2013 based competitive values of these three subsystems. According to the results of PCA model, China is in third phase, and the economy growth is faster than the environment development, while the social development still maintained a steady and rapid growth, implying that the next step for sustainable development in China should focus on society development, especially the environment development.
Development of a Cloud-Point Extraction Method for Cobalt Determination in Natural Water Samples
Directory of Open Access Journals (Sweden)
Mohammad Reza Jamali
2013-01-01
Full Text Available A new, simple, and versatile cloud-point extraction (CPE methodology has been developed for the separation and preconcentration of cobalt. The cobalt ions in the initial aqueous solution were complexed with 4-Benzylpiperidinedithiocarbamate, and Triton X-114 was added as surfactant. Dilution of the surfactant-rich phase with acidified ethanol was performed after phase separation, and the cobalt content was measured by flame atomic absorption spectrometry. The main factors affecting CPE procedure, such as pH, concentration of ligand, amount of Triton X-114, equilibrium temperature, and incubation time were investigated and optimized. Under the optimal conditions, the limit of detection (LOD for cobalt was 0.5 μg L-1, with sensitivity enhancement factor (EF of 67. Calibration curve was linear in the range of 2–150 μg L-1, and relative standard deviation was 3.2% (c=100 μg L-1; n=10. The proposed method was applied to the determination of trace cobalt in real water samples with satisfactory analytical results.
CaFE: a tool for binding affinity prediction using end-point free energy methods.
Liu, Hui; Hou, Tingjun
2016-07-15
Accurate prediction of binding free energy is of particular importance to computational biology and structure-based drug design. Among those methods for binding affinity predictions, the end-point approaches, such as MM/PBSA and LIE, have been widely used because they can achieve a good balance between prediction accuracy and computational cost. Here we present an easy-to-use pipeline tool named Calculation of Free Energy (CaFE) to conduct MM/PBSA and LIE calculations. Powered by the VMD and NAMD programs, CaFE is able to handle numerous static coordinate and molecular dynamics trajectory file formats generated by different molecular simulation packages and supports various force field parameters. CaFE source code and documentation are freely available under the GNU General Public License via GitHub at https://github.com/huiliucode/cafe_plugin It is a VMD plugin written in Tcl and the usage is platform-independent. tingjunhou@zju.edu.cn. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
An end-point method based on graphene oxide for RNase H analysis and inhibitors screening.
Zhao, Chuan; Fan, Jialong; Peng, Lan; Zhao, Lijian; Tong, Chunyi; Wang, Wei; Liu, Bin
2017-04-15
As a highly conserved damage repair protein, RNase H can hydrolysis DNA-RNA heteroduplex endonucleolytically and cleave RNA-DNA junctions as well. In this study, we have developed an accurate and sensitive RNase H assay based on fluorophore-labeled chimeric substrate hydrolysis and the differential affinity of graphene oxide on RNA strand with different length. This end-point measurement method can detect RNase H in a range of 0.01 to 1 units /mL with a detection limit of 5.0×10 -3 units/ mL under optimal conditions. We demonstrate the utility of the assay by screening antibiotics, resulting in the identification of gentamycin, streptomycin and kanamycin as inhibitors with IC 50 of 60±5µM, 70±8µM and 300±20µM, respectively. Furthermore, the assay was reliably used to detect RNase H in complicated biosamples and found that RNase H activity in tumor cells was inhibited by gentamycin and streptomycin sulfate in a concentration-dependent manner. The average level of RNase H in serums of HBV infection group was similar to that of control group. In summary, the assay provides an alternative tool for biochemical analysis for this enzyme and indicates the feasibility of high throughput screening inhibitors of RNase H in vitro and in vivo. Copyright © 2016 Elsevier B.V. All rights reserved.
An efficient method for removing point sources from full-sky radio interferometric maps
Berger, Philippe; Oppermann, Niels; Pen, Ue-Li; Shaw, J. Richard
2017-12-01
A new generation of wide-field radio interferometers designed for 21-cm surveys is being built as drift scan instruments allowing them to observe large fractions of the sky. With large numbers of antennas and frequency channels, the enormous instantaneous data rates of these telescopes require novel, efficient, data management and analysis techniques. The m-mode formalism exploits the periodicity of such data with the sidereal day, combined with the assumption of statistical isotropy of the sky, to achieve large computational savings and render optimal analysis methods computationally tractable. We present an extension to that work that allows us to adopt a more realistic sky model and treat objects such as bright point sources. We develop a linear procedure for deconvolving maps, using a Wiener filter reconstruction technique, which simultaneously allows filtering of these unwanted components. We construct an algorithm, based on the Sherman-Morrison-Woodbury formula, to efficiently invert the data covariance matrix, as required for any optimal signal-to-noise ratio weighting. The performance of our algorithm is demonstrated using simulations of a cylindrical transit telescope.
A Numerical Investigation of CFRP-Steel Interfacial Failure with Material Point Method
Shen, Luming; Faleh, Haydar; Al-Mahaidi, Riadh
2010-05-01
The success of retrofitting steel structures by using the Carbon Fibre Reinforced Polymers (CFRP) significantly depends on the performance and integrity of CFRP-steel joint and the effectiveness of the adhesive used. Many of the previous numerical studies focused on the design and structural performance of the CFRP-steel system and neglected the mechanical responses of adhesive layer, which results in the lack of understanding in how the adhesive layer between the CFRP and steel performs during the loading and failure stages. Based on the recent observation on the failure of CFRP-steel bond in the double lap shear tests [1], a numerical approach is proposed in this study to simulate the delamination process of CFRP sheet from steel plate using the Material Point Method (MPM). In the proposed approach, an elastoplasticity model with a linear hardening and softening law is used to model the epoxy layer. The MPM [2], which does not employ fixed mesh-connectivity, is employed as a robust spatial discretization method to accommodate the multi-scale discontinuities involved in the CFRP-steel bond failure process. To demonstrate the potential of the proposed approach, a parametric study is conducted to investigate the effects of bond length and loading rates on the capacity and failure modes of CFRP-steel system. The evolution of the CFRP-steel bond failure and the distribution of stress and strain along bond length direction will be presented. The simulation results not only well match the available experimental data but also provide a better understanding on the physics behind the CFRP sheet delamination process.
Simulation of size segregation in granular flow with material point method
Directory of Open Access Journals (Sweden)
Fei Minglong
2017-01-01
Full Text Available Segregation is common in granular flows consisting of mixtures of particles differing in size or density. In gravity-driven flows, both gradients in total pressure (induced by gravity and gradients in velocity fluctuation fields (often associated with shear rate gradients work together to govern the evolution of segregation. Since the local shear rate and velocity fluctuations are dependent on the local concentration of the components, understanding the co-evolution of segregation and flow is critical for understanding and predicting flows where there can be a variety of particle sizes and densities, such as in nature and industry. Kinetic theory has proven to be a robust framework for predicting this simultaneous evolution but has a limit in its applicability to dense systems where collisions are highly correlated. In this paper, we introduce a model that captures the coevolution of these evolving dynamics for high density gravity driven granular mixtures. For the segregation dynamics we use a recently developed mixture theory (Fan & Hill 2011, New J. Phys; Hill & Tan 2014, J. Fluid Mech. which captures the combined effects of gravity and fluctuation fields on segregation evolution in high density granular flows. For the mixture flow dynamics, we use a recently proposed viscous-elastic-plastic constitutive model, which can describe the multi-state behaviors of granular materials, i.e. the granular solid, granular liquid and granular gas mechanical states (Fei et al. 2016, Powder Technol.. The platform we use for implementing this model is a modified Material Point Method (MPM, and we use discrete element method simulations of gravity-driven flow in an inclined channel to demonstrate that this new MPM model can predict the final segregation distribution as well as flow velocity profile well. We then discuss ongoing work where we are using this platform to test the effectiveness of particular segregation models under different boundary conditions.
DEFF Research Database (Denmark)
Valentini, Chiara
2017-01-01
The term environment refers to the internal and external context in which organizations operate. For some scholars, environment is defined as an arrangement of political, economic, social and cultural factors existing in a given context that have an impact on organizational processes and structures....... For others, environment is a generic term describing a large variety of stakeholders and how these interact and act upon organizations. Organizations and their environment are mutually interdependent and organizational communications are highly affected by the environment. This entry examines the origin...... and development of organization-environment interdependence, the nature of the concept of environment and its relevance for communication scholarships and activities....
Energy Technology Data Exchange (ETDEWEB)
Liu, Wenyang [Department of Bioengineering, University of California, Los Angeles, Los Angeles, California 90095 (United States); Cheung, Yam [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas 75390 (United States); Sawant, Amit [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas, 75390 and Department of Radiation Oncology, University of Maryland, College Park, Maryland 20742 (United States); Ruan, Dan, E-mail: druan@mednet.ucla.edu [Department of Bioengineering, University of California, Los Angeles, Los Angeles, California 90095 and Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, California 90095 (United States)
2016-05-15
Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) is reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced
Analysis and comparison of different methods to characterize turbulent environment
Kozak, Liudmyla; Lui, Antony; Kronberg, Elena; Grigorenko, Elena; Savin, Sergey; Budaev, Vyacheslav
2017-04-01
The methods and approaches that can be used to analyze the hydrodynamic and magnetohydrodynamic turbulent flows are selected. It is shown that the best methods to characterize the types of turbulent processes are the methods of statistical physics. Within the statistical approach we considered the fractal analysis (determination of fractal length and height of the maximum of the probability density fluctuations of the studied parameters), and multifractal analysis (study of a power dependence of high order statistical moments and construction of multifractal spectrum). It is indicated that the statistical analysis of properties of turbulent processes can be supplemented by the spectral studies: Fourier and wavelet analysis. In order to test the methods and approaches we have used the magnetic field measurements from the space mission Cluster-II with a sampling frequency of 22.5 Hz in different regions of Earth's magnetosphere and solar wind plasma. We got a good agreement between different approaches and their mutual complementing to provide a general view of the turbulence. The work is done in the frame of the grant Az. 90 312 from the Volkswagen Foundation.
Effect of processing methods and storage environment on moisture ...
African Journals Online (AJOL)
The objective of this study was to determine the effect of processing methods and storage parameters on moisture adsorption characteristics of dry matured yellow ginger (Zingiber officianale) to provide information for the prediction of shelf life and selection of packaging materials. Moisture adsorption was determined ...
Clausen, G; Høst, A; Toftum, J; Bekö, G; Weschler, C; Callesen, M; Buhl, S; Ladegaard, M B; Langer, S; Andersen, B; Sundell, J; Bornehag, C-G; Sigsgaard, T
2012-12-01
The principle objective of the Danish research program 'Indoor Environment and Children's Health' (IECH) was to explore associations between various exposures that children experience in their indoor environments (specifically their homes and daycare centers) and their well-being and health. The targeted health endpoints were allergy, asthma, and certain respiratory symptoms. The study was designed with two stages. In the first stage, a questionnaire survey was distributed to more than 17,000 families with children between the ages of 1 and 5. The questionnaire focused on the children's health and the environments within the homes they inhabited and daycare facilities they attended. More than 11,000 questionnaires were returned. In the second stage, a subsample of 500 children was selected for more detailed studies, including an extensive set of measurements in their homes and daycare centers and a clinical examination; all clinical examinations were carried out by the same physician. In this study, the methods used for data collection within the IECH research program are presented and discussed. Furthermore, initial findings are presented regarding descriptors of the study population and selected characteristics of the children's dwellings and daycare centers. This study outlines methods that might be followed by future investigators conducting large-scale field studies of potential connections between various indoor environmental factors and selected health endpoints. Of particular note are (i) the two-stage design - a broad questionnaire-based survey followed by a more intensive set of measurements among a subset of participants who have been selected based on their responses to the questionnaire; (ii) the case-base approach utilized in the stage 2 in contrast to the more commonly used case-control approach; (iii) the inclusion of the children's daycare environment when conducting intensive sampling to more fully capture the children's total indoor exposure; and
Wu, Yongren; Cisewski, Sarah E; Sun, Yi; Damon, Brooke J; Sachs, Barton L; Pellegrini, Vincent D; Slate, Elizabeth H; Yao, Hai
2017-09-01
Regional measurements of fixed charge densities (FCDs) of healthy human cartilage endplate (CEP) using a two-point electrical conductivity approach. The aim of this study was to determine the FCDs at four different regions (central, lateral, anterior, and posterior) of human CEP, and correlate the FCDs with tissue biochemical composition. The CEP, a thin layer of hyaline cartilage on the cranial and caudal surfaces of the intervertebral disc, plays an irreplaceable role in maintaining the unique physiological mechano-electrochemical environment inside the disc. FCD, arising from the carboxyl and sulfate groups of the glycosaminoglycans (GAG) in the extracellular matrix of the disc, is a key regulator of the disc ionic and osmotic environment through physicochemical and electrokinetic effects. Although FCDs in the annulus fibrosus (AF) and nucleus pulposus (NP) have been reported, quantitative baseline FCD in healthy human CEP has not been reported. CEP specimens were regionally isolated from human lumbar spines. FCD and ion diffusivity were concurrently investigated using a two-point electrical conductivity method. Biochemical assays were used to quantify regional GAG and water content. FCD in healthy human CEP was region-dependent, with FCD lowest in the lateral region (P = 0.044). Cross-region FCD was 30% to 60% smaller than FCD in NP, but similar to the AF and articular cartilage (AC). CEP FCD (average: 0.12 ± 0.03 mEq/g wet tissue) was correlated with GAG content (average: 31.24 ± 5.06 μg/mg wet tissue) (P = 0.005). In addition, the cross-region ion diffusivity in healthy CEP (2.97 ± 1.00 × 10 cm/s) was much smaller than the AF and NP. Healthy human CEP acts as a biomechanical interface, distributing loads between the bony vertebral body and soft disc tissues and as a gateway impeding rapid solute diffusion through the disc. N/A.
Application of a practical method for the isocenter point in vivo dosimetry by a transit signal
Energy Technology Data Exchange (ETDEWEB)
Piermattei, Angelo [UO di Fisica Sanitaria, Centro di Ricerca e Formazione ad Alta Tecnologia nelle Scienze Biomediche dell' Universita Cattolica Sacro Cuore, Campobasso (Italy); Fidanzio, Andrea [Istituto di Fisica, Universita Cattolica del Sacro Cuore, Rome (Italy); Azario, Luigi [Istituto di Fisica, Universita Cattolica del Sacro Cuore, Rome (Italy)] (and others)
2007-08-21
This work reports the results of the application of a practical method to determine the in vivo dose at the isocenter point, D{sub iso}, of brain thorax and pelvic treatments using a transit signal S{sub t}. The use of a stable detector for the measurement of the signal S{sub t} (obtained by the x-ray beam transmitted through the patient) reduces many of the disadvantages associated with the use of solid-state detectors positioned on the patient as their periodic recalibration, and their positioning is time consuming. The method makes use of a set of correlation functions, obtained by the ratio between S{sub t} and the mid-plane dose value, D{sub m}, in standard water-equivalent phantoms, both determined along the beam central axis. The in vivo measurement of D{sub iso} required the determination of the water-equivalent thickness of the patient along the beam central axis by the treatment planning system that uses the electron densities supplied by calibrated Hounsfield numbers of the computed tomography scanner. This way it is, therefore, possible to compare D{sub iso} with the stated doses, D{sub iso,TPS}, generally used by the treatment planning system for the determination of the monitor units. The method was applied in five Italian centers that used beams of 6 MV, 10 MV, 15 MV x-rays and {sup 60}Co {gamma}-rays. In particular, in four centers small ion-chambers were positioned below the patient and used for the S{sub t} measurement. In only one center, the S{sub t} signals were obtained directly by the central pixels of an EPID (electronic portal imaging device) equipped with commercial software that enabled its use as a stable detector. In the four centers where an ion-chamber was positioned on the EPID, 60 pelvic treatments were followed for two fields, an anterior-posterior or a posterior-anterior irradiation and a lateral-lateral irradiation. Moreover, ten brain tumors were checked for a lateral-lateral irradiation, and five lung tumors carried out with
Underwater Environment SDAP Method Using Multi Single-Beam Sonars
Directory of Open Access Journals (Sweden)
Zheping Yan
2013-01-01
Full Text Available A new autopilot system for unmanned underwater vehicle (UUV using multi-single-beam sonars is proposed for environmental exploration. The proposed autopilot system is known as simultaneous detection and patrolling (SDAP, which addresses two fundamental challenges: autonomous guidance and control. Autonomous guidance, autonomous path planning, and target tracking are based on the desired reference path which is reconstructed from the sonar data collected from the environmental contour with the predefined safety distance. The reference path is first estimated by using a support vector clustering inertia method and then refined by Bézier curves in order to satisfy the inertia property of the UUV. Differential geometry feedback linearization method is used to guide the vehicle entering into the predefined path while finite predictive stable inversion control algorithm is employed for autonomous target approaching. The experimental results from sea trials have demonstrated that the proposed system can provide satisfactory performance implying its great potential for future underwater exploration tasks.
Six Methods of Transaction Visualization in Virtual Environments
Energy Technology Data Exchange (ETDEWEB)
WALTHER, ELEANOR A.; TRAHAN, MICHAEL W.; SUMMERS, KENNETH L.; EYRING, TIM; CAUDELL, THOMAS P.
2002-06-01
Many governmental and corporate organizations are interested in tracking materials and/or information through a network. Often, as in the case of the U.S. Customs Service, the traffic is recorded as transactions through a large number of checkpoints with a correspondingly complex network. These networks will contain large numbers of uninteresting transactions that act as noise to conceal the chains of transactions of interest, such as drug trafficking. We are interested in finding significant paths in transaction data containing high noise levels, which tend to make traditional graph visualization methods complex and hard to understand. This paper covers the evolution of a series of graphing methods designed to assist in this search for paths-from 1-D to 2-D to 3-D and beyond.
Data distribution method of workflow in the cloud environment
Wang, Yong; Wu, Junjuan; Wang, Ying
2017-08-01
Cloud computing for workflow applications provides the required high efficiency calculation and large storage capacity and it also brings challenges to the protection of trade secrets and other privacy data. Because of privacy data will cause the increase of the data transmission time, this paper presents a new data allocation algorithm based on data collaborative damage degree, to improve the existing data allocation strategy? Safety and public cloud computer algorithm depends on the private cloud; the static allocation method in the initial stage only to the non-confidential data division to improve the original data, in the operational phase will continue to generate data to dynamically adjust the data distribution scheme. The experimental results show that the improved method is effective in reducing the data transmission time.
Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan
2016-05-01
To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) is reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. The authors have
A comparison of point counts with a new acoustic sampling method ...
African Journals Online (AJOL)
In our study, we compared results of traditional point counts with simultaneous acoustic samples obtained by automated soundscape recording units in the montane forest of Mount Cameroon. We showed that the estimates of species richness, abundance and community composition based on point counts and post-hoc ...
Baucom, Jason; Transue, Thomas; Fuentes-Cabrera, Miguel; Krahn, Joseph; Darden, Thomas; Sagui, Celeste
2004-03-01
Molecular dynamics simulations of the DNA duplex d(CCAACGTTGG)2 were used to study the relationship between DNA sequence and structure. Three different force fields were used: a traditional description based on atomic point charges, a polarizable force field and an ``extra-point" force field (with additional charges on extra-nuclear sites). It is found that in crystal environment all the force fields reproduce fairly well the sequence-dependent features of the experimental structure. The polarizable force fields, however, outperforms the other two, pointing out to the need of the inclusion of polarization for accurate descriptions of DNA.
Energy Technology Data Exchange (ETDEWEB)
York, A.R. II [Sandia National Labs., Albuquerque, NM (United States). Engineering and Process Dept.
1997-07-01
The material point method (MPM) is an evolution of the particle in cell method where Lagrangian particles or material points are used to discretize the volume of a material. The particles carry properties such as mass, velocity, stress, and strain and move through a Eulerian or spatial mesh. The momentum equation is solved on the Eulerian mesh. Modifications to the material point method are developed that allow the simulation of thin membranes, compressible fluids, and their dynamic interactions. A single layer of material points through the thickness is used to represent a membrane. The constitutive equation for the membrane is applied in the local coordinate system of each material point. Validation problems are presented and numerical convergence is demonstrated. Fluid simulation is achieved by implementing a constitutive equation for a compressible, viscous, Newtonian fluid and by solution of the energy equation. The fluid formulation is validated by simulating a traveling shock wave in a compressible fluid. Interactions of the fluid and membrane are handled naturally with the method. The fluid and membrane communicate through the Eulerian grid on which forces are calculated due to the fluid and membrane stress states. Validation problems include simulating a projectile impacting an inflated airbag. In some impact simulations with the MPM, bodies may tend to stick together when separating. Several algorithms are proposed and tested that allow bodies to separate from each other after impact. In addition, several methods are investigated to determine the local coordinate system of a membrane material point without relying upon connectivity data.
The importance of activity-based costing method (ABC) In Romania's business environment changes
Căpuşneanu, Sorinel/I; Cokins, Gary; Barbu, Cristian Marian
2011-01-01
The purpose of this paper is to present the importance of Activity-Based Costing method (ABC) in Romania's business environment changes. We analyzed the possibilities to adapt to a modern management accounting method and managerial accounting organization assumptions of the ABC (Activity-Based Costing) method in Romanian enterprises. The article ends with the authors' conclusions about the changes of the ABC method in the Romanian’s business environment.
Exploring Non-Traditional Learning Methods in Virtual and Real-World Environments
Lukman, Rebeka; Krajnc, Majda
2012-01-01
This paper identifies the commonalities and differences within non-traditional learning methods regarding virtual and real-world environments. The non-traditional learning methods in real-world have been introduced within the following courses: Process Balances, Process Calculation, and Process Synthesis, and within the virtual environment through…
Electroanalytical methods in characterization of sulfur species in aqueous environment
Directory of Open Access Journals (Sweden)
Irena Ciglenečki
2014-12-01
Full Text Available Electroanalytical (voltammetric, polarographic, chronoamperometric methods on an Hg electrode were applied for studying of different sulfur compounds in model and natural water systems (anoxic lakes, waste water, rain precipitation, sea-aerosols. In all investigated samples typical HgS reduction voltammetric peak, characteristic for many different reduced sulfur species (RSS: sulfide, elemental sulfur, polysulfide, labile metal sulfide and organosulfur species was recorded at about -0.6 V vs. Ag/AgCl reference electrode. In addition, in anoxic waters which are enriched with sulfide and iron species, voltammetric peaks characteristic for the presence of free Fe(II and FeS nanoparticles (NPs were recorded at -1.4 V and around -0.45 V, respectively. Depending on the used electroanalytical method and experimental conditions (varying deposition potential, varying time of oxidative and/or reductive accumulation, sample pretreatment i.e. acidification followed by purging it is possible to distinguish between different sulfur species. This work clearly shows a large potential of the electrochemistry as a powerful analytical technique for screening water quality regarding presence of different reduced sulfur species and their speciation between dissolved and colloidal/nanoparticle phases.
Directory of Open Access Journals (Sweden)
L. Gézero
2017-05-01
Full Text Available The digital terrain models (DTM assume an essential role in all types of road maintenance, water supply and sanitation projects. The demand of such information is more significant in developing countries, where the lack of infrastructures is higher. In recent years, the use of Mobile LiDAR Systems (MLS proved to be a very efficient technique in the acquisition of precise and dense point clouds. These point clouds can be a solution to obtain the data for the production of DTM in remote areas, due mainly to the safety, precision, speed of acquisition and the detail of the information gathered. However, the point clouds filtering and algorithms to separate “terrain points” from “no terrain points”, quickly and consistently, remain a challenge that has caught the interest of researchers. This work presents a method to create the DTM from point clouds collected by MLS. The method is based in two interactive steps. The first step of the process allows reducing the cloud point to a set of points that represent the terrain’s shape, being the distance between points inversely proportional to the terrain variation. The second step is based on the Delaunay triangulation of the points resulting from the first step. The achieved results encourage a wider use of this technology as a solution for large scale DTM production in remote areas.
Methods in the analysis of mobile robots behavior in unstructured environment
Mondoc, Alina; Dolga, Valer; Gorie, Nina
2012-11-01
A mobile robot can be described as a mechatronic system that must execute an application in a working environment. From mechatronic concept, the authors highlight mechatronic system structure based on its secondary function. Mobile robot will move, either in a known environment - structured environment may be described in time by an appropriate mathematical model or in an unfamiliar environment - unstructured - the random aspects prevail. Starting from a point robot must reach a START STOP point in the context of functional constraints imposed on the one hand, the application that, on the other hand, the working environment. The authors focus their presentation on unstructured environment. In this case the evolution of mobile robot is based on obtaining information in the work environment, their processing and integration results in action strategy. Number of sensory elements used is subject to optimization parameter. Starting from a known structure of mobile robot, the authors analyze the possibility of developing a mathematical model variants mathematical contact wheel - ground. It analyzes the various types of soil and the possibility of obtaining a "signature" on it based on sensory information. Theoretical aspects of the problem are compared to experimental results obtained in robot evolution. The mathematical model of the robot system allowed the simulation environment and its evolution in comparison with the experimental results estimated.
Johnston, James D; Magnusson, Brianna M; Eggett, Dennis; Collingwood, Scott C; Bernhardt, Scott A
2015-01-01
Residential temperature and humidity are associated with multiple health effects. Studies commonly use single-point measures to estimate indoor temperature and humidity exposures, but there is little evidence to support this sampling strategy. This study evaluated the relationship between single-point and continuous monitoring of air temperature, apparent temperature, relative humidity, and absolute humidity over four exposure intervals (5-min, 30-min, 24-hr, and 12-days) in 9 northern Utah homes, from March-June 2012. Three homes were sampled twice, for a total of 12 observation periods. Continuous data-logged sampling was conducted in homes for 2-3 wks, and simultaneous single-point measures (n = 114) were collected using handheld thermo-hygrometers. Time-centered single-point measures were moderately correlated with short-term (30-min) data logger mean air temperature (r = 0.76, β = 0.74), apparent temperature (r = 0.79, β = 0.79), relative humidity (r = 0.70, β = 0.63), and absolute humidity (r = 0.80, β = 0.80). Data logger 12-day means were also moderately correlated with single-point air temperature (r = 0.64, β = 0.43) and apparent temperature (r = 0.64, β = 0.44), but were weakly correlated with single-point relative humidity (r = 0.53, β = 0.35) and absolute humidity (r = 0.52, β = 0.39). Of the single-point RH measures, 59 (51.8%) deviated more than ±5%, 21 (18.4%) deviated more than ±10%, and 6 (5.3%) deviated more than ±15% from data logger 12-day means. Where continuous indoor monitoring is not feasible, single-point sampling strategies should include multiple measures collected at prescribed time points based on local conditions.
A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images.
Song, Zhiying; Jiang, Huiyan; Yang, Qiyao; Wang, Zhiguo; Zhang, Guoxu
2017-01-01
The PET and CT fusion image, combining the anatomical and functional information, has important clinical meaning. An effective registration of PET and CT images is the basis of image fusion. This paper presents a multithread registration method based on contour point cloud for 3D whole-body PET and CT images. Firstly, a geometric feature-based segmentation (GFS) method and a dynamic threshold denoising (DTD) method are creatively proposed to preprocess CT and PET images, respectively. Next, a new automated trunk slices extraction method is presented for extracting feature point clouds. Finally, the multithread Iterative Closet Point is adopted to drive an affine transform. We compare our method with a multiresolution registration method based on Mattes Mutual Information on 13 pairs (246~286 slices per pair) of 3D whole-body PET and CT data. Experimental results demonstrate the registration effectiveness of our method with lower negative normalization correlation (NC = -0.933) on feature images and less Euclidean distance error (ED = 2.826) on landmark points, outperforming the source data (NC = -0.496, ED = 25.847) and the compared method (NC = -0.614, ED = 16.085). Moreover, our method is about ten times faster than the compared one.
A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images
Directory of Open Access Journals (Sweden)
Zhiying Song
2017-01-01
Full Text Available The PET and CT fusion image, combining the anatomical and functional information, has important clinical meaning. An effective registration of PET and CT images is the basis of image fusion. This paper presents a multithread registration method based on contour point cloud for 3D whole-body PET and CT images. Firstly, a geometric feature-based segmentation (GFS method and a dynamic threshold denoising (DTD method are creatively proposed to preprocess CT and PET images, respectively. Next, a new automated trunk slices extraction method is presented for extracting feature point clouds. Finally, the multithread Iterative Closet Point is adopted to drive an affine transform. We compare our method with a multiresolution registration method based on Mattes Mutual Information on 13 pairs (246~286 slices per pair of 3D whole-body PET and CT data. Experimental results demonstrate the registration effectiveness of our method with lower negative normalization correlation (NC = −0.933 on feature images and less Euclidean distance error (ED = 2.826 on landmark points, outperforming the source data (NC = −0.496, ED = 25.847 and the compared method (NC = −0.614, ED = 16.085. Moreover, our method is about ten times faster than the compared one.
Comparative study of building footprint estimation methods from LiDAR point clouds
Rozas, E.; Rivera, F. F.; Cabaleiro, J. C.; Pena, T. F.; Vilariño, D. L.
2017-10-01
Building area calculation from LiDAR points is still a difficult task with no clear solution. Their different characteristics, such as shape or size, have made the process too complex to automate. However, several algorithms and techniques have been used in order to obtain an approximated hull. 3D-building reconstruction or urban planning are examples of important applications that benefit of accurate building footprint estimations. In this paper, we have carried out a study of accuracy in the estimation of the footprint of buildings from LiDAR points. The analysis focuses on the processing steps following the object recognition and classification, assuming that labeling of building points have been previously performed. Then, we perform an in-depth analysis of the influence of the point density over the accuracy of the building area estimation. In addition, a set of buildings with different size and shape were manually classified, in such a way that they can be used as benchmark.
Calculation Method for Equilibrium Points in Dynamical Systems Based on Adaptive Sinchronization
Directory of Open Access Journals (Sweden)
Manuel Prian Rodríguez
2017-12-01
Full Text Available In this work, a control system is proposed as an equivalent numerical procedure whose aim is to obtain the natural equilibrium points of a dynamical system. These equilibrium points may be employed later as setpoint signal for different control techniques. The proposed procedure is based on the adaptive synchronization between an oscillator and a reference model driven by the oscillator state variables. A stability analysis is carried out and a simplified algorithm is proposed. Finally, satisfactory simulation results are shown.
Two-Step Robust Diagnostic Method for Identification of Multiple High Leverage Points
Arezoo Bagheri; Habshah Midi; A. H.M.R. Imon
2009-01-01
Problem statement: High leverage points are extreme outliers in the X-direction. In regression analysis, the detection of these leverage points becomes important due to their arbitrary large effects on the estimations as well as multicollinearity problems. Mahalanobis Distance (MD) has been used as a diagnostic tool for identification of outliers in multivariate analysis where it finds the distance between normal and abnormal groups of the data. Since the computation of MD relies on non-robus...
On the String Averaging Method for Sparse Common Fixed Points Problems.
Censor, Yair; Segal, Alexander
2009-07-01
We study the common fixed point problem for the class of directed operators. This class is important because many commonly used nonlinear operators in convex optimization belong to it. We propose a definition of sparseness of a family of operators and investigate a string-averaging algorithmic scheme that favorably handles the common fixed points problem when the family of operators is sparse. The convex feasibility problem is treated as a special case and a new subgradient projections algorithmic scheme is obtained.
Implications of construction method and spatial scale on measures of the built environment.
Strominger, Julie; Anthopolos, Rebecca; Miranda, Marie Lynn
2016-04-28
Research surrounding the built environment (BE) and health has resulted in inconsistent findings. Experts have identified the need to examine methodological choices, such as development and testing of BE indices at varying spatial scales. We sought to examine the impact of construction method and spatial scale on seven measures of the BE using data collected at two time points. The Children's Environmental Health Initiative conducted parcel-level assessments of 57 BE variables in Durham, NC (parcel N = 30,319). Based on a priori defined variable groupings, we constructed seven mutually exclusive BE domains (housing damage, property disorder, territoriality, vacancy, public nuisances, crime, and tenancy). Domain-based indices were developed according to four different index construction methods that differentially account for number of parcels and parcel area. Indices were constructed at the census block level and two alternative spatial scales that better depict the larger neighborhood context experienced by local residents: the primary adjacency community and secondary adjacency community. Spearman's rank correlation was used to assess if indices and relationships among indices were preserved across methods. Territoriality, public nuisances, and tenancy were weakly to moderately preserved across methods at the block level while all other indices were well preserved. Except for the relationships between public nuisances and crime or tenancy, and crime and housing damage or territoriality, relationships among indices were poorly preserved across methods. The number of indices affected by construction method increased as spatial scale increased, while the impact of construction method on relationships among indices varied according to spatial scale. We found that the impact of construction method on BE measures was index and spatial scale specific. Operationalizing and developing BE measures using alternative methods at varying spatial scales before connecting to
Improved Full-Newton Step O(nL) Infeasible Interior-Point Method for Linear Optimization
Gu, G.; Mansouri, H.; Zangiabadi, M.; Bai, Y.Q.; Roos, C.
2009-01-01
We present several improvements of the full-Newton step infeasible interior-point method for linear optimization introduced by Roos (SIAM J. Optim. 16(4):1110–1136, 2006). Each main step of the method consists of a feasibility step and several centering steps. We use a more natural feasibility step,
DEFF Research Database (Denmark)
Cordua, Knud Skou; Hansen, Thomas Mejer; Lange, Katrine
.e. marginals) of the training image and a subsurface model are matched in order to obtain a solution with the same multiple-point statistics as the training image. Sequential Gibbs sampling is a simulation strategy that provides an efficient way of applying sequential simulation based algorithms as a priori...... proven to be an efficient way of obtaining multiple realizations that honor the same multiple-point statistics as the training image. The frequency matching method provides an alternative way of formulating multiple-point-based a priori models. In this strategy the pattern frequency distributions (i...... information in probabilistic inverse problems. Unfortunately, when this strategy is applied with the multiple-point-based simulation algorithm SNESIM the reproducibility of training image patterns is violated. In this study we suggest to combine sequential simulation with the frequency matching method...
National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains human-use resource data (e.g., abandoned vessels, access points, airports, aquaculture sites, archaeological sites, artificial reefs, beaches,...
National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains vector points and lines representing human-use resource data for airports, marinas, and mining sites in Northwest Arctic, Alaska....
National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for Steller sea lions and polar bears in Northwest Arctic, Alaska. Vector points in this data set represent...
National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains human-use resource point data for access sites, airports, aquaculture sites, beaches, boat ramps, marinas, coast guard facilities, oil...
National Research Council Canada - National Science Library
Brewster, J.D; Neumann, D; Ostertag, S.K; Loseto, L.L
2016-01-01
.... Shingle Point, YT is a traditional and modern day fishing and hunting community for Western Arctic indigenous people and is part of a marine protected area and the Inuvialuit Settlement Region (ISR...
National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains human-use resource data for abandoned vessels, access points, airports, archaeological sites, artifical reefs, beaches, boat ramps,...
National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains human-use resource data for abandoned vessels, access points, airports, aquaculture sites, beaches, boat ramps, coast guard stations, ferries,...
National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains human-use resource data for access points, aquaculture sites, airports, artificial reefs, boat ramps, coast guard stations, heliports,...
Habit control during growth on GaN point seed crystals by Na-flux method
Honjo, Masatomo; Imanishi, Masayuki; Imabayashi, Hiroki; Nakamura, Kosuke; Murakami, Kosuke; Matsuo, Daisuke; Maruyama, Mihoko; Imade, Mamoru; Yoshimura, Masashi; Mori, Yusuke
2017-01-01
The formation of the pyramidal habit is one of the requirements for the dramatic reduction of dislocations during growth on a tiny GaN seed called a “point seed”. In this study, we focus on controlling the growth habit to form a pyramidal shape in order to reduce the number of dislocations in the c-growth sector during growth on GaN point seeds. High temperature growth was found to change the growth habit from the truncated pyramidal shape to the pyramidal shape. As a result, the number of dislocations in the c-growth sector tended to decrease with increasing growth temperature.
Vannecke, T P W; Lampens, D R A; Ekama, G A; Volcke, E I P
2015-01-01
Simple titration methods certainly deserve consideration for on-site routine monitoring of volatile fatty acid (VFA) concentration and alkalinity during anaerobic digestion (AD), because of their simplicity, speed and cost-effectiveness. In this study, the 5 and 8 pH point titration methods for measuring the VFA concentration and carbonate system alkalinity (H2CO3*-alkalinity) were assessed and compared. For this purpose, synthetic solutions with known H2CO3*-alkalinity and VFA concentration as well as samples from anaerobic digesters treating three different kind of solid wastes were analysed. The results of these two related titration methods were verified with photometric and high-pressure liquid chromatography measurements. It was shown that photometric measurements lead to overestimations of the VFA concentration in the case of coloured samples. In contrast, the 5 pH point titration method provides an accurate estimation of the VFA concentration, clearly corresponding with the true value. Concerning the H2CO3*-alkalinity, the most accurate and precise estimations, showing very similar results for repeated measurements, were obtained using the 8 pH point titration. Overall, it was concluded that the 5 pH point titration method is the preferred method for the practical monitoring of AD of solid wastes due to its robustness, cost efficiency and user-friendliness.
Apparatus and methods for determining at least one characteristic of a proximate environment
Novascone, Stephen R [Idaho Falls, ID; West, Phillip B [Idaho Falls, ID; Anderson, Michael J [Troy, ID
2008-04-15
Methods and an apparatus for determining at least one characteristic of an environment are disclosed. A vibrational energy may be imparted into an environment and a magnitude of damping of the vibrational energy may be measured and at least one characteristic of the environment may be determined. Particularly, a vibratory source may be operated and coupled to an environment. At least one characteristic of the environment may be determined based on a shift in at least one steady-state frequency of oscillation of the vibratory source. An apparatus may include at least one vibratory source and a structure for positioning the at least one vibratory source proximate to an environment. Further, the apparatus may include an analysis device for determining at least one characteristic of the environment based at least partially upon shift in a steady-state oscillation frequency of the vibratory source for the given impetus.
Point-of-use filtration method for the prevention of fungal contamination of hospital water.
Warris, A.; Onken, A.; Gaustad, P.; Janssen, W.; Lee, H. van der; Verweij, P.E.; Abrahamsen, T.G.
2010-01-01
Published data implicate hospital water as a potential source of opportunistic fungi that may cause life-threatening infections in immunocompromised patients. Point-of-care filters are known to retain bacteria, but little is known about their efficacy in reducing exposure to moulds. We investigated
Doing Close-Relative Research: Sticking Points, Method and Ethical Considerations
Degabriele Pace, Geraldine
2015-01-01
Doing insider research can raise many problematic issues, particularly if the insiders are also close relatives. This paper deals with complexities arising from research which is participatory in nature. Thus, this paper seeks to describe the various sticking points that were encountered by the researcher when she decided to embark on insider…
Tian, Zhen; Jia, Xun; Jiang, Steve B
2013-01-01
In the treatment plan optimization for intensity modulated radiation therapy (IMRT), dose-deposition coefficient (DDC) matrix is often pre-computed to parameterize the dose contribution to each voxel in the volume of interest from each beamlet of unit intensity. However, due to the limitation of computer memory and the requirement on computational efficiency, in practice matrix elements of small values are usually truncated, which inevitably compromises the quality of the resulting plan. A fixed-point iteration scheme has been applied in IMRT optimization to solve this problem, which has been reported to be effective and efficient based on the observations of the numerical experiments. In this paper, we aim to point out the mathematics behind this scheme and to answer the following three questions: 1) whether the fixed-point iteration algorithm converges or not? 2) when it converges, whether the fixed point solution is same as the original solution obtained with the complete DDC matrix? 3) if not the same, wh...
Shi, Yixun
2009-01-01
Based on a sequence of points and a particular linear transformation generalized from this sequence, two recent papers (E. Mauch and Y. Shi, "Using a sequence of number pairs as an example in teaching mathematics". Math. Comput. Educ., 39 (2005), pp. 198-205; Y. Shi, "Case study projects for college mathematics courses based on a particular…
Numerical Time Integration Methods for a Point Absorber Wave Energy Converter
DEFF Research Database (Denmark)
Zurkinden, Andrew Stephen; Kramer, Morten
2012-01-01
The objective of this abstract is to provide a review of models for motion simulation of marine structures with a special emphasis on wave energy converters. The time-domain model is applied to a point absorber system working in pitch mode only. The device is similar to the well-known Wavestar...
Direct measurement of surface-state conductance by microscopic four-point probe method
DEFF Research Database (Denmark)
Hasegawa, S.; Shiraki, I.; Tanikawa, T.
2002-01-01
For in situ measurements of local electrical conductivity of well defined crystal surfaces in ultrahigh vacuum, we have developed microscopic four-point probes with a probe spacing of several micrometres, installed in a scanning-electron - microscope/electron-diffraction chamber. The probe...
Vu, Tinh Thi; Kiesel, Jens; Guse, Bjoern; Fohrer, Nicola
2017-04-01
The damming of rivers causes one of the most considerable impacts of our society on the riverine environment. More than 50% of the world's streams and rivers are currently impounded by dams before reaching the oceans. The construction of dams is of high importance in developing and emerging countries, i.e. for power generation and water storage. In the Vietnamese Vu Gia - Thu Bon Catchment (10,350 km2), about 23 dams were built during the last decades and store approximately 2,156 billion m3 of water. The water impoundment in 10 dams in upstream regions amounts to 17 % of the annual discharge volume. It is expected that impacts from these dams have altered the natural flow regime. However, up to now it is unclear how the flow regime was altered. For this, it needs to be investigated at what point in time these changes became significant and detectable. Many approaches exist to detect changes in stationary or consistency of hydrological records using statistical analysis of time series for the pre- and post-dam period. The objective of this study is to reliably detect and assess hydrologic shifts occurring in the discharge regime of an anthropogenically influenced river basin, mainly affected by the construction of dams. To achieve this, we applied nine available change-point tests to detect change in mean, variance and median on the daily and annual discharge records at two main gauges of the basin. The tests yield conflicting results: The majority of tests found abrupt changes that coincide with the damming-period, while others did not. To interpret how significant the changes in discharge regime are, and to which different properties of the time series each test responded, we calculated Indicators of Hydrologic Alteration (IHAs) for the time period before and after the detected change points. From the results, we can deduce, that the change point tests are influenced in different levels by different indicator groups (magnitude, duration, frequency, etc) and that
Directory of Open Access Journals (Sweden)
Jinhong Noh
2016-04-01
Full Text Available Obstacle avoidance methods require knowledge of the distance between a mobile robot and obstacles in the environment. However, in stochastic environments, distance determination is difficult because objects have position uncertainty. The purpose of this paper is to determine the distance between a robot and obstacles represented by probability distributions. Distance determination for obstacle avoidance should consider position uncertainty, computational cost and collision probability. The proposed method considers all of these conditions, unlike conventional methods. It determines the obstacle region using the collision probability density threshold. Furthermore, it defines a minimum distance function to the boundary of the obstacle region with a Lagrange multiplier method. Finally, it computes the distance numerically. Simulations were executed in order to compare the performance of the distance determination methods. Our method demonstrated a faster and more accurate performance than conventional methods. It may help overcome position uncertainty issues pertaining to obstacle avoidance, such as low accuracy sensors, environments with poor visibility or unpredictable obstacle motion.
Vasileios Psychas, Dimitrios; Delikaraoglou, Demitris
2016-04-01
The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and much more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements can allow for robust simultaneous estimation of static or mobile user states including more parameters such as real-time tropospheric biases and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS), as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the time convergence it takes to achieve geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used in order to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented and useful conclusions and recommendations for further research are drawn. As shown, data fusion from GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant nowadays resulting in a position accuracy increase (mostly in the less favorable East direction) and a large reduction of convergence
Evaluation of a multi-point method for determining acoustic impedance
Jones, Michael G.; Parrott, Tony L.
1989-01-01
A multipoint method for determining acoustic impedance was evaluated in comparison with the traditional standing wave and two-microphone methods using 30 test samples covering the reflection factor magnitude range 0.004-0.999. The multipoint method is shown to combine the strengths of the standing wave and two-microphone methods while avoiding some of their inherent weaknesses. In particular, the results obtained suggest that the multipoint method will be less subject to flow induced random error than the two-microphone method in the presence of significant broadband noise levels associated with mean flow.
Hashemi, Seyyedhossein; Javaherian, Abdolrahim; Ataee-pour, Majid; Khoshdel, Hossein
2014-12-01
Facies models try to explain facies architectures which have a primary control on the subsurface heterogeneities and the fluid flow characteristics of a given reservoir. In the process of facies modeling, geostatistical methods are implemented to integrate different sources of data into a consistent model. The facies models should describe facies interactions; the shape and geometry of the geobodies as they occur in reality. Two distinct categories of geostatistical techniques are two-point and multiple-point (geo) statistics (MPS). In this study, both of the aforementioned categories were applied to generate facies models. A sequential indicator simulation (SIS) and a truncated Gaussian simulation (TGS) represented two-point geostatistical methods, and a single normal equation simulation (SNESIM) selected as an MPS simulation representative. The dataset from an extremely channelized carbonate reservoir located in southwest Iran was applied to these algorithms to analyze their performance in reproducing complex curvilinear geobodies. The SNESIM algorithm needs consistent training images (TI) in which all possible facies architectures that are present in the area are included. The TI model was founded on the data acquired from modern occurrences. These analogies delivered vital information about the possible channel geometries and facies classes that are typically present in those similar environments. The MPS results were conditioned to both soft and hard data. Soft facies probabilities were acquired from a neural network workflow. In this workflow, seismic-derived attributes were implemented as the input data. Furthermore, MPS realizations were conditioned to hard data to guarantee the exact positioning and continuity of the channel bodies. A geobody extraction workflow was implemented to extract the most certain parts of the channel bodies from the seismic data. These extracted parts of the channel bodies were applied to the simulation workflow as hard data. This
Winzor, Donald J
2004-02-15
As a response to recent expression of concern about possible unreliability of vapor pressure deficit measurements (K. Kiyosawa, Biophys. Chem. 104 (2003) 171-188), the results of published studies on the temperature dependence of the osmotic pressure of aqueous polyethylene glycol solutions are shown to account for the observed discrepancies between osmolality estimates obtained by freezing point depression and vapor pressure deficit osmometry--the cause of the concern.
Point Measurements of Fermi Velocities by a Time-of-Flight Method
DEFF Research Database (Denmark)
Falk, David S.; Henningsen, J. O.; Skriver, Hans Lomholt
1972-01-01
obtained one component of the velocity along half the circumference of the centrally symmetric orbit for B→∥[100]. The results are in agreement with current models for the Fermi surface. For B→∥[011], the electrons involved are not moving in a symmetry plane of the Fermi surface. In such cases one cannot...... masses for symmetry orbits of the Fermi surface, but differing slightly at general points. The comparison favors the Fourier model....
Directory of Open Access Journals (Sweden)
Marwan Abukhaled
2013-01-01
Full Text Available The variational iteration method is applied to solve a class of nonlinear singular boundary value problems that arise in physiology. The process of the method, which produces solutions in terms of convergent series, is explained. The Lagrange multipliers needed to construct the correctional functional are found in terms of the exponential integral and Whittaker functions. The method easily overcomes the obstacle of singularities. Examples will be presented to test the method and compare it to other existing methods in order to confirm fast convergence and significant accuracy.
Sugawara, Takuya; Ogihara, Yuki; Sakamoto, Yuji
2016-01-20
The point-based method and fast-Fourier-transform-based method are commonly used for calculation methods of computer-generation holograms. This paper proposes a novel fast calculation method for a patch model, which uses the point-based method. The method provides a calculation time that is proportional to the number of patches but not to that of the point light sources. This means that the method is suitable for calculating a wide area covered by patches quickly. Experiments using a graphics processing unit indicated that the proposed method is about 8 times or more faster than the ordinary point-based method.
Organizational Strategy and Business Environment Effects Based on a Computation Method
Reklitis, Panagiotis; Konstantopoulos, Nikolaos; Trivellas, Panagiotis
2007-12-01
According to many researchers of organizational theory, a great number of problems encountered by the manufacturing firms are due to their ineffectiveness to respond to significant changes of their external environment and align their competitive strategy accordingly. From this point of view, the pursuit of the appropriate generic strategy is vital for firms facing a dynamic and highly competitive environment. In the present paper, we adopt Porter's typology to operationalise organizational strategy (cost leadership, innovative and marketing differentiation, and focus) considering changes in the external business environment (dynamism, complexity and munificence). Although simulation of social events is a quite difficult task, since there are so many considerations (not all well understood) involved, in the present study we developed a dynamic system based on the conceptual framework of strategy-environment associations.
Energy Technology Data Exchange (ETDEWEB)
Zhu, Yong; Jiang, Wan-lu; Kong, Xiang-dong [Yanshan University, Hebei (China)
2017-02-15
In mechanical fault diagnosis and condition monitoring, extracting and eliminating the trend term of machinery signal are necessary. In this paper, an adaptive extraction method for trend term of machinery signal based on Extreme-point symmetric mode decomposition (ESMD) was proposed. This method fully utilized ESMD, including the self-adaptive decomposition feature and optimal fitting strategy. The effectiveness and practicability of this method are tested through simulation analysis and measured data validation. Results indicate that this method can adaptively extract various trend terms hidden in machinery signal, and has commendable self-adaptability. Moreover, the extraction results are better than those of empirical mode decomposition.
Energy Technology Data Exchange (ETDEWEB)
Baldwin, J.M. [Sandia National Labs., Livermore, CA (United States). Integrated Manufacturing Systems
1996-04-01
The Dimensional Inspection Techniques Specification (DITS) Project is an ongoing effort to produce tools and guidelines for optimum sampling and data analysis of machined parts, when measured using point-sample methods of dimensional metrology. This report is a compilation of results of a literature survey, conducted in support of the DITS. Over 160 citations are included, with author abstracts where available.
DEFF Research Database (Denmark)
Khoshfetrat Pakazad, Sina; Hansson, Anders; Andersen, Martin S.
2017-01-01
In this paper, we propose a distributed algorithm for solving coupled problems with chordal sparsity or an inherent tree structure which relies on primal–dual interior-point methods. We achieve this by distributing the computations at each iteration, using message-passing. In comparison to existi...
Solving a system of Volterra-Fredholm integral equations of the second kind via fixed point method
Hasan, Talaat I.; Salleh, Shaharuddin; Sulaiman, Nejmaddin A.
2015-12-01
In this paper, we consider the system of Volterra-Fredholm integral equations of the second kind (SVFI-2). We propose fixed point method (FPM) to solve SVFI-2. In addition, a few theorems and new algorithm is introduced. They are supported by numerical examples and simulations using Matlab. The results are reasonably good when compared with the exact solutions.
Czech Academy of Sciences Publication Activity Database
Pastorek, Lukáš; Sobol, Margaryta; Hozák, Pavel
2016-01-01
Roč. 146, č. 4 (2016), s. 391-406 ISSN 0948-6143 R&D Projects: GA TA ČR(CZ) TE01020118; GA ČR GA15-08738S; GA MŠk(CZ) ED1.1.00/02.0109; GA MŠk(CZ) LM2015062 Grant - others:Human Frontier Science Program(FR) RGP0017/2013 Institutional support: RVO:68378050 Keywords : Colocalization * Quantitative analysis * Pointed patterns * Transmission electron microscopy * Manders' coefficients * Immunohistochemistry Subject RIV: EB - Genetics ; Molecular Biology Impact factor: 2.553, year: 2016
Payrau, Bernard; Qu?r?, Nadine; Bois, Danis
2011-01-01
Background A first study on vascular fasciatherapy enabled us to observe the turning of a turbulent blood flow into a laminar one, and a questioning on the process involved in this transformation emerged. The first question was: What is the nature of artery from the point of view of fascia? And a second question was: Which is the link permitting the observed process working in our first study? So this time, we are investigating a specific aspect of the big question that polarizes the interest...
Directory of Open Access Journals (Sweden)
Dominique Placko
2016-10-01
Full Text Available The distributed point source method, or DPSM, developed in the last decade has been used for solving various engineering problems—such as elastic and electromagnetic wave propagation, electrostatic, and fluid flow problems. Based on a semi-analytical formulation, the DPSM solution is generally built by superimposing the point source solutions or Green’s functions. However, the DPSM solution can be also obtained by superimposing elemental solutions of volume sources having some source density called the equivalent source density (ESD. In earlier works mostly point sources were used. In this paper the DPSM formulation is modified to introduce a new kind of ESD, replacing the classical single point source by a family of point sources that are referred to as quantum sources. The proposed formulation with these quantum sources do not change the dimension of the global matrix to be inverted to solve the problem when compared with the classical point source-based DPSM formulation. To assess the performance of this new formulation, the ultrasonic field generated by a circular planer transducer was compared with the classical DPSM formulation and analytical solution. The results show a significant improvement in the near field computation.
McGraner, Kristin L.; Robbins, Daniel
2010-01-01
Although many research questions in English education demand the use of qualitative methods, this paper will briefly explore how English education researchers and doctoral students may use statistics and quantitative methods to inform, complement, and/or deepen their inquiries. First, the authors will provide a general overview of the survey areas…
Calculation of condition indices for road structures using a deduct points method
CSIR Research Space (South Africa)
Roux, MP
2016-07-01
Full Text Available ) and relevancy (R) rating. The DER-rating method has been included in the Draft TMH19 Manual for the Visual Assessment of Road Structures. The D, E, and R ratings are used to calculate condition indices for road structures. The method used is a deduct...
Electrostatics of a Point Charge between Intersecting Planes: Exact Solutions and Method of Images
Mei, W. N.; Holloway, A.
2005-01-01
In this work, the authors present a commonly used example in electrostatics that could be solved exactly in a conventional manner, yet expressed in a compact form, and simultaneously work out special cases using the method of images. Then, by plotting the potentials and electric fields obtained from these two methods, the authors demonstrate that…
Comparison of methods for estimating density of forest songbirds from point counts
Jennifer L. Reidy; Frank R. Thompson; J. Wesley. Bailey
2011-01-01
New analytical methods have been promoted for estimating the probability of detection and density of birds from count data but few studies have compared these methods using real data. We compared estimates of detection probability and density from distance and time-removal models and survey protocols based on 5- or 10-min counts and outer radii of 50 or 100 m. We...
Directory of Open Access Journals (Sweden)
Du Wei-Shih
2011-01-01
Full Text Available Abstract In this paper, we introduce a new approach method to find a common element in the intersection of the set of the solutions of a finite family of equilibrium problems and the set of fixed points of a nonexpansive mapping in a real Hilbert space. Under appropriate conditions, some strong convergence theorems are established. The results obtained in this paper are new, and a few examples illustrating these results are given. Finally, we point out that some 'so-called' mixed equilibrium problems and generalized equilibrium problems in the literature are still usual equilibrium problems. 2010 Mathematics Subject Classification: 47H09; 47H10, 47J25.
Energy Technology Data Exchange (ETDEWEB)
Rider, M.J.; Castro, C.A.; Garcia, A.V. [State University of Campinas (Brazil). Electric Energy Systems Dept.; Paucar, V.L. [Federal University of Maranhao (Brazil). Electrical Engineering Dept.
2004-07-01
A method for computing the minimum active power loss in competitive electric power markets is proposed. The active power loss minimisation problem is formulated as an optimal power flow (OPF) with equality and inequality nonlinear constraints which take into account the power system security. The OPF has been solved using the multiple predictor-corrector interior-point method (MPC) of the family of higher-order interior-point methods, enhanced with a procedure for step-length computation during Newton iterations. The utilisation of the proposed enhanced MPC leads to convergence with a smaller number of iterations and better computational times than some results reported in the literature. An efficient computation of the primal and dual step-sizes is capable of reducing the primal and dual objective function errors, respectively, assuring continuously decreasing errors during the iterations of the interior-point method procedure. The proposed method has been simulated for several IEEF- test systems and two real systems including a 464 bus configuration of the interconnected Peruvian power system, and a 2256 bus scenario of the South-Southeast interconnected Brazilian system. Results of the tests have shown that the convergence is facilitated and the number of iterations may be small. (author)
Treatment of phantom pain with contralateral injection into tender points: a new method of treatment
Directory of Open Access Journals (Sweden)
Alaa A El Aziz Labeeb
2015-01-01
Conclusion Contralateral injections of 1 ml of 0.25% bupivacaine in the myofascial hyperalgesic areas attenuated phantom limb pain and affected phantom limb sensation. Our study gives a basis of a new method of management of that kind of severe pain to improve the method of rehabilitation of amputee. However, further longitudinal studies with larger number of patients are needed to confirm our study.
DEFF Research Database (Denmark)
Barfod, Adrian; Straubhaar, Julien; Høyer, Anne-Sophie
2017-01-01
Creating increasingly realistic hydrological models involves the inclusion of additional geological and geophysical data in the hydrostratigraphic modelling procedure. Using Multiple Point Statistics (MPS) for stochastic hydrostratigraphic modelling provides a degree of flexibility that allows......2. The comparison of the stochastic hydrostratigraphic MPS models is carried out in an elaborate scheme of visual inspection, mathematical similarity and consistency with boreholes. Using the Kasted survey data, a practical example for modelling new survey areas is presented. A cognitive...... soft data variable. The computation time of 2-3 h for snesim was in between DS and iqsim. The snesim implementation used here is part of the Stanford Geostatistical Modeling Software, or SGeMS. The snesim setup was not trivial, with numerous parameter settings, usage of multiple grids and a search tree...
Directory of Open Access Journals (Sweden)
Phayap Katchang
2010-01-01
Full Text Available The purpose of this paper is to investigate the problem of finding a common element of the set of solutions for mixed equilibrium problems, the set of solutions of the variational inclusions with set-valued maximal monotone mappings and inverse-strongly monotone mappings, and the set of fixed points of a family of finitely nonexpansive mappings in the setting of Hilbert spaces. We propose a new iterative scheme for finding the common element of the above three sets. Our results improve and extend the corresponding results of the works by Zhang et al. (2008, Peng et al. (2008, Peng and Yao (2009, as well as Plubtieng and Sriprad (2009 and some well-known results in the literature.
Methods of choosing the best methods of building a dynamic visualization environment
Directory of Open Access Journals (Sweden)
В.А. Бородін
2009-02-01
Full Text Available In work is offered the methods of the choice of the most optimum combination of the methods which provides the building of the visual image of the dynamic scenes on the displays of real-time ANGS, which defines the optimal percent of the use for each of m software programs, that are in complex, n methods, optimizing velocity of the image of the visual image. The calculation of the ratio is carried out using the details of this problem to the linear programming problem. In work is offered the calculation of the optimum methods for building a visual image of a dynamic scenes for a specific task.
Xu, Jun; Dang, Chao; Kong, Fan
2017-10-01
This paper presents a new method for efficient structural reliability analysis. In this method, a rotational quasi-symmetric point method (RQ-SPM) is proposed for evaluating the fractional moments of the performance function. Then, the derivation of the performance function's probability density function (PDF) is carried out based on the maximum entropy method in which constraints are specified in terms of fractional moments. In this regard, the probability of failure can be obtained by a simple integral over the performance function's PDF. Six examples, including a finite element-based reliability analysis and a dynamic system with strong nonlinearity, are used to illustrate the efficacy of the proposed method. All the computed results are compared with those by Monte Carlo simulation (MCS). It is found that the proposed method can provide very accurate results with low computational effort.
DEFF Research Database (Denmark)
Hasheminamin, Maryam; Agelidis, Vassilios; Ahmadi, Abdollah
2018-01-01
Voltage rise (VR) due to reverse power flow is an important obstacle for high integration of Photovoltaic (PV) into residential networks. This paper introduces and elaborates a novel methodology of an index-based single-point-reactive power-control (SPRPC) methodology to mitigate voltage rise by ...... system with high r/x ratio. Efficacy, effectiveness and cost study of SPRPC is compared to droop control to evaluate its advantages....... by absorbing adequate reactive power from one selected point. The proposed index utilizes short circuit analysis to select the best point to apply this Volt/Var control method. SPRPC is supported technically and financially by distribution network operator that makes it cost effective, simple and efficient...
Energy Technology Data Exchange (ETDEWEB)
Tuvshinjargal, Doopalam; Lee, Deok Jin [Kunsan National University, Gunsan (Korea, Republic of)
2015-06-15
In this paper, an efficient dynamic reactive motion planning method for an autonomous vehicle in a dynamic environment is proposed. The purpose of the proposed method is to improve the robustness of autonomous robot motion planning capabilities within dynamic, uncertain environments by integrating a virtual plane-based reactive motion planning technique with a sensor fusion-based obstacle detection approach. The dynamic reactive motion planning method assumes a local observer in the virtual plane, which allows the effective transformation of complex dynamic planning problems into simple stationary ones proving the speed and orientation information between the robot and obstacles. In addition, the sensor fusion-based obstacle detection technique allows the pose estimation of moving obstacles using a Kinect sensor and sonar sensors, thus improving the accuracy and robustness of the reactive motion planning approach. The performance of the proposed method was demonstrated through not only simulation studies but also field experiments using multiple moving obstacles in hostile dynamic environments.
DEFF Research Database (Denmark)
Barfod, Adrian; Straubhaar, Julien; Høyer, Anne-Sophie
2017-01-01
the incorporation of elaborate datasets and provides a framework for stochastic hydrostratigraphic modelling. This paper focuses on comparing three MPS methods: snesim, DS and iqsim. The MPS methods are tested and compared on a real-world hydrogeophysical survey from Kasted in Denmark, which covers an area of 45 km...... soft data variable. The computation time of 2-3 h for snesim was in between DS and iqsim. The snesim implementation used here is part of the Stanford Geostatistical Modeling Software, or SGeMS. The snesim setup was not trivial, with numerous parameter settings, usage of multiple grids and a search tree...
Directory of Open Access Journals (Sweden)
Wei Liu
2016-12-01
Full Text Available High-accuracy surface measurement of large aviation parts is a significant guarantee of aircraft assembly with high quality. The result of boundary measurement is a significant parameter for aviation-part measurement. This paper proposes a measurement method for accurately measuring the surface and boundary of aviation part with feature compression extraction and directed edge-point criterion. To improve the measurement accuracy of both the surface and boundary of large parts, extraction method of global boundary and feature analysis of local stripe are combined. The center feature of laser stripe is obtained with high accuracy and less calculation using a sub-pixel centroid extraction method based on compress processing. This method consists of a compressing process of images and judgment criterion of laser stripe centers. An edge-point extraction method based on directed arc-length criterion is proposed to obtain accurate boundary. Finally, a high-precision reconstruction of aerospace part is achieved. Experiments are performed both in a laboratory and an industrial field. The physical measurements validate that the mean distance deviation of the proposed method is 0.47 mm. The results of the field experimentation show the validity of the proposed method.
An automated method for the evaluation of the pointing accuracy of Sun-tracking devices
Baumgartner, Dietmar J.; Pötzi, Werner; Freislich, Heinrich; Strutzmann, Heinz; Veronig, Astrid M.; Rieder, Harald E.
2017-03-01
The accuracy of solar radiation measurements, for direct (DIR) and diffuse (DIF) radiation, depends significantly on the precision of the operational Sun-tracking device. Thus, rigid targets for instrument performance and operation have been specified for international monitoring networks, e.g., the Baseline Surface Radiation Network (BSRN) operating under the auspices of the World Climate Research Program (WCRP). Sun-tracking devices that fulfill these accuracy requirements are available from various instrument manufacturers; however, none of the commercially available systems comprise an automatic accuracy control system allowing platform operators to independently validate the pointing accuracy of Sun-tracking sensors during operation. Here we present KSO-STREAMS (KSO-SunTRackEr Accuracy Monitoring System), a fully automated, system-independent, and cost-effective system for evaluating the pointing accuracy of Sun-tracking devices. We detail the monitoring system setup, its design and specifications, and the results from its application to the Sun-tracking system operated at the Kanzelhöhe Observatory (KSO) Austrian radiation monitoring network (ARAD) site. The results from an evaluation campaign from March to June 2015 show that the tracking accuracy of the device operated at KSO lies within BSRN specifications (i.e., 0.1° tracking accuracy) for the vast majority of observations (99.8 %). The evaluation of manufacturer-specified active-tracking accuracies (0.02°), during periods with direct solar radiation exceeding 300 W m-2, shows that these are satisfied in 72.9 % of observations. Tracking accuracies are highest during clear-sky conditions and on days where prevailing clear-sky conditions are interrupted by frontal movement; in these cases, we obtain the complete fulfillment of BSRN requirements and 76.4 % of observations within manufacturer-specified active-tracking accuracies. Limitations to tracking surveillance arise during overcast conditions and
Abdul Manan, Muhammad Marizwan
2014-09-01
This paper uses data from an observational study, conducted at access points in straight sections of primary roads in Malaysia in 2012, to investigate the effects of motorcyclists' behavior and road environment attributes on the occurrence of serious traffic conflicts involving motorcyclists entering primary roads via access points. In order to handle the unobserved heterogeneity in the small sample data size, this study applies mixed effects logistic regression with multilevel bootstrapping. Two statistically significant models (Model 2 and Model 3) are produced, with 2 levels of random effect parameters, i.e. motorcyclists' attributes and behavior at Level 1, and road environment attributes at Level 2. Among all the road environment attributes tested, the traffic volume and the speed limit are found to be statistically significant, only contributing to 26-29% of the variations affecting the traffic conflict outcome. The implication is that 71-74% of the unmeasured or undescribed attributes and behavior of motorcyclists still have an importance in predicting the outcome: a serious traffic conflict. As for the fixed effect parameters, both models show that the risk of motorcyclists being involved in a serious traffic conflict is 2-4 times more likely if they accept a shorter gap to a single approaching vehicle (time lag road from the access point. A road environment factor, such as a narrow lane (seen in Model 2), and a behavioral factor, such as stopping at the stop line (seen in Model 3), also influence the occurrence of a serious traffic conflict compared to those entering into a wider lane road and without stopping at the stop line, respectively. A discussion of the possible reasons for this seemingly strange result, including a recommendation for further research, concludes the paper. Copyright © 2014 Elsevier Ltd. All rights reserved.
GOCE in ocean modelling - Point mass method applied on GOCE gravity gradients
DEFF Research Database (Denmark)
Herceg, Matija
2009-01-01
This presentation is an introduction to my Ph.D project. The main objective of the study is to improve the methodology for combining GOCE gravity field models with satellite altimetry to derive optimal dynamic ocean topography models for oceanography. Here a method for geoid determination using...
A Bayesian MCMC method for point process models with intractable normalising constants
DEFF Research Database (Denmark)
Berthelsen, Kasper Klitgaard; Møller, Jesper
2004-01-01
We present new methodology for drawing samples from a posterior distribution when the likelihood function is only specified up to a normalising constant. Our method is "on-line" as compared with alternative approaches to the problem which require "off-line" computations. Since it is needed...
A method for finding the ridge between saddle points applied to rare event rate estimates
DEFF Research Database (Denmark)
Maronsson, Jon Bergmann; Jónsson, Hannes; Vegge, Tejs
2012-01-01
to the path. The method is applied to Al adatom diffusion on the Al(100) surface to find the ridge between 2-, 3- and 4-atom concerted displacements and hop mechanisms. A correction to the harmonic approximation of transition state theory was estimated by direct evaluation of the configuration integral along...
Directory of Open Access Journals (Sweden)
П.В. Артамонов
2008-03-01
Full Text Available This article described findings investigation of a dynamic method measurement articulate the moments of tensometriced rudders surfaces, located on model half-wing of the plane. Measurements were spent in a wind tunnel at continuous moving model on the corners of attack in the selected diapason.
Directory of Open Access Journals (Sweden)
Braud Isabelle
2017-09-01
Full Text Available Topsoil field-saturated hydraulic conductivity, Kfs, is a parameter that controls the partition of rainfall between infiltration and runoff and is a key parameter in most distributed hydrological models. There is a mismatch between the scale of local in situ Kfs measurements and the scale at which the parameter is required in models for regional mapping. Therefore methods for extrapolating local Kfs values to larger mapping units are required. The paper explores the feasibility of mapping Kfs in the Cévennes-Vivarais region, in south-east France, using more easily available GIS data concerning geology and land cover. Our analysis makes uses of a data set from infiltration measurements performed in the area and its vicinity for more than ten years. The data set is composed of Kfs derived from infiltration measurements performed using various methods: Guelph permeameters, double ring and single ring infiltrotrometers and tension infiltrometers. The different methods resulted in a large variation in Kfs up to several orders of magnitude. A method is proposed to pool the data from the different infiltration methods to create an equivalent set of Kfs. Statistical tests showed significant differences in Kfs distributions in function of different geological formations and land cover. Thus the mapping of Kfs at regional scale was based on geological formations and land cover. This map was compared to a map based on the Rawls and Brakensiek (RB pedotransfer function (mainly based on texture and the two maps showed very different patterns. The RB values did not fit observed equivalent Kfs at the local scale, highlighting that soil texture alone is not a good predictor of Kfs.
Lin, Claire Yilin; Veneziani, Alessandro; Ruthotto, Lars
2017-10-26
We present novel numerical methods for polyline-to-point-cloud registration and their application to patient-specific modeling of deployed coronary artery stents from image data. Patient-specific coronary stent reconstruction is an important challenge in computational hemodynamics and relevant to the design and improvement of the prostheses. It is an invaluable tool in large-scale clinical trials that computationally investigate the effect of new generations of stents on hemodynamics and eventually tissue remodeling. Given a point cloud of strut positions, which can be extracted from images, our stent reconstruction method aims at finding a geometrical transformation that aligns a model of the undeployed stent to the point cloud. Mathematically, we describe the undeployed stent as a polyline, which is a piecewise linear object defined by its vertices and edges. We formulate the nonlinear registration as an optimization problem whose objective function consists of a similarity measure, quantifying the distance between the polyline and the point cloud, and a regularization functional, penalizing undesired transformations. Using projections of points onto the polyline structure, we derive novel distance measures. Our formulation supports most commonly used transformation models including very flexible nonlinear deformations. We also propose 2 regularization approaches ensuring the smoothness of the estimated nonlinear transformation. We demonstrate the potential of our methods using an academic 2D example and a real-life 3D bioabsorbable stent reconstruction problem. Our results show that the registration problem can be solved to sufficient accuracy within seconds using only a few number of Gauss-Newton iterations. Copyright © 2017 John Wiley & Sons, Ltd.
Development of Precise Point Positioning Method Using Global Positioning System Measurements
Directory of Open Access Journals (Sweden)
Byung-Kyu Choi
2011-09-01
Full Text Available Precise point positioning (PPP is increasingly used in several parts such as monitoring of crustal movement and maintaining an international terrestrial reference frame using global positioning system (GPS measurements. An accuracy of PPP data processing has been increased due to the use of the more precise satellite orbit/clock products. In this study we developed PPP algorithm that utilizes data collected by a GPS receiver. The measurement error modelling including the tropospheric error and the tidal model in data processing was considered to improve the positioning accuracy. The extended Kalman filter has been also employed to estimate the state parameters such as positioning information and float ambiguities. For the verification, we compared our results to other of International GNSS Service analysis center. As a result, the mean errors of the estimated position on the East-West, North-South and Up-Down direction for the five days were 0.9 cm, 0.32 cm, and 1.14 cm in 95% confidence level.
Directory of Open Access Journals (Sweden)
Byung-Kyu Choi
2012-09-01
Full Text Available Kinematic global positioning system precise point positioning (GPS PPP technology is widely used to the several area such as monitoring of crustal movement and precise orbit determination (POD using the dual-frequency GPS observations. In this study we developed a kinematic PPP technology and applied 3-pass (forward/backward/forward filter for the stabilization of the initial state of the parameters to be estimated. For verification of results, we obtained GPS data sets from six international GPS reference stations (ALGO, AMC2, BJFS, GRAZ, IENG and TSKB and processed in daily basis by using the developed software. As a result, the mean position errors by kinematic PPP showed 0.51 cm in the east-west direction, 0.31 cm in the north-south direction and 1.02 cm in the up-down direction. The root mean square values produced from them were 1.59 cm for the east-west component, 1.26 cm for the south-west component and 2.95 cm for the up-down component.
Leveraging Data Fusion Strategies in Multireceptor Lead Optimization MM/GBSA End-Point Methods.
Knight, Jennifer L; Krilov, Goran; Borrelli, Kenneth W; Williams, Joshua; Gunn, John R; Clowes, Alec; Cheng, Luciano; Friesner, Richard A; Abel, Robert
2014-08-12
Accurate and efficient affinity calculations are critical to enhancing the contribution of in silico modeling during the lead optimization phase of a drug discovery campaign. Here, we present a large-scale study of the efficacy of data fusion strategies to leverage results from end-point MM/GBSA calculations in multiple receptors to identify potent inhibitors among an ensemble of congeneric ligands. The retrospective analysis of 13 congeneric ligand series curated from publicly available data across seven biological targets demonstrates that in 90% of the individual receptor structures MM/GBSA scores successfully identify subsets of inhibitors that are more potent than a random selection, and data fusion strategies that combine MM/GBSA scores from each of the receptors significantly increase the robustness of the predictions. Among nine different data fusion metrics based on consensus scores or receptor rankings, the SumZScore (i.e., converting MM/GBSA scores into standardized Z-Scores within a receptor and computing the sum of the Z-Scores for a given ligand across the ensemble of receptors) is found to be a robust and physically meaningful metric for combining results across multiple receptors. Perhaps most surprisingly, even with relatively low to modest overall correlations between SumZScore and experimental binding affinities, SumZScore tends to reliably prioritize subsets of inhibitors that are at least as potent as those that are prioritized from a "best" single receptor identified from known compounds within the congeneric series.
Taheri, Navid; Rezasoltani, Asghar; Okhovatian, Farshad; Karami, Mehdi; Hosseini, Sayed Mohsen; Kouhzad Mohammadi, Hosein
2016-07-01
Myofascial pain syndrome (MPS) is a neuromuscular dysfunction consisting of both motor and sensory abnormalities. Considering the high prevalence of MPS and its related disabilities and costs, this study was designed to determine the reliability of new ultrasonographic indexes of the upper trapezius muscle as well as the sensitivity and specificity of 2D ultrasound imaging for diagnostic purposes. Furthermore, we sought to evaluate the effectiveness of dry needling (DN) on studied ultrasonographic indexes. This study will be performed in two steps with two different designs. The first is a pilot study and was designed as a semi-experimental study to determine the sensitivity and specificity of ultrasonography for the diagnosis of MPS and the reliability of ultrasonographic measurements like muscle thickness, area of myofascial trigger points (MTrPs) in longitudinal view, echogenicity of MTrPs in longitudinal view, echogenicity of muscle with MTrPs in longitudinal and transverse views, and pennation angle of upper trapezius muscle. The second study is an interventional study which was designed to investigate the effectiveness of DN on ultrasonographic measurements, for which the reliability was determined in the first study. we will quantify the effectiveness of DN on MTrPs and muscle tissue by using novel ultrasonographic indexes. The results of the current study will provide baseline information to design more interventional studies to improve the evaluation of other treatments of MPS. Copyright © 2015 Elsevier Ltd. All rights reserved.
Liu, Xiao-Na; Zheng, Qiu-Sheng; Che, Xiao-Qing; Wu, Zhi-Sheng; Qiao, Yan-Jiang
2017-03-01
The blending end-point determination of Angong Niuhuang Wan (AGNH) is a key technology problem. The control strategy based on quality by design (QbD) concept proposes a whole blending end-point determination method, and provides a methodology for blending the Chinese materia medica containing mineral substances. Based on QbD concept, the laser induced breakdown spectroscopy (LIBS) was used to assess the cinnabar, realgar and pearl powder blending of AGNH in a pilot-scale experiment, especially the whole blending end-point in this study. The blending variability of three mineral medicines including cinnabar, realgar and pearl powder, was measured by moving window relative standard deviation (MWRSD) based on LIBS. The time profiles of realgar and pearl powder did not produce consistent results completely, but all of them reached even blending at the last blending stage, so that the whole proposal blending end point was determined. LIBS is a promising Process Analytical Technology (PAT) for process control. Unlike other elemental determination technologies such ICP-OES, LIBS does not need an elaborate digestion procedure, which is a promising and rapid technique to understand the blending process of Chinese materia medica (CMM) containing cinnabar, realgar and other mineral traditional Chinese medicine. This study proposed a novel method for the research of large varieties of traditional Chinese medicines.. Copyright© by the Chinese Pharmaceutical Association.
Research on Geographical Environment Unit Division Based on the Method of Natural Breaks (Jenks)
Chen, J.; Yang, S. T.; Li, H. W.; Zhang, B.; Lv, J. R.
2013-11-01
Zoning which is to divide the study area into different zones according to their geographical differences at the global, national or regional level, includes natural division, economic division, geographical zoning of departments, comprehensive zoning and so on. Zoning is of important practical significance, for example, knowing regional differences and characteristics, regional research and regional development planning, understanding the favorable and unfavorable conditions of the regional development etc. Geographical environment is arising from the geographical position linkages. Geographical environment unit division is also a type of zoning. The geographical environment indicators are deeply studied and summed up in the article, including the background, the associated and the potential. The background indicators are divided into four categories, such as the socio-economic, the political and military, the strategic resources and the ecological environment, which can be divided into more sub-indexes. While the sub-indexes can be integrated to comprehensive index system by weighted stacking method. The Jenks natural breaks classification method, also called the Jenks optimization method, is a data classification method designed to determine the best arrangement of values into different classes. This is done by seeking to minimize each class's average deviation from the class mean, while maximizing each class's deviation from the means of the other groups. In this paper, the experiment of Chinese surrounding geographical environment unit division has been done based on the natural breaks (jenks) method, the geographical environment index system and the weighted stacking method, taking South Asia as an example. The result indicates that natural breaks (jenks) method is of good adaptability and high accuracy on the geographical environment unit division. The geographical environment research was originated in the geopolitics and flourished in the geo
An Entry Point for Formal Methods: Specification and Analysis of Event Logs
Directory of Open Access Journals (Sweden)
Howard Barringer
2010-03-01
Full Text Available Formal specification languages have long languished, due to the grave scalability problems faced by complete verification methods. Runtime verification promises to use formal specifications to automate part of the more scalable art of testing, but has not been widely applied to real systems, and often falters due to the cost and complexity of instrumentation for online monitoring. In this paper we discuss work in progress to apply an event-based specification system to the logging mechanism of the Mars Science Laboratory mission at JPL. By focusing on log analysis, we exploit the "instrumentation" already implemented and required for communicating with the spacecraft. We argue that this work both shows a practical method for using formal specifications in testing and opens interesting research avenues, including a challenging specification learning problem.
DEFF Research Database (Denmark)
Barfod, Adrian
from a deterministic 3D geological model of the study area. The stochastic ensemble modeling approach is used to compare three different MPS methods (Paper II). However, visually comparing a large set of 3D hydrostratigraphic models is no trivial task. Therefore, a quantitative comparison technique......The PhD thesis presents a new method for analyzing the relationship between resistivity and lithology, as well as a method for quantifying the hydrostratigraphic modeling uncertainty related to Multiple-Point Statistical (MPS) methods. Three-dimensional (3D) geological models are im...... in two publicly available databases, the JUPITER and GERDA databases, which contain borehole and geophysical data, respectively. The large amounts of available data provided a unique opportunity for studying the resistivity-lithology relationship. The method for analyzing the resistivity...
Directory of Open Access Journals (Sweden)
Antonio Roberto Balbo
2012-01-01
Full Text Available This paper proposes a predictor-corrector primal-dual interior point method which introduces line search procedures (IPLS in both the predictor and corrector steps. The Fibonacci search technique is used in the predictor step, while an Armijo line search is used in the corrector step. The method is developed for application to the economic dispatch (ED problem studied in the field of power systems analysis. The theory of the method is examined for quadratic programming problems and involves the analysis of iterative schemes, computational implementation, and issues concerning the adaptation of the proposed algorithm to solve ED problems. Numerical results are presented, which demonstrate improvements and the efficiency of the IPLS method when compared to several other methods described in the literature. Finally, postoptimization analyses are performed for the solution of ED problems.
High precision micro-scale Hall Effect characterization method using in-line micro four-point probes
DEFF Research Database (Denmark)
Petersen, Dirch Hjorth; Hansen, Ole; Lin, Rong
2008-01-01
Accurate characterization of ultra shallow junctions (USJ) is important in order to understand the principles of junction formation and to develop the appropriate implant and annealing technologies. We investigate the capabilities of a new micro-scale Hall effect measurement method where Hall...... effect is measured with collinear micro four-point probes (M4PP). We derive the sensitivity to electrode position errors and describe a position error suppression method to enable rapid reliable Hall effect measurements with just two measurement points. We show with both Monte Carlo simulations...... and experimental measurements, that the repeatability of a micro-scale Hall effect measurement is better than 1 %. We demonstrate the ability to spatially resolve Hall effect on micro-scale by characterization of an USJ with a single laser stripe anneal. The micro sheet resistance variations resulting from...
Zaffina, S; Camisa, V; Poscia, A; Tucci, M G; Montaldi, V; Cerabona, V; Wachocka, M; Moscato, U
2012-01-01
Several studies have shown that occupational exposure to anesthetic gases might be higher during pediatric surgery, probably due to the increased use of inhalational induction techniques. Our study aims to assess the level of exposure to sevoflurane in two rooms of pediatric surgery, using multi-point sampling method for environmental monitoring. The gas concentrations as well as its dispersion were measured in strategic points in the rooms for a total of 44 surgical interventions. Although the average of these concentrations has been rather low (1.32, SD +/- 1:55 ppm), the results obtained have documented a significant distribution kinetics difference inside the rooms as function of multiple factors among which there were the anesthetic technique used and the team involved. Therefore the method described allows to correctly analyze the spread of anesthetic gases and suggests a different risk stratification which may be dependent on the professional work.
Belon, Ana Paula; Nieuwendyk, Laura M; Vallianatos, Helen; Nykiforuk, Candace I J
2014-09-01
A growing body of evidence shows that community environment plays an important role in individuals' physical activity engagement. However, while attributes of the physical environment are widely investigated, sociocultural, political, and economic aspects of the environment are often neglected. This article helps to fill these knowledge gaps by providing a more comprehensive understanding of multiple dimensions of the community environment relative to physical activity. The purpose of this study was to qualitatively explore how people's experiences and perceptions of their community environments affect their abilities to engage in physical activity. A PhotoVoice method was used to identify barriers to and opportunities for physical activity among residents in four communities in the province of Alberta, Canada, in 2009. After taking pictures, the thirty-five participants shared their perceptions of those opportunities and barriers in their community environments during individual interviews. Using the Analysis Grid for Environments Linked to Obesity (ANGELO) framework, themes emerging from these photo-elicited interviews were organized in four environment types: physical, sociocultural, economic, and political. The data show that themes linked to the physical (56.6%) and sociocultural (31.4%) environments were discussed more frequently than the themes of the economic (5.9%) and political (6.1%) environments. Participants identified nuanced barriers and opportunities for physical activity, which are illustrated by their quotes and photographs. The findings suggest that a myriad of factors from physical, sociocultural, economic, and political environments influence people's abilities to be physically active in their communities. Therefore, adoption of a broad, ecological perspective is needed to address the barriers and build upon the opportunities described by participants to make communities more healthy and active. Copyright © 2014 Elsevier Ltd. All rights
Energy Technology Data Exchange (ETDEWEB)
Huang, Zhenyu; Zhou, Ning; Tuffner, Francis K.; Chen, Yousu; Trudnowski, Daniel J.; Diao, Ruisheng; Fuller, Jason C.; Mittelstadt, William A.; Hauer, John F.; Dagle, Jeffery E.
2010-10-18
Small signal stability problems are one of the major threats to grid stability and reliability in the U.S. power grid. An undamped mode can cause large-amplitude oscillations and may result in system breakups and large-scale blackouts. There have been several incidents of system-wide oscillations. Of those incidents, the most notable is the August 10, 1996 western system breakup, a result of undamped system-wide oscillations. Significant efforts have been devoted to monitoring system oscillatory behaviors from measurements in the past 20 years. The deployment of phasor measurement units (PMU) provides high-precision, time-synchronized data needed for detecting oscillation modes. Measurement-based modal analysis, also known as ModeMeter, uses real-time phasor measurements to identify system oscillation modes and their damping. Low damping indicates potential system stability issues. Modal analysis has been demonstrated with phasor measurements to have the capability of estimating system modes from both oscillation signals and ambient data. With more and more phasor measurements available and ModeMeter techniques maturing, there is yet a need for methods to bring modal analysis from monitoring to actions. The methods should be able to associate low damping with grid operating conditions, so operators or automated operation schemes can respond when low damping is observed. The work presented in this report aims to develop such a method and establish a Modal Analysis for Grid Operation (MANGO) procedure to aid grid operation decision making to increase inter-area modal damping. The procedure can provide operation suggestions (such as increasing generation or decreasing load) for mitigating inter-area oscillations.
Modeling of Semiconductors and Correlated Oxides with Point Defects by First Principles Methods
Wang, Hao
2014-06-15
Point defects in silicon, vanadium dioxide, and doped ceria are investigated by density functional theory. Defects involving vacancies and interstitial oxygen and carbon in silicon are after formed in outer space and significantly affect device performances. The screened hybrid functional by Heyd-Scuseria-Ernzerhof is used to calculate formation energies, binding energies, and electronic structures of the defective systems because standard density functional theory underestimates the bang gap of silicon. The results indicate for the A-center a −2 charge state. Tin is proposed to be an effective dopant to suppress the formation of A-centers. For the total energy difference between the A- and B-type carbon related G-centers we find close agreement with the experiment. The results indicate that the C-type G-center is more stable than both the A- and B-types. The electronic structures of the monoclinic and rutile phases of vanadium dioxide are also studied using the Heyd-Scuseria-Ernzerhof functional. The ground states of the pure phases obtained by calculations including spin polarization disagree with the experimental observations that the monoclinic phase should not be magnetic, the rutile phase should be metallic, and the monoclinic phase should have a lower total energy than the rutile phase. By tuning the Hartree-Fock fraction α to 10% the agreement with experiments is improved in terms of band gaps and relative energies of the phases. A calculation scheme is proposed to simulate the relationship between the transition temperature of the metal-insulator transition and the dopant concentration in tungsten doped vanadium dioxide. We achieve good agreement with the experimental situation. 18.75% and 25% yttrium, lanthanum, praseodymium, samarium, and gadolinium doped ceria supercells generated by the special quasirandom structure approach are employed to investigate the impact of doping on the O diffusion. The experimental behavior of the conductivity for the
Built environment and physical activity: a brief review of evaluation methods
Directory of Open Access Journals (Sweden)
Adriano Akira Ferreira Hino
2010-08-01
Full Text Available There is strong evidence indicating that the environment where people live has amarked influence on physical activity. The current understanding of this relationship is basedon studies conducted in developed and culturally distinct countries and may not be applicableto the context of Brazil. In this respect, a better understanding of methods evaluating the relationshipbetween the environment and physical activity may contribute to the development ofnew studies in this area in Brazil. The objective of the present study was to briefly describe themain methods used to assess the relationship between built environment and physical activity.Three main approaches are used to obtain information about the environment: 1 environmentalperception; 2 systematic observation, and 3 geoprocessing. These methods are mainly applied toevaluate population density, mixed land use, physical activity facilities, street patterns, sidewalk/bike path coverage, public transportation, and safety/esthetics. In Brazil, studies investigating therelationship between the environment and physical activity are scarce, but the number of studiesis growing. Thus, further studies are necessary and methods applicable to the context of Brazilneed to be developed in order to increase the understanding of this subject.
An Evaluation Method of Underwater Ocean Environment Safety Situation Based on D-S Evidence Theory
Directory of Open Access Journals (Sweden)
Yuxin Zhao
2015-01-01
Full Text Available Because of complex ocean environment, underwater vehicles are facing many challenges in navigation safety and precise navigation. Aiming at the requirements of underwater navigation safety, this paper presents an evaluation method of underwater ocean environment safety situation based on Dempster-Shafer (D-S evidence theory. Firstly, the vital ocean environment factors which affect the underwater navigation safety are taken into account, and a novel basic probability assignment (BPA construction method of ocean environment factors is proposed according to their characteristics. Then, a new transformation method of BPA to decision-making probability is put forward to deal with the uncertainty degree. Furthermore, the super-standard weight is applied to preprocess the BPA, and D-S combination rule is used to acquire the evaluation result by fusing the preprocessed BPA. Ocean environment safety situation index is obtained by quantizing the evaluation grades. Finally, experimental results show that the method proposed has the superior practicability and reliability in actual applications.
DEFF Research Database (Denmark)
Sokoler, Leo Emil; Frison, Gianluca; Edlund, Kristian
2013-01-01
In this paper, we develop an efficient interior-point method (IPM) for the linear programs arising in economic model predictive control of linear systems. The novelty of our algorithm is that it combines a homogeneous and self-dual model, and a specialized Riccati iteration procedure. We test...... the algorithm in a conceptual study of power systems management. Simulations show that in comparison to state of the art software implementation of IPMs, our method is significantly faster and scales in a favourable way....
Directory of Open Access Journals (Sweden)
Jitpeera Thanyarat
2011-01-01
Full Text Available We introduce a new iterative method for finding a common element of the set of solutions for mixed equilibrium problem, the set of solutions of the variational inequality for a -inverse-strongly monotone mapping, and the set of fixed points of a family of finitely nonexpansive mappings in a real Hilbert space by using the viscosity and Cesàro mean approximation method. We prove that the sequence converges strongly to a common element of the above three sets under some mind conditions. Our results improve and extend the corresponding results of Kumam and Katchang (2009, Peng and Yao (2009, Shimizu and Takahashi (1997, and some authors.
Methods of sampling airborne fungi in working environments of waste treatment facilities
Directory of Open Access Journals (Sweden)
Kristýna Černá
2016-06-01
Full Text Available Objectives: The objective of the present study was to evaluate and compare the efficiency of a filter based sampling method and a high volume sampling method for sampling airborne culturable fungi present in waste sorting facilities. Material and Methods: Membrane filters method was compared with surface air system method. The selected sampling methods were modified and tested in 2 plastic waste sorting facilities. Results: The total number of colony-forming units (CFU/m3 of airborne fungi was dependent on the type of sampling device, on the time of sampling, which was carried out every hour from the beginning of the work shift, and on the type of cultivation medium (p < 0.001. Detected concentrations of airborne fungi ranged 2×102–1.7×106 CFU/m3 when using the membrane filters (MF method, and 3×102–6.4×104 CFU/m3 when using the surface air system (SAS method. Conclusions: Both methods showed comparable sensitivity to the fluctuations of the concentrations of airborne fungi during the work shifts. The SAS method is adequate for a fast indicative determination of concentration of airborne fungi. The MF method is suitable for thorough assessment of working environment contamination by airborne fungi. Therefore we recommend the MF method for the implementation of a uniform standard methodology of airborne fungi sampling in working environments of waste treatment facilities.
American Society for Testing and Materials. Philadelphia
2010-01-01
1.1 This test method provides a procedure for determining the ability of photovoltaic modules to withstand repeated immersion or splash exposure by seawater as might be encountered when installed in a marine environment, such as a floating aid-to-navigation. A combined environmental cycling exposure with modules repeatedly submerged in simulated saltwater at varying temperatures and under repetitive pressurization provides an accelerated basis for evaluation of aging effects of a marine environment on module materials and construction. 1.2 This test method defines photovoltaic module test specimens and requirements for positioning modules for test, references suitable methods for determining changes in electrical performance and characteristics, and specifies parameters which must be recorded and reported. 1.3 This test method does not establish pass or fail levels. The determination of acceptable or unacceptable results is beyond the scope of this test method. 1.4 The values stated in SI units are to be ...
A Practical Radiosity Method for Predicting Transmission Loss in Urban Environments
Directory of Open Access Journals (Sweden)
Liang Ming
2004-01-01
Full Text Available The ability to predict transmission loss or field strength distribution is crucial for determining coverage in planning personal communication systems. This paper presents a practical method to accurately predict entire average transmission loss distribution in complicated urban environments. The method uses a 3D propagation model based on radiosity and a simplified city information database including surfaces of roads and building groups. Narrowband validation measurements with line-of-sight (LOS and non-line-of-sight (NLOS cases at 1800 MHz give excellent agreement in urban environments.
Shi, Liang; Wang, Jie; Liu, Binhao; Nara, Kazuhide; Lian, Chunlan; Shen, Zhenguo; Xia, Yan; Chen, Yahua
2017-11-01
We examined the effects of three ectomycorrhizal (ECM) symbionts on the growth and photosynthesis capacity of Japanese black pine (Pinus thunbergii) seedlings and estimated physiological and photosynthetic parameters such as the light compensation point (LCP), biomass, and phosphorus (Pi) concentration of P. thunbergii seedlings. Through this investigation, we documented a new role of ectomycorrhizal (ECM) fungi: enhancement of the survival and competitiveness of P. thunbergii seedlings under low-light condition by reducing the LCP of seedlings. At a CO2 concentration of 400 ppm, the LCP of seedlings with ECM inoculations was 40-70 μmol photons m-2 s-1, significantly lower than that of non-mycorrhizal (NM) seedlings (200 μmol photons m-2 s-1). In addition, photosynthetic carbon fixation (Pn) increased with light intensity and CO2 level, and the Pn of ECM seedlings was significantly higher than that of NM seedlings; Pisolithus sp. (Pt)- and Laccaria amethystea (La)-mycorrhizal seedlings had significantly lower Pn than Cenococcum geophilum (Cg)-mycorrhizal seedlings. However, La-mycorrhizal seedlings exhibited the highest fresh weight, relative water content (RWC), and the lowest LCP in the mycorrhizal group. Concomitantly, ECM seedlings showed significantly increased chlorophyll content of needles and higher Pi concentrations compared to NM seedlings. Overall, ECM symbionts promoted growth and photosynthesis while reducing the LCP of P. thunbergii seedlings. These findings indicate that ECM fungi can enhance the survival and competitiveness of host seedlings under low light.
Yehia, Ali M.
2013-05-01
New, simple, specific, accurate and precise spectrophotometric technique utilizing ratio spectra is developed for simultaneous determination of two different binary mixtures. The developed ratio H-point standard addition method (RHPSAM) was managed successfully to resolve the spectral overlap in itopride hydrochloride (ITO) and pantoprazole sodium (PAN) binary mixture, as well as, mosapride citrate (MOS) and PAN binary mixture. The theoretical background and advantages of the newly proposed method are presented. The calibration curves are linear over the concentration range of 5-60 μg/mL, 5-40 μg/mL and 4-24 μg/mL for ITO, MOS and PAN, respectively. Specificity of the method was investigated and relative standard deviations were less than 1.5. The accuracy, precision and repeatability were also investigated for the proposed method according to ICH guidelines.
Ilakkuvan, Vinu; Tacelosky, Michael; Ivey, Keith C; Pearson, Jennifer L; Cantrell, Jennifer; Vallone, Donna M; Abrams, David B; Kirchner, Thomas R
2014-04-09
Photographs are an effective way to collect detailed and objective information about the environment, particularly for public health surveillance. However, accurately and reliably annotating (ie, extracting information from) photographs remains difficult, a critical bottleneck inhibiting the use of photographs for systematic surveillance. The advent of distributed human computation (ie, crowdsourcing) platforms represents a veritable breakthrough, making it possible for the first time to accurately, quickly, and repeatedly annotate photos at relatively low cost. This paper describes a methods protocol, using photographs from point-of-sale surveillance studies in the field of tobacco control to demonstrate the development and testing of custom-built tools that can greatly enhance the quality of crowdsourced annotation. Enhancing the quality of crowdsourced photo annotation requires a number of approaches and tools. The crowdsourced photo annotation process is greatly simplified by decomposing the overall process into smaller tasks, which improves accuracy and speed and enables adaptive processing, in which irrelevant data is filtered out and more difficult targets receive increased scrutiny. Additionally, zoom tools enable users to see details within photographs and crop tools highlight where within an image a specific object of interest is found, generating a set of photographs that answer specific questions. Beyond such tools, optimizing the number of raters (ie, crowd size) for accuracy and reliability is an important facet of crowdsourced photo annotation. This can be determined in a systematic manner based on the difficulty of the task and the desired level of accuracy, using receiver operating characteristic (ROC) analyses. Usability tests of the zoom and crop tool suggest that these tools significantly improve annotation accuracy. The tests asked raters to extract data from photographs, not for the purposes of assessing the quality of that data, but rather to
Brown, Diane Storer; Aronow, Harriet Udin
2016-01-01
The value of the ambulatory care nurse remains undocumented from a quality and patient safety measurement perspective and the practice is at risk of being highly variable and of unknown quality. The American Academy of Ambulatory Care Nursing and the Collaborative Alliance for Nursing Outcomes propose nurse leaders create a tipping point to measure the value of nursing across the continuum of nursing care, moving from inpatient to ambulatory care. As care continues to shift into the ambulatory care environment, the quality imperative must also shift to assure highly reliable, safe, and effective health care.
Unique construction methods used to expand Australia's Hay Point Port
Energy Technology Data Exchange (ETDEWEB)
McRobert, J.D.
1977-04-01
The expanded port was to accommodate a throughput of approximately 20,000,000 tons per year. This expansion required an extra rail loop and two-car tippler, two new rail-mounted stacker reclaimer units installed on two more double stockpile areas--each served by one stacker reclaimer, one on-shore surge bin of 1,000 tons capacity, a new approach conveyor over the existing conveyor trestle, and a new berth and shiploader. The new rail unloading system was identical to the original system of a twin car McDowell Wellman design. The new Dravo stacker reclaimers were each designed for an average output of 3,000,000 tons per hour. The new stockpile yard conveyors were interconnected with the old system at both ends of the approach trestle for maximum flexibility of loading. The on-land stockpile storage was increased to more than 3,000 tons. The new berth shiploader was designed to load ships at the rate of 6,000 tons per hour. Feed from the surge bins minimized deviation from the mean shiploading rate. Construction methods are described.
Methods of sampling airborne fungi in working environments of waste treatment facilities.
Černá, Kristýna; Wittlingerová, Zdeňka; Zimová, Magdaléna; Janovský, Zdeněk
2016-01-01
The objective of the present study was to evaluate and compare the efficiency of a filter based sampling method and a high volume sampling method for sampling airborne culturable fungi present in waste sorting facilities. Membrane filters method was compared with surface air system method. The selected sampling methods were modified and tested in 2 plastic waste sorting facilities. The total number of colony-forming units (CFU)/m3 of airborne fungi was dependent on the type of sampling device, on the time of sampling, which was carried out every hour from the beginning of the work shift, and on the type of cultivation medium (p fungi ranged 2×102-1.7×106 CFU/m3 when using the membrane filters (MF) method, and 3×102-6.4×104 CFU/m3 when using the surface air system (SAS) method. Both methods showed comparable sensitivity to the fluctuations of the concentrations of airborne fungi during the work shifts. The SAS method is adequate for a fast indicative determination of concentration of airborne fungi. The MF method is suitable for thorough assessment of working environment contamination by airborne fungi. Therefore we recommend the MF method for the implementation of a uniform standard methodology of airborne fungi sampling in working environments of waste treatment facilities. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.
Ghasemi, Elham; Kaykhaii, Massoud
2015-01-01
A fast, simple, and economical method was developed for simultaneous spectrophotometric determination of uranium(VI) and vanadium(V) in water samples based on micro cloud point extraction (MCPE) at room temperature. This is the first report on the simultaneous extraction and determination of U(VI) and V(V). In this method, Triton X114 was employed as a non-ionic surfactant for the cloud point procedure and 4-(2-pyridylazo) resorcinol (PAR) was used as the chelating agent for both analytes. To reach the cloud point at room temperature, the MCPE procedure was carried out in brine. The factors influencing the extraction efficiency were investigated and optimized. Under the optimized condition, the linear calibration curve was found to be in the concentration range between 100 - 750 and 50 - 600 μg L(-1) for U(VI) and V(V), respectively, with a limit of detection of 17.03 μg L(-1) (U) and 5.51 μg L(-1) (V). Total analysis time including microextraction was less than 5 min.
Yamanouchi, Tsuneaki; Horiuchi, Kenichi; Ishii, Kazunari; Mimura, Yasuhiko; Kato, Atsushi; Adachi, Isao
2014-01-01
The adsorption of Bevacizumab, Trastuzumab, Rituximab, Nedaplatin, Vincristine sulfate, Nogitecan hydrochloride, Actinomycin D and Ramosetron hydrochloride to 0.2 μm endotoxin-retentive in-line filters was evaluated with pediatric doses by UV spectrophotometry. The results indicated that some drug adsorption was shown with Nogitecan hydrochloride, Actinomycin D and Ramosetron hydrochloride, and good recovery was shown with the other five drugs. For the three drugs which showed some losses, drug recovery was investigated at multiple test doses. The approximation formula for each drug adsorption was recorded as Y=100-A/X (X: dose (mg), Y: recovery rate (%), A: a constant for individual drug). The results showed there was high correlation between the reciprocal of test drug dose and the recovery rate. Furthermore, in the cases where adsorption to the filter were observed, it was found that it was possible to determine the relationship between dose and the recovery rate from a filterability test with one point pediatric dose. Since the recovery rate obtained from the approximation formula with multiple doses and that calculated from the prediction formula with one point pediatric dose were almost the same, then it was concluded that it is not necessary to conduct the filterability tests with multiple doses. We have shown that using UV spectrophotometry and carrying out a filterability test using one point pediatric dose is relatively easy method and reduces the effort and expense. This method for analysis of drug adsorption is extremely useful when using in-line filters with infusion therapy.
Shayanfar, Mohsen Ali; Barkhordari, Mohammad Ali; Roudak, Mohammad Amin
2017-06-01
Monte Carlo simulation (MCS) is a useful tool for computation of probability of failure in reliability analysis. However, the large number of required random samples makes it time-consuming. Response surface method (RSM) is another common method in reliability analysis. Although RSM is widely used for its simplicity, it cannot be trusted in highly nonlinear problems due to its linear nature. In this paper, a new efficient algorithm, employing the combination of importance sampling, as a class of MCS, and RSM is proposed. In the proposed algorithm, analysis starts with importance sampling concepts and using a represented two-step updating rule of design point. This part finishes after a small number of samples are generated. Then RSM starts to work using Bucher experimental design, with the last design point and a represented effective length as the center point and radius of Bucher's approach, respectively. Through illustrative numerical examples, simplicity and efficiency of the proposed algorithm and the effectiveness of the represented rules are shown.
Physics-based statistical model and simulation method of RF propagation in urban environments
Pao, Hsueh-Yuan; Dvorak, Steven L.
2010-09-14
A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient close-formed parametric model of RF propagation in an urban environment which is extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields from which predictions of communications capability may be made.
DEFF Research Database (Denmark)
Bølling, Mads; Hartmeyer, Rikke; Bentsen, Peter
2017-01-01
In this study, we explored how teachers can take advantage of a ‘place’ in urban environments outside the school and thereby stimulate pupils’ situational interest in science teaching. Drawing on the Sophos research method, we conducted a single case study including film-elicited interviews....... The data consisted of transcribed interviews with 4 experienced teachers and 11 pupils. The interviews were elicited by films showing group work in science teaching in urban environments: a parking lot, a green public park and a zoo. We conducted individual interviews with science teachers, while...... of pupils’ situational interest, we argue that science teachers can draw on these seven place-conscious methods to stimulate interest in science teaching in urban environments....
System and method for confining an object to a region of fluid flow having a stagnation point
Schroeder, Charles M. (Inventor); Shaqfeh, Eric S. G. (Inventor); Babcock, Hazen P. (Inventor); Chu, Steven (Inventor)
2006-01-01
A device for confining an object to a region proximate to a fluid flow stagnation point includes one or more inlets for carrying the fluid into the region, one or more outlets for carrying the fluid out of the region, and a controller, in fluidic communication with the inlets and outlets, for adjusting the motion of the fluid to produce a stagnation point in the region, thereby confining the object to the region. Applications include, for example, prolonged observation of the object, manipulation of the object, etc. The device optionally may employ a feedback control mechanism, a sensing apparatus (e.g., for imaging), and a storage medium for storing, and a computer for analyzing and manipulating, data acquired from observing the object. The invention further provides methods of using such a device and system in a number of fields, including biology, chemistry, physics, material science, and medical science.
Zhang, Jiao; Xia, Zhanfeng; He, Jiangzhou; Sun, Hongzhuan; Zhang, Lili
2013-07-04
To evaluate the influence of DNA extraction methods on the actinobacteria diversity analysis in saline environment via 16S rDNA Restriction Fragment Length Polymorphism. CTAB-SDS method, glass bead beating method and repeated freezing and thawing method were used to extract total DNA in soil samples from the Yanqi Salten. The 16S rDNA clone libraries were constructed by using the purified 16S rDNA PCR amplicons to transform the E. coli DH5alpha. The transformants in the library were further analyzed by RFLP. The unique 16S rDNA clones were sequenced and further used for phylogenetic analysis. Different Operational Taxonomic Units (OTU) were obtained from DNA extracts and total 35 OTUs were obtained from CTAB-SDS method, 19 OTUs from galss bead beating method and 14 OTUs from repeated freezing and thawing methods. Up to 52% OTUs in the three libraries constructed displayed lower similarity with the published sequence, perhaps representing novel taxons. The total OTUs belong to Actinobacteridae, Acidimicrobidae and Rubrobacteridae subclasses. DNA extraction methods influence the actinobacterial diversity. Each of the DNA extraction method in our study has some drawbacks and biases, so it is better to use combined DNA extracts from different DNA methods to evaluate the microbial diversity in salty environments.
Directory of Open Access Journals (Sweden)
KOIKE Hitonobu
2017-01-01
Full Text Available Subsurface fatigue cracks under rolling contact area of the PEEK shaft against an alumina bearing’s ball were investigated for application of frictional part in mechanical element in special situations such as chemical environments. In order to explore the flaking process of the PEEK shaft, the rolling contact fatigue tests were carried out by using a one-point radial loading rolling contact machine. The flaking occurred on the rolling track of the PEEK shaft at approximate 4⨉105 fatigue cycles. The subsurface fatigue crack propagation was investigated by using 2.5-Dimension layer observation method. The flaking was caused by the propagations of surface cracks and subsurface shear cracks, and the flaking shape was half-ellipse. Moreover, beach marks as fatigue crack propagation in the flaking were observed.
A Study on Evaluation Method of Equipment Expansion in Power System under Competitive Environment
Nakajima, Takuya; Oyama, Tsutomu
The supply reliability of the power system strongly depends on the system planning and operation. Under the competitive environment, system planning and operation become more complicated and difficult due to the new uncertainties that have not been considered so far. Which may also results in the enlargement of difficulties in forecast in the planning stage, and causes the deterioration of supply reliability. In the competitive environment, the transmission network must be planned and operated with the economical rationality and fairness. However, it is difficult to realize the system planning and operation considering the economical rationality and fairness because of the uncertainties. Then, the high flexibility and robustness against the uncertainties are required for the system planning and operation. This paper evaluates the performance of system expansion planning from two points of views: the probabilistic supply reliability and transmission margin in power system. As indices, the Expected Energy Not Supplied (EENS) and Available Transmission Capability (ATC) are used in this study.
Novel methods and expected run II performance of ATLAS track reconstruction in dense environments
Jansky, Roland Wolfgang; The ATLAS collaboration
2015-01-01
Detailed understanding and optimal track reconstruction performance of ATLAS in the core of high pT objects is paramount for a number of techniques such as jet energy and mass calibration, jet flavour tagging, and hadronic tau identification as well as measurements of physics quantities like jet fragmentation functions. These dense environments are characterized by charged particle separations on the order of the granularity of ATLAS’s inner detector. With the insertion of a new innermost layer in this tracking detector, which allows measurements closer to the interaction point, and an increase in the centre of mass energy, these difficult environments will become even more relevant in Run II, such as in searches for heavy resonances. Novel algorithmic developments to the ATLAS track reconstruction software targeting these topologies as well as the expected improved performance will be presented.
The Critical Success Factors Method: Its Application in a Special Library Environment.
Borbely, Jack
1981-01-01
Discusses the background and theory of the Critical Success Factors (CSF) management method, as well as its application in an information center or other special library environment. CSF is viewed as a management tool that can enhance the viability of the special library within its parent organization. (FM)
Engineers' Perceptions of Diversity and the Learning Environment at Work: A Mixed Methods Study
Firestone, Brenda L.
2012-01-01
The purpose of this dissertation research study was to investigate engineers' perceptions of diversity and the workplace learning environment surrounding diversity education efforts in engineering occupations. The study made use of a mixed methods methodology and was theoretically framed using a critical feminist adult education lens and…
Brun, Milivoj Konstantin; Luthra, Krishan Lal
2003-01-01
While silicon-containing ceramics or ceramic composites are prone to material loss in combustion gas environments, this invention introduces a method to prevent or greatly reduce the thickness loss by injecting directly an effective amount, generally in the part per million level, of silicon or silicon-containing compounds into the combustion gases.
Method for integrated design of low energy buildings with high quality indoor environment
DEFF Research Database (Denmark)
Petersen, Steffen
2008-01-01
Energy performance and indoor environment have due to new increased regulatory demands become decisive design parameters in the building design process. In order to comply with the increased regulatory demands, we present an integrated design method which argues that the design of buildings must ...
Shahriari, Mohammadali; Biglarbegian, Mohammad
2018-01-01
This paper presents a new conflict resolution methodology for multiple mobile robots while ensuring their motion-liveness, especially for cluttered and dynamic environments. Our method constructs a mathematical formulation in a form of an optimization problem by minimizing the overall travel times of the robots subject to resolving all the conflicts in their motion. This optimization problem can be easily solved through coordinating only the robots' speeds. To overcome the computational cost in executing the algorithm for very cluttered environments, we develop an innovative method through clustering the environment into independent subproblems that can be solved using parallel programming techniques. We demonstrate the scalability of our approach through performing extensive simulations. Simulation results showed that our proposed method is capable of resolving the conflicts of 100 robots in less than 1.23 s in a cluttered environment that has 4357 intersections in the paths of the robots. We also developed an experimental testbed and demonstrated that our approach can be implemented in real time. We finally compared our approach with other existing methods in the literature both quantitatively and qualitatively. This comparison shows while our approach is mathematically sound, it is more computationally efficient, scalable for very large number of robots, and guarantees the live and smooth motion of robots.
Sound source localization method in an environment with flow based on Amiet-IMACS
Wei, Long; Li, Min; Qin, Sheng; Fu, Qiang; Yang, Debin
2017-05-01
A sound source localization method is proposed to localize and analyze the sound source in an environment with airflow. It combines the improved mapping of acoustic correlated sources (IMACS) method and Amiet's method, and is called Amiet-IMACS. It can localize uncorrelated and correlated sound sources with airflow. To implement this approach, Amiet's method is used to correct the sound propagation path in 3D, which improves the accuracy of the array manifold matrix and decreases the position error of the localized source. Then, the mapping of acoustic correlated sources (MACS) method, which is as a high-resolution sound source localization algorithm, is improved by self-adjusting the constraint parameter at each irritation process to increase convergence speed. A sound source localization experiment using a pair of loud speakers in an anechoic wind tunnel under different flow speeds is conducted. The experiment exhibits the advantage of Amiet-IMACS in localizing a more accurate sound source position compared with implementing IMACS alone in an environment with flow. Moreover, the aerodynamic noise produced by a NASA EPPLER 862 STRUT airfoil model in airflow with a velocity of 80 m/s is localized using the proposed method, which further proves its effectiveness in a flow environment. Finally, the relationship between the source position of this airfoil model and its frequency, along with its generation mechanism, is determined and interpreted.
Harper, Lane; Powell, Jeff; Pijl, Em M
2017-07-31
Given the current opioid crisis around the world, harm reduction agencies are seeking to help people who use drugs to do so more safely. Many harm reduction agencies are exploring techniques to test illicit drugs to identify and, where possible, quantify their constituents allowing their users to make informed decisions. While these technologies have been used for years in Europe (Nightlife Empowerment & Well-being Implementation Project, Drug Checking Service: Good Practice Standards; Trans European Drugs Information (TEDI) Workgroup, Factsheet on Drug Checking in Europe, 2011; European Monitoring Centre for Drugs and Drug Addiction, An Inventory of On-site Pill-Testing Interventions in the EU: Fact Files, 2001), they are only now starting to be utilized in this context in North America. The goal of this paper is to describe the most common methods for testing illicit substances and then, based on this broad, encompassing review, recommend the most appropriate methods for testing at point of care.Based on our review, the best methods for point-of-care drug testing are handheld infrared spectroscopy, Raman spectroscopy, and ion mobility spectrometry; mass spectrometry is the current gold standard in forensic drug analysis. It would be prudent for agencies or clinics that can obtain the funding to contact the companies who produce these devices to discuss possible usage in a harm reduction setting. Lower tech options, such as spot/color tests and immunoassays, are limited in their use but affordable and easy to use.
Siahkouhian, M; Meamarbashi, A
2013-02-01
The aim of the present study was to compare determined heart rate deflection point (HRDP) by the long distance maximum (L.Dmax), short distance maximum (S.Dmax) and plasma lactate measurements as criterion. Fifteen healthy and active male volunteers, aged 20-24, were selected as subjects and performed the exhaustive testing protocol which took place on a calibrated electronically braked cycle ergometer. To determine the HRDP, each subject's data was recorded during the exercise test and analyzed by a designed computer program. Venous blood samples were drawn for the measurement of plasma lactate concentration by a direct method. Downward inflection of HRDP was noticed in all subjects. Comparison of the S.Dmax and L.Dmax methods with the criterion method (lactate method) showed that while HRDP determined by the S.Dmax and lactate methods were not significantly different (167±8.83 vs. 168±8.17 b/min; P=0.86), significant difference emerged between determined HRDP by the L.Dmax and lactate methods (167±8.83 vs. 139.56±6.73 b/min; P£0.001). Bland-Altman plots revealed a good agreement between S.Dmax and lactate methods (95% CI=-5 to +3.6 b/min), while there is no agreement between L.Dmax and lactate method (95% CI=+4.9 to +71.3 b/min). Significant correlation was observed between the criterion and S.Dmax model (r=0.944) whereas there was no significant correlation between the criterion and L.Dmax model (r=0.158). Based on these results, it could be suggested that S.Dmax method is an accurate and reliable alternative to the cumbersome, expensive, and time-consuming lactate method.
DEFF Research Database (Denmark)
McKnight, Ursula S.; Sonne, Anne Thobo; Fjordbøge, Annika Sidelmann
2013-01-01
an increasingly important activity for the hydrogeological investigations of rivers and streams. In cases where groundwater contaminant plumes are discharging to streams, determination of flow paths and groundwater fluxes are essential for evaluating the transport, fate and potential impact of the plume...... by two major polluting point sources, Grindsted factory and Grindsted landfill, representing two of the 43 large-scale contaminated sites in Denmark. Our overall aim was therefore to (i) test the applicability of different methods for mapping groundwater pollution as it enters streams at a complex site...
DEFF Research Database (Denmark)
Sokoler, Leo Emil; Skajaa, Anders; Frison, Gianluca
2013-01-01
In this paper, we present a warm-started homogenous and self-dual interior-point method (IPM) for the linear programs arising in economic model predictive control (MPC) of linear systems. To exploit the structure in the optimization problems, our algorithm utilizes a Riccati iteration procedure...... algorithm in MATLAB and its performance is analyzed based on a smart grid power management case study. Closed loop simulations show that 1) our algorithm is significantly faster than state-of-the-art IPMs based on sparse linear algebra routines, and 2) warm-starting reduces the number of iterations...
DEFF Research Database (Denmark)
Li, Kai; Wei, Min; Xie, Chuan
2017-01-01
In order to control the neutral point voltage of inverter with discontinuous PWM (DPWM), this paper proposed a generalized discontinuous PWM (GDPWM) based neutral point voltage balancing method for three level neutral point clamped (NPC) voltage source inverter (VSI). Firstly, a triangle carrier ...
Aucouturier, Julien; Rance, Mélanie; Meyer, Martine; Isacco, Laurie; Thivel, David; Fellmann, Nicole; Duclos, Martine; Duché, Pascale
2009-01-01
We aimed to examine the interchangeability of techniques used to assess maximal oxygen consumption (VO2max) and maximal aerobic power (MAP) employed to express the maximal fat oxidation point in obese children and adolescents. Rate of fat oxidation were measured in 24 obese subjects (13.0 +/- 2.4 years; Body Mass Index 30.2 +/- 6.3 kg m(-2)) who performed a five 4-min stages submaximal incremental cycling exercise. A second cycling exercise was performed to measure VO2max. Results are those of the 20 children who achieved the criterion of RER (>1.02) to assess the attainment of VO2max. Although correlations between results obtained by different methods were strong, Bland-Altman plots showed little agreement between the maximal fat oxidation point expressed as a percentage of measured VO2max and as % VO2max estimated according to ACSM guidelines (underestimation : -5.9%) or using the predictive equations of Wasserman (-13.9%). Despite a mean underestimation of 1.4% several values were out of the limits of agreement when comparing measured MAP and Theoretical MAP. Estimations of VO2max lead to underestimations of the maximal fat oxidation point.
Method of the Aquatic Environment Image Processing for Determining the Mineral Suspension Parameters
Directory of Open Access Journals (Sweden)
D.A. Antonenkov
2016-10-01
Full Text Available The present article features the developed method to determine the mineral suspension characteristics by obtaining and following processing of the aquatic environment images. This method is capable of maintaining its performance under the conditions of considerable dynamic activity of the water masses. The method feature consists in application of the developed computing algorithm, simultaneous use of morphological filters and histogram methods for image processing, and in a special calibration technique. As a whole it provides a possibility to calculate size and concentration of the particles on the images obtained. The developed technical means permitting to get the environment images of the required quality are briefly described. The algorithm of the developed software operation is represented. The examples of numerical and weight distribution of the particles according to their sizes, and the totals of comparing the results obtained by the standard and developed methods are represented. The developed method makes it possible to obtain the particle size data in the range of 50–1000 μm and also to determine the suspension concentration with ~12 % error. This method can be technically implemented for the instruments intended for in situ measurements using the gauges, allowing obtaining exposure time short values, such as applying the electron-optical converter, which acts as the image intensifier, and the high-speed electronic shutter. The completed method testing in the laboratory makes possible to obtain the results similar in accuracy with the results of the in situ measurements.
Energy Technology Data Exchange (ETDEWEB)
Kelly, J.G.; Vehar, D.W.
1987-12-01
Neutron spectra have been measured by the foil-activation method in 13 different environments in and around the Sandia Pulsed Reactor, the White Sands Missile Range Fast Burst Reactor, and the Sandia Annular Core Research Reactor. The spectra were obtained by using the SANDII code in a manner that was not dependent on the initial trial. This altered technique is better suited for the determination of spectra in environments that are difficult to predict by calculation, and it tends to reveal features that may be biased out by the use of standard trial-dependent methods. For some of the configurations, studies have also been made of how well the solution is determined in each energy region. The experimental methods and the techniques used in the analyses are thoroughly explained. 34 refs., 51 figs., 40 tabs.
Marck, Patricia; Molzahn, Anita; Berry-Hauf, Rhonda; Hutchings, Loretta Gail; Hughes, Susan
2014-01-01
This study used principles and methods of good ecological restoration, including participatory photographic research methods, to explore perceptions of safety and quality in one hemodialysis unit. Using a list of potential safety and quality issues developed during an initial focus group, a practitioner-led photo walkabout was conducted to obtain photographs of the patient care unit and nurses' stories (photo narration) about safety and quality in their environment. Following a process of iterative coding, photos were used to discuss preliminary themes in a photo elicitation focus group with four additional unit staff The major themes identified related to clutter, infection control, unit design, chemicals and air quality, lack of storage space, and health and safety hazards (including wet floors, tripping hazards from hoses, moving furniture/chairs). The visual methods engaged researchers and unit nurses in rich dialogue about safety in this complex environment and provides an ongoing basis for monitoring and enhancing safety.
Directory of Open Access Journals (Sweden)
Vitalii M. Bazurin
2015-05-01
Full Text Available Delphi visual programming environment provides ample opportunities for visual mapping arrays. There are a number of Delphi screen form components, which help you to visualize the array on the form. Processing arrays programs in Delphi environment have their differences from the same programs in Pascal. The article describes these differences. Also, the features of student learning methods for solving problems of array processing using Delphi visual components are highlighted. It has been exposed sequence and logic of the teaching material on arrays processing using TStringGrid and TMemo components.
Directory of Open Access Journals (Sweden)
Md Nabiul Islam Khan
Full Text Available In the Point-Centred Quarter Method (PCQM, the mean distance of the first nearest plants in each quadrant of a number of random sample points is converted to plant density. It is a quick method for plant density estimation. In recent publications the estimator equations of simple PCQM (PCQM1 and higher order ones (PCQM2 and PCQM3, which uses the distance of the second and third nearest plants, respectively show discrepancy. This study attempts to review PCQM estimators in order to find the most accurate equation form. We tested the accuracy of different PCQM equations using Monte Carlo Simulations in simulated (having 'random', 'aggregated' and 'regular' spatial patterns plant populations and empirical ones.PCQM requires at least 50 sample points to ensure a desired level of accuracy. PCQM with a corrected estimator is more accurate than with a previously published estimator. The published PCQM versions (PCQM1, PCQM2 and PCQM3 show significant differences in accuracy of density estimation, i.e. the higher order PCQM provides higher accuracy. However, the corrected PCQM versions show no significant differences among them as tested in various spatial patterns except in plant assemblages with a strong repulsion (plant competition. If N is number of sample points and R is distance, the corrected estimator of PCQM1 is 4(4N - 1/(π ∑ R2 but not 12N/(π ∑ R2, of PCQM2 is 4(8N - 1/(π ∑ R2 but not 28N/(π ∑ R2 and of PCQM3 is 4(12N - 1/(π ∑ R2 but not 44N/(π ∑ R2 as published.If the spatial pattern of a plant association is random, PCQM1 with a corrected equation estimator and over 50 sample points would be sufficient to provide accurate density estimation. PCQM using just the nearest tree in each quadrant is therefore sufficient, which facilitates sampling of trees, particularly in areas with just a few hundred trees per hectare. PCQM3 provides the best density estimations for all types of plant assemblages including the repulsion process
Khan, Md Nabiul Islam; Hijbeek, Renske; Berger, Uta; Koedam, Nico; Grueters, Uwe; Islam, S M Zahirul; Hasan, Md Asadul; Dahdouh-Guebas, Farid
2016-01-01
In the Point-Centred Quarter Method (PCQM), the mean distance of the first nearest plants in each quadrant of a number of random sample points is converted to plant density. It is a quick method for plant density estimation. In recent publications the estimator equations of simple PCQM (PCQM1) and higher order ones (PCQM2 and PCQM3, which uses the distance of the second and third nearest plants, respectively) show discrepancy. This study attempts to review PCQM estimators in order to find the most accurate equation form. We tested the accuracy of different PCQM equations using Monte Carlo Simulations in simulated (having 'random', 'aggregated' and 'regular' spatial patterns) plant populations and empirical ones. PCQM requires at least 50 sample points to ensure a desired level of accuracy. PCQM with a corrected estimator is more accurate than with a previously published estimator. The published PCQM versions (PCQM1, PCQM2 and PCQM3) show significant differences in accuracy of density estimation, i.e. the higher order PCQM provides higher accuracy. However, the corrected PCQM versions show no significant differences among them as tested in various spatial patterns except in plant assemblages with a strong repulsion (plant competition). If N is number of sample points and R is distance, the corrected estimator of PCQM1 is 4(4N - 1)/(π ∑ R2) but not 12N/(π ∑ R2), of PCQM2 is 4(8N - 1)/(π ∑ R2) but not 28N/(π ∑ R2) and of PCQM3 is 4(12N - 1)/(π ∑ R2) but not 44N/(π ∑ R2) as published. If the spatial pattern of a plant association is random, PCQM1 with a corrected equation estimator and over 50 sample points would be sufficient to provide accurate density estimation. PCQM using just the nearest tree in each quadrant is therefore sufficient, which facilitates sampling of trees, particularly in areas with just a few hundred trees per hectare. PCQM3 provides the best density estimations for all types of plant assemblages including the repulsion process
An analytically based numerical method for computing view factors in real urban environments
Lee, Doo-Il; Woo, Ju-Wan; Lee, Sang-Hyun
2018-01-01
A view factor is an important morphological parameter used in parameterizing in-canyon radiative energy exchange process as well as in characterizing local climate over urban environments. For realistic representation of the in-canyon radiative processes, a complete set of view factors at the horizontal and vertical surfaces of urban facets is required. Various analytical and numerical methods have been suggested to determine the view factors for urban environments, but most of the methods provide only sky-view factor at the ground level of a specific location or assume simplified morphology of complex urban environments. In this study, a numerical method that can determine the sky-view factors ( ψ ga and ψ wa ) and wall-view factors ( ψ gw and ψ ww ) at the horizontal and vertical surfaces is presented for application to real urban morphology, which are derived from an analytical formulation of the view factor between two blackbody surfaces of arbitrary geometry. The established numerical method is validated against the analytical sky-view factor estimation for ideal street canyon geometries, showing a consolidate confidence in accuracy with errors of less than 0.2 %. Using a three-dimensional building database, the numerical method is also demonstrated to be applicable in determining the sky-view factors at the horizontal (roofs and roads) and vertical (walls) surfaces in real urban environments. The results suggest that the analytically based numerical method can be used for the radiative process parameterization of urban numerical models as well as for the characterization of local urban climate.
Validation of Experimental whole-body SAR Assessment Method in a Complex Indoor Environment
DEFF Research Database (Denmark)
Bamba, Aliou; Joseph, Wout; Vermeeren, Gunter
2012-01-01
simulations with the Finite-Difference Time-Domain method. Furthermore, the method accounts for the diffuse multipath components (DMC) in the total absorption rate by considering the reverberation time of the investigated room, which describes all the losses in a complex indoor environment. The advantage...... of the proposed method is that it allows discarding the computation burden because it does not use any discretizations. Results show good agreement between measurement and computation at 2.8 GHz, as long as the plane wave assumption is valid, i.e., for high distances from the transmitter. Relative deviations 0...
Directory of Open Access Journals (Sweden)
Jose Ernie C. Lope
2013-12-01
Full Text Available In their 2012 work, Lope, Roque, and Tahara considered singular nonlinear partial differential equations of the form tut = F(t; x; u; ux, where the function F is assumed to be continuous in t and holomorphic in the other variables. They have shown that under some growth conditions on the coefficients of the partial Taylor expansion of F as t 0, the equation has a unique solution u(t; x with the same growth order as that of F(t; x; 0; 0. Koike considered systems of partial differential equations using the Banach fixed point theorem and the iterative method of Nishida and Nirenberg. In this paper, we prove the result obtained by Lope and others using the method of Koike, thereby avoiding the repetitive step of differentiating a recursive equation with respect to x as was done by the aforementioned authors.
Directory of Open Access Journals (Sweden)
David I Flores
Full Text Available The automatic identification of catalytic residues still remains an important challenge in structural bioinformatics. Sequence-based methods are good alternatives when the query shares a high percentage of identity with a well-annotated enzyme. However, when the homology is not apparent, which occurs with many structures from the structural genome initiative, structural information should be exploited. A local structural comparison is preferred to a global structural comparison when predicting functional residues. CMASA is a recently proposed method for predicting catalytic residues based on a local structure comparison. The method achieves high accuracy and a high value for the Matthews correlation coefficient. However, point substitutions or a lack of relevant data strongly affect the performance of the method. In the present study, we propose a simple extension to the CMASA method to overcome this difficulty. Extensive computational experiments are shown as proof of concept instances, as well as for a few real cases. The results show that the extension performs well when the catalytic site contains mutated residues or when some residues are missing. The proposed modification could correctly predict the catalytic residues of a mutant thymidylate synthase, 1EVF. It also successfully predicted the catalytic residues for 3HRC despite the lack of information for a relevant side chain atom in the PDB file.
Directory of Open Access Journals (Sweden)
T. R. Jordana
2016-06-01
Full Text Available Documentation of the three-dimensional (3D cultural landscape has traditionally been conducted during site visits using conventional photographs, standard ground surveys and manual measurements. In recent years, there have been rapid developments in technologies that produce highly accurate 3D point clouds, including aerial LiDAR, terrestrial laser scanning, and photogrammetric data reduction from unmanned aerial systems (UAS images and hand held photographs using Structure from Motion (SfM methods. These 3D point clouds can be precisely scaled and used to conduct measurements of features even after the site visit has ended. As a consequence, it is becoming increasingly possible to collect non-destructive data for a wide variety of cultural site features, including landscapes, buildings, vegetation, artefacts and gardens. As part of a project for the U.S. National Park Service, a variety of data sets have been collected for the Wormsloe State Historic Site, near Savannah, Georgia, USA. In an effort to demonstrate the utility and versatility of these methods at a range of scales, comparisons of the features mapped with different techniques will be discussed with regards to accuracy, data set completeness, cost and ease-of-use.
Sato, Hiroyuki; Hirakawa, Akihiro; Hamada, Chikuma
2016-10-15
The paradigm of oncology drug development is expanding from developing cytotoxic agents to developing biological or molecularly targeted agents (MTAs). Although it is common for the efficacy and toxicity of cytotoxic agents to increase monotonically with dose escalation, the efficacy of some MTAs may exhibit non-monotonic patterns in their dose-efficacy relationships. Many adaptive dose-finding approaches in the available literature account for the non-monotonic dose-efficacy behavior by including additional model parameters. In this study, we propose a novel adaptive dose-finding approach based on binary efficacy and toxicity outcomes in phase I trials for monotherapy using an MTA. We develop a dose-efficacy model, the parameters of which are allowed to change in the vicinity of the change point of the dose level, in order to consider the non-monotonic pattern of the dose-efficacy relationship. The change point is obtained as the dose that maximizes the log-likelihood of the assumed dose-efficacy and dose-toxicity models. The dose-finding algorithm is based on the weighted Mahalanobis distance, calculated using the posterior probabilities of efficacy and toxicity outcomes. We compare the operating characteristics between the proposed and existing methods and examine the sensitivity of the proposed method by simulation studies under various scenarios. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
Validation of experimental whole-body SAR assessment method in a complex indoor environment.
Bamba, Aliou; Joseph, Wout; Vermeeren, Gunter; Tanghe, Emmeric; Gaillot, Davy Paul; Andersen, Jørgen B; Nielsen, Jesper Ødum; Lienard, Martine; Martens, Luc
2013-02-01
Experimentally assessing the whole-body specific absorption rate (SAR(wb) ) in a complex indoor environment is very challenging. An experimental method based on room electromagnetics theory (accounting only the line-of-sight as specular path) is validated using numerical simulations with the finite-difference time-domain method. Furthermore, the method accounts for diffuse multipath components (DMC) in the total absorption rate by considering the reverberation time of the investigated room, which describes all the losses in a complex indoor environment. The advantage of the proposed method is that it allows discarding the computational burden because it does not use any discretizations. Results show good agreement between measurement and computation at 2.8 GHz, as long as the plane wave assumption is valid, that is, at large distances from the transmitter. Relative deviations of 0.71% and 4% have been obtained for far-field scenarios, and 77.5% for the near field-scenario. The contribution of the DMC in the total absorption rate is also quantified here, which has never been investigated before. It is found that the DMC may represent an important part of the total absorption rate; its contribution may reach up to 90% for certain scenarios in an indoor environment. Copyright © 2012 Wiley Periodicals, Inc.
Advanced Methods for Robot-Environment Interaction towards an Industrial Robot Aware of Its Volume
Directory of Open Access Journals (Sweden)
Fabrizio Romanelli
2011-01-01
Full Text Available A fundamental aspect of robot-environment interaction in industrial environments is given by the capability of the control system to model the structured and unstructured environment features. Industrial robots have to perform complex tasks at high speeds and have to satisfy hard cycle times while maintaining the operations extremely precise. The capability of the robot to perceive the presence of environmental objects is something still missing in the real industrial context. Although anthropomorphic robot producers have faced problems related to the interaction between robot and its environment, there is not an exhaustive study on the capabilities of the robot being aware of its volume and on the tools eventually mounted on its flange. In this paper, a solution to model the environment of the robot in order to make it capable of perceiving and avoiding collisions with the objects in its surroundings is shown. Furthermore, the model will be extended to take also into account the volume of the robot tool in order to extend the perception capabilities of the entire system. Testing results will be showed in order to validate the method, proving that the system is able to cope with complex real surroundings.
TECHNOLOGY AND METHODS OF CREATING WEB-BASED LEARNING ENVIRONMENT FOR HUMANITIES EDUCATION
Directory of Open Access Journals (Sweden)
Вилена Александровна Брылева
2013-04-01
Full Text Available The purpose of the article is to describe the structure of web environment in frames of new educational paradigm in teaching Humanities, to clarify the scientifical and practical importance of using Web 2.0 technologies in higher education. This problem is of great importance due to the necessity of integration of modern IT into educational environment which needs to develop new methods of teaching.The model of educational environment presented in the article is based on the integration of LMS Moodle and PLE Mahara. The authors define the functional modules and means of the environment, describe its didactic qualities, organization requirements and usage advantages. The methodic model of teaching English worked out by the authors supposes step-by-step formation of professional as well as informational competence necessary to any modern specialist. The effectiveness of the model is verified by experiental learning, based on individual and group forms of work on educational site of Institute of Philology and Intercultural Communication of Volgograd State university.DOI: http://dx.doi.org/10.12731/2218-7405-2013-2-8
A three-point backward finite-difference method has been derived for a system of mixed hyperbolic¯¯parabolic (convection¯¯diffusion) partial differential equations (mixed PDEs). The method resorts to the three-point backward differenci...
Directory of Open Access Journals (Sweden)
Yidong Xu
2017-10-01
Full Text Available A novel localization method based on multiple signal classification (MUSIC algorithm is proposed for positioning an electric dipole source in a confined underwater environment by using electric dipole-receiving antenna array. In this method, the boundary element method (BEM is introduced to analyze the boundary of the confined region by use of a matrix equation. The voltage of each dipole pair is used as spatial-temporal localization data, and it does not need to obtain the field component in each direction compared with the conventional fields based localization method, which can be easily implemented in practical engineering applications. Then, a global-multiple region-conjugate gradient (CG hybrid search method is used to reduce the computation burden and to improve the operation speed. Two localization simulation models and a physical experiment are conducted. Both the simulation results and physical experiment result provide accurate positioning performance, with the help to verify the effectiveness of the proposed localization method in underwater environments.
Mitchell, Kendra R; Takacs-Vesbach, Cristina D
2008-10-01
The widespread use of molecular techniques in studying microbial communities has greatly enhanced our understanding of microbial diversity and function in the natural environment and contributed to an explosion of novel commercially viable enzymes. One of the most promising environments for detecting novel processes, enzymes, and microbial diversity is hot springs. We examined potential biases introduced by DNA preservation and extraction methods by comparing the quality, quantity, and diversity of environmental DNA samples preserved and extracted by commonly used methods. We included samples from sites representing the spectrum of environmental conditions that are found in Yellowstone National Park thermal features. Samples preserved in a non-toxic sucrose lysis buffer (SLB), along with a variation of a standard DNA extraction method using CTAB resulted in higher quality and quantity DNA than the other preservation and extraction methods tested here. Richness determined using DGGE revealed that there was some variation within replicates of a sample, but no statistical difference among the methods. However, the sucrose lysis buffer preserved samples extracted by the CTAB method were 15-43% more diverse than the other treatments.
Eskandari, Habibollah
2006-02-01
H-point standard addition method (HPSAM) has been applied for simultaneous determination of palladium and cobalt in trace levels, using disodium 1-nitroso-2-naphthol-3, 6-disulphonate (nitroso-R salt) as a selective chromogenic reagent. Palladium and cobalt in the neutral pHs form red color complexes with nitroso-R in aqueous solutions and making spectrophotometric monitoring possible. Simultaneous determination of palladium and cobalt were performed by HPSAM--first derivative spectrophotometry. First derivative signals at the two pairs of wavelengths, 523 and 589 nm or 513 and 554 nm were monitored with the addition of standard solutions of palladium or cobalt, respectively. The method is able to accurately determine palladium/cobalt ratio 1:10 to 15:1 (wt/wt). Accuracy and reproducibility of the determination method on the various amounts of palladium and cobalt known were evaluated in their binary mixtures. To investigate selectivity of the method and to ensure that no serious interferences were observed the effects of diverse ions on the determination of palladium and cobalt were also studied. The recommended procedure was successfully applied to real and synthetic cobalt or palladium alloys, B-complex ampoules, a palladium-charcoal mixture and real water matrices.
Roberts, T Edward; Bridge, Thomas C; Caley, M Julian; Baird, Andrew H
2016-01-01
Understanding patterns in species richness and diversity over environmental gradients (such as altitude and depth) is an enduring component of ecology. As most biological communities feature few common and many rare species, quantifying the presence and abundance of rare species is a crucial requirement for analysis of these patterns. Coral reefs present specific challenges for data collection, with limitations on time and site accessibility making efficiency crucial. Many commonly used methods, such as line intercept transects (LIT), are poorly suited to questions requiring the detection of rare events or species. Here, an alternative method for surveying reef-building corals is presented; the point count transect (PCT). The PCT consists of a count of coral colonies at a series of sample stations, located at regular intervals along a transect. In contrast the LIT records the proportion of each species occurring under a transect tape of a given length. The same site was surveyed using PCT and LIT to compare species richness estimates between the methods. The total number of species increased faster per individual sampled and unit of time invested using PCT. Furthermore, 41 of the 44 additional species recorded by the PCT occurred ≤ 3 times, demonstrating the increased capacity of PCT to detect rare species. PCT provides a more accurate estimate of local-scale species richness than the LIT, and is an efficient alternative method for surveying reef corals to address questions associated with alpha-diversity, and rare or incidental events.
Directory of Open Access Journals (Sweden)
Mohammad Hosein Soruraddin
2011-01-01
Full Text Available A simple, rapid, and sensitive spectrophotometric method for the determination of trace amounts of selenium (IV was described. In this method, all selenium spices reduced to selenium (IV using 6 M HCl. Cloud point extraction was applied as a preconcentration method for spectrophotometric determination of selenium (IV in aqueous solution. The proposed method is based on the complexation of Selenium (IV with dithizone at pH < 1 in micellar medium (Triton X-100. After complexation with dithizone, the analyte was quantitatively extracted to the surfactant-rich phase by centrifugation and diluted to 5 mL with methanol. Since the absorption maxima of the complex (424 nm and dithizone (434 nm overlap, hence, the corrected absorbance, Acorr, was used to overcome the problem. With regard to the preconcentration, the tested parameters were the pH of the extraction, the concentration of the surfactant, the concentration of dithizone, and equilibration temperature and time. The detection limit is 4.4 ng mL-1; the relative standard deviation for six replicate measurements is 2.18% for 50 ng mL-1 of selenium. The procedure was applied successfully to the determination of selenium in two kinds of pharmaceutical samples.
Screening Methods for Agent Compatibility with People, Materials, and the Environment
1999-04-01
the environment was held at the National Institute of Standards and Technology on November 14 and 15, 1997, which was attended by approximately 40 representatives from government, academia, and industry. The participants were asked to assess currently used screening methods for each of the following properties of candidate fire suppressants: environmental impact (including ozone depletion potential, global warming potential, and atmospheric lifetime); materials compatibility (including long-term storage stability, the interaction of the agent with metals, gaskets and
A Method for Evaluating Information Security Governance (ISG) Components in Banking Environment
Ula, M.; Ula, M.; Fuadi, W.
2017-02-01
As modern banking increasingly relies on the internet and computer technologies to operate their businesses and market interactions, the threats and security breaches have highly increased in recent years. Insider and outsider attacks have caused global businesses lost trillions of Dollars a year. Therefore, that is a need for a proper framework to govern the information security in the banking system. The aim of this research is to propose and design an enhanced method to evaluate information security governance (ISG) implementation in banking environment. This research examines and compares the elements from the commonly used information security governance frameworks, standards and best practices. Their strength and weakness are considered in its approaches. The initial framework for governing the information security in banking system was constructed from document review. The framework was categorized into three levels which are Governance level, Managerial level, and technical level. The study further conducts an online survey for banking security professionals to get their professional judgment about the ISG most critical components and the importance for each ISG component that should be implemented in banking environment. Data from the survey was used to construct a mathematical model for ISG evaluation, component importance data used as weighting coefficient for the related component in the mathematical model. The research further develops a method for evaluating ISG implementation in banking based on the mathematical model. The proposed method was tested through real bank case study in an Indonesian local bank. The study evidently proves that the proposed method has sufficient coverage of ISG in banking environment and effectively evaluates the ISG implementation in banking environment.
Advanced Algorithms and Automation Tools for Discrete Ordinates Methods in Parallel Environments
Energy Technology Data Exchange (ETDEWEB)
Alireza Haghighat
2003-05-07
This final report discusses major accomplishments of a 3-year project under the DOE's NEER Program. The project has developed innovative and automated algorithms, codes, and tools for solving the discrete ordinates particle transport method efficiently in parallel environments. Using a number of benchmark and real-life problems, the performance and accuracy of the new algorithms have been measured and analyzed.
Energy Technology Data Exchange (ETDEWEB)
Choi, Jang-Hwan, E-mail: jhchoi21@stanford.edu [Department of Radiology, Stanford University, Stanford, California 94305 and Department of Mechanical Engineering, Stanford University, Stanford, California 94305 (United States); Constantin, Dragos [Microwave Physics R& E, Varian Medical Systems, Palo Alto, California 94304 (United States); Ganguly, Arundhuti; Girard, Erin; Fahrig, Rebecca [Department of Radiology, Stanford University, Stanford, California 94305 (United States); Morin, Richard L. [Mayo Clinic Jacksonville, Jacksonville, Florida 32224 (United States); Dixon, Robert L. [Department of Radiology, Wake Forest University, Winston-Salem, North Carolina 27157 (United States)
2015-08-15
Purpose: To propose new dose point measurement-based metrics to characterize the dose distributions and the mean dose from a single partial rotation of an automatic exposure control-enabled, C-arm-based, wide cone angle computed tomography system over a stationary, large, body-shaped phantom. Methods: A small 0.6 cm{sup 3} ion chamber (IC) was used to measure the radiation dose in an elliptical body-shaped phantom made of tissue-equivalent material. The IC was placed at 23 well-distributed holes in the central and peripheral regions of the phantom and dose was recorded for six acquisition protocols with different combinations of minimum kVp (109 and 125 kVp) and z-collimator aperture (full: 22.2 cm; medium: 14.0 cm; small: 8.4 cm). Monte Carlo (MC) simulations were carried out to generate complete 2D dose distributions in the central plane (z = 0). The MC model was validated at the 23 dose points against IC experimental data. The planar dose distributions were then estimated using subsets of the point dose measurements using two proposed methods: (1) the proximity-based weighting method (method 1) and (2) the dose point surface fitting method (method 2). Twenty-eight different dose point distributions with six different point number cases (4, 5, 6, 7, 14, and 23 dose points) were evaluated to determine the optimal number of dose points and their placement in the phantom. The performances of the methods were determined by comparing their results with those of the validated MC simulations. The performances of the methods in the presence of measurement uncertainties were evaluated. Results: The 5-, 6-, and 7-point cases had differences below 2%, ranging from 1.0% to 1.7% for both methods, which is a performance comparable to that of the methods with a relatively large number of points, i.e., the 14- and 23-point cases. However, with the 4-point case, the performances of the two methods decreased sharply. Among the 4-, 5-, 6-, and 7-point cases, the 7-point case (1
Evaluation of two autoinducer-2 quantification methods for application in marine environments
Wang, Tian-Nyu
2018-02-11
This study evaluated two methods, namely high performance liquid chromatography with fluorescence detection (HPLC-FLD) and Vibrio harveyi BB170 bioassay, for autoinducer-2 (AI-2) quantification in marine samples. Using both methods, the study also investigated the stability of AI-2 in varying pH, temperature and media, as well as quantified the amount of AI-2 signals in marine samples.HPLC-FLD method showed a higher level of reproducibility and precision compared to V. harveyi BB170 bioassay. Alkaline pH > 8 and high temperature (> 37°C) increased the instability of AI-2. The AI-2 concentrations in seawater were low, ca. 3.2-27.6 pmol l-1 whereas 8- week old marine biofilm grew on an 18.8 cm2 substratum accumulated ca. 0.207 nmol of AI-2.Both methods have pros and cons for AI-2 quantification in marine samples. Regardless, both methods reported a ubiquitous presence of AI-2 in both planktonic and biomass fractions of seawater, as well as in marine biofilm.In this study, AI-2 signals were for the first time enumerated in marine samples to reveal the ubiquitous presence of AI-2 in this environment. The findings suggest a possible role of AI-2 in biofilm formation in marine environment, and the contribution of AI-2 in biofilm-associated problems such as biofouling and biocorrosion. This article is protected by copyright. All rights reserved.
Roma, E; Bond, T; Jeffrey, P
2014-09-01
Many scientific studies have suggested that point-of-use water treatment can improve water quality and reduce the risk of infectious diseases. Despite the ease of use and relatively low cost of such methods, experience shows the potential benefits derived from provision of such systems depend on recipients' acceptance of the technology and its sustained use. To date, few contributions have addressed the problem of user experience in the post-implementation phase. This can diagnose challenges, which undermine system longevity and its sustained use. A qualitative evaluation of two household water treatment systems, solar disinfection (SODIS) and chlorine tablets (Aquatabs), in three villages was conducted by using a diagnostic tool focusing on technology performance and experience. Cross-sectional surveys and in-depth interviews were used to investigate perceptions of involved stakeholders (users, implementers and local government). Results prove that economic and functional factors were significant in using SODIS, whilst perceptions of economic, taste and odour components were important in Aquatabs use. Conclusions relate to closing the gap between factors that technology implementers and users perceive as key to the sustained deployment of point-of-use disinfection technologies.
Hokama, Y; Wachi, K M; Shiraki, A; Goo, C; Ebesu, J S
2001-02-01
The biological assessments of the flora and fauna in the near-shore ocean environment, specifically Barbers Point Harbor (BPH), demonstrate the usefulness of these biological analyses for evaluation of the changes occurring following man-made excavation for expansion of the harbor. The study included identification and enumeration of macroalgae and dinoflagellates and analyses of herbivores and carnivores in four areas within the perimeter of the harbor and the north and south entrances into the harbor. Numbers of macroalgae varied between 1994 and 1999 surveys, with significant decrease in numbers in stations C, D and E. Stations A and B were similar between 1994 and 1999 with a slight increase in 1999. The significant differences were shown with the appearance of Gambierdiscus toxicus (G toxicus) in 1999 among the algae in stations A and B. Assessment of herbivores and carnivores with the immunological membrane immunobead assay using monoclonal antibody to ciguatoxin and related polyethers demonstrated an increase in fish toxicity among the herbivore from 1994-1999 (22% increase) with a decrease (22%) in non-toxic fish. This was also demonstrated in the carnivores, but to a lesser degree. It is suggested that the biological analyses of the flora and the fauna of the near-shore ocean environment are appropriate to assess the changes that occur from natural and man-made alterations.
A hybrid multiple attribute decision making method for solving problems of industrial environment
Directory of Open Access Journals (Sweden)
Dinesh Singh
2011-01-01
Full Text Available The selection of appropriate alternative in the industrial environment is an important but, at the same time, a complex and difficult problem because of the availability of a wide range of alternatives and similarity among them. Therefore, there is a need for simple, systematic, and logical methods or mathematical tools to guide decision makers in considering a number of selection attributes and their interrelations. In this paper, a hybrid decision making method of graph theory and matrix approach (GTMA and analytical hierarchy process (AHP is proposed. Three examples are presented to illustrate the potential of the proposed GTMA-AHP method and the results are compared with the results obtained using other decision making methods.
Nabwey, Hossam A.; Boumazgour, Mohamed; Rashad, A. M.
2017-07-01
The group method analysis is applied to study the steady mixed convection stagnation-point flow of a non-Newtonian nanofluid towards a vertical stretching surface. The model utilized for the nanofluid incorporates the Brownian motion and thermophoresis effects. Applying the one-parameter transformation group which reduces the number of independent variables by one and thus, the system of governing partial differential equations has been converted to a set of nonlinear ordinary differential equations, and these equations are then computed numerically using the implicit finite-difference scheme. Comparison with previously published studies is executed and the results are found to be in excellent agreement. Results for the velocity, temperature, and the nanoparticle volume fraction profiles as well as the local skin-friction coefficient and local Nusselt number are presented in graphical and tabular forms, and discussed for different values of the governing parameters to show interesting features of the solutions.
Energy Technology Data Exchange (ETDEWEB)
Reimann, Rene; Haack, Christian; Leuermann, Martin; Raedel, Leif; Schoenen, Sebastian; Schimp, Michael; Wiebusch, Christopher [III. Physikalisches Institut, RWTH Aachen (Germany); Collaboration: IceCube-Collaboration
2015-07-01
IceCube, a cubic-kilometer sized neutrino detector at the geographical South Pole, has recently measured a flux of high-energy astrophysical neutrinos. Although this flux has now been observed in multiple analyses, no point sources or source classes could be identified yet. Standard point source searches test many points in the sky for a point source of astrophysical neutrinos individually and therefore produce many trials. Our approach is to additionally use the measured diffuse spectrum to constrain the number of possible point sources and their properties. Initial studies of the method performance are shown.
Yin, Gaohong
2016-05-01
Since the failure of the Scan Line Corrector (SLC) instrument on Landsat 7, observable gaps occur in the acquired Landsat 7 imagery, impacting the spatial continuity of observed imagery. Due to the highly geometric and radiometric accuracy provided by Landsat 7, a number of approaches have been proposed to fill the gaps. However, all proposed approaches have evident constraints for universal application. The main issues in gap-filling are an inability to describe the continuity features such as meandering streams or roads, or maintaining the shape of small objects when filling gaps in heterogeneous areas. The aim of the study is to validate the feasibility of using the Direct Sampling multiple-point geostatistical method, which has been shown to reconstruct complicated geological structures satisfactorily, to fill Landsat 7 gaps. The Direct Sampling method uses a conditional stochastic resampling of known locations within a target image to fill gaps and can generate multiple reconstructions for one simulation case. The Direct Sampling method was examined across a range of land cover types including deserts, sparse rural areas, dense farmlands, urban areas, braided rivers and coastal areas to demonstrate its capacity to recover gaps accurately for various land cover types. The prediction accuracy of the Direct Sampling method was also compared with other gap-filling approaches, which have been previously demonstrated to offer satisfactory results, under both homogeneous area and heterogeneous area situations. Studies have shown that the Direct Sampling method provides sufficiently accurate prediction results for a variety of land cover types from homogeneous areas to heterogeneous land cover types. Likewise, it exhibits superior performances when used to fill gaps in heterogeneous land cover types without input image or with an input image that is temporally far from the target image in comparison with other gap-filling approaches.
Selbig, William R.; Bannerman, Roger T.
2011-01-01
The U.S Geological Survey, in cooperation with the Wisconsin Department of Natural Resources (WDNR) and in collaboration with the Root River Municipal Stormwater Permit Group monitored eight urban source areas representing six types of source areas in or near Madison, Wis. in an effort to improve characterization of particle-size distributions in urban stormwater by use of fixed-point sample collection methods. The types of source areas were parking lot, feeder street, collector street, arterial street, rooftop, and mixed use. This information can then be used by environmental managers and engineers when selecting the most appropriate control devices for the removal of solids from urban stormwater. Mixed-use and parking-lot study areas had the lowest median particle sizes (42 and 54 (u or mu)m, respectively), followed by the collector street study area (70 (u or mu)m). Both arterial street and institutional roof study areas had similar median particle sizes of approximately 95 (u or mu)m. Finally, the feeder street study area showed the largest median particle size of nearly 200 (u or mu)m. Median particle sizes measured as part of this study were somewhat comparable to those reported in previous studies from similar source areas. The majority of particle mass in four out of six source areas was silt and clay particles that are less than 32 (u or mu)m in size. Distributions of particles ranging from 500 (u or mu)m were highly variable both within and between source areas. Results of this study suggest substantial variability in data can inhibit the development of a single particle-size distribution that is representative of stormwater runoff generated from a single source area or land use. Continued development of improved sample collection methods, such as the depth-integrated sample arm, may reduce variability in particle-size distributions by mitigating the effect of sediment bias inherent with a fixed-point sampler.
DEFF Research Database (Denmark)
Ungar, David; Ernst, Erik
2007-01-01
Point Argument: "Dynamic Languages (in Reactive Environments) Unleash Creativity," by David Ungar. For the sake of creativity, the profession needs to concentrate more on inventing new and better dynamic languages and environments and less on improving static languages. Counterpoint Argument...