Nedelcu, O.; Salisteanu, C. I.; Popa, F.; Salisteanu, B.; Oprescu, C. V.; Dogaru, V.
2017-01-01
The complexity of the electrical circuits, or of the equivalent thermal circuits, to be analyzed and solved must be taken into account when choosing a solution method, since the choice of method determines the amount of calculation required. Modeling the heating and ventilation systems of electrical machines results in complex equivalent electrical circuits of large dimensions, which requires the most efficient solution methods. The purpose of the thermal calculation of electrical machines is to establish the heating, i.e. the temperature rises (over-temperatures) of parts of the machine above the ambient temperature, in a given operating mode of the machine. The paper presents the application of the modified nodal analysis method to the modeling of the thermal circuit of an asynchronous machine.
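The modified nodal analysis mentioned above can be illustrated with a minimal sketch. For a conductance-only thermal network with no temperature sources, MNA reduces to ordinary nodal analysis: stamp each thermal conductance into a nodal matrix and solve for the over-temperatures. The three-node machine model and all numbers below are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical three-node thermal network of an electrical machine
# (0: stator winding, 1: stator core, 2: frame), with the ambient as
# the reference node. Conductances in W/K, injected losses in W.
G_wc = 5.0    # winding <-> core
G_cf = 8.0    # core <-> frame
G_fa = 12.0   # frame <-> ambient (reference node)

P = np.array([300.0, 150.0, 0.0])   # loss injected at each node, W

# Stamp each internal branch into the nodal conductance matrix.
G = np.zeros((3, 3))
for i, j, g in [(0, 1, G_wc), (1, 2, G_cf)]:
    G[i, i] += g
    G[j, j] += g
    G[i, j] -= g
    G[j, i] -= g
# A branch to the reference node stamps only the diagonal.
G[2, 2] += G_fa

T = np.linalg.solve(G, P)   # over-temperatures above ambient, K
```

Since all loss (450 W) must leave through the frame-to-ambient branch, the frame rise is 450/12 = 37.5 K, which the solve reproduces.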
Mugica R, A.; Valle G, E. del [IPN, ESFM, 07738 Mexico D.F. (Mexico)]. e-mail: mugica@esfm.ipn.mx
2003-07-01
Nowadays, the numerical solution of the diffusion equation by means of algorithms and computer programs is quite costly because of the great number of routines and calculations that must be carried out; this directly affects the execution times of these programs, so results are obtained in relatively long times. This work shows the application of a method that accelerates the convergence of the classic power method and notably reduces the number of iterations needed to obtain reliable results, which means that computing times are greatly reduced. This method, known in the literature as the Wielandt method, has been incorporated into a computer program based on the discretization of the neutron diffusion equations in slab geometry and steady state by polynomial nodal methods. In this work the neutron diffusion equations are written for several energy groups and discretized by means of the so-called physical nodal methods, the quadratic case being illustrated in particular. A model problem widely described in the literature is solved with the physical nodal schemes of degree 1, 2, 3 and 4 in three different ways: a) with the classic power method, b) with the power method with Wielandt acceleration, and c) with the power method with modified Wielandt acceleration. Results are reported for the model problem as well as for two additional problems known as benchmark problems. This acceleration method can also be implemented for geometries other than the one proposed in this work, and its application can be extended to problems in 2 or 3 dimensions. (Author)
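The Wielandt acceleration described above can be sketched in a few lines: instead of iterating with the operator A itself, one iterates with the shifted inverse (A - shift*I)^-1, whose mode separation is much larger when the shift is close to (but not equal to) the sought eigenvalue. This is a generic sketch for a small dense matrix, not the paper's nodal diffusion implementation.

```python
import numpy as np

def power_method(A, tol=1e-10, maxit=10000):
    # Classic power method: dominant eigenvalue via repeated application of A.
    x = np.ones(A.shape[0])
    lam = 0.0
    for k in range(1, maxit + 1):
        y = A @ x
        lam_new = x @ y / (x @ x)        # Rayleigh-quotient estimate
        x = y / np.linalg.norm(y)
        if abs(lam_new - lam) < tol:
            return lam_new, k
        lam = lam_new
    return lam, maxit

def wielandt_power_method(A, shift, tol=1e-10, maxit=10000):
    # Wielandt (shifted inverse) iteration: apply (A - shift*I)^-1 instead;
    # the eigenvalue of A nearest the shift is recovered as shift + 1/mu.
    M = A - shift * np.eye(A.shape[0])
    x = np.ones(A.shape[0])
    lam = 0.0
    for k in range(1, maxit + 1):
        y = np.linalg.solve(M, x)
        mu = x @ y / (x @ x)
        x = y / np.linalg.norm(y)
        lam_new = shift + 1.0 / mu
        if abs(lam_new - lam) < tol:
            return lam_new, k
        lam = lam_new
    return lam, maxit

# Dominant eigenvalue 2.0 with a close subdominant 1.9: slow for the
# plain power method, much faster once the spectrum is shifted.
A = np.diag([2.0, 1.9, 1.0])
lam_p, it_p = power_method(A)
lam_w, it_w = wielandt_power_method(A, shift=2.1)
```

With the dominance ratio 1.9/2.0 = 0.95 the plain power method needs a couple of hundred iterations here, while after shifting the effective ratio drops to 0.5 and convergence takes only a few tens of iterations.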
New nodal methods for fluid flow equations
Michael, Edward-Pierre Edward
Several new highly accurate and highly efficient computational methods, called nodal integral methods (NIMs), for solving steady-state and time-dependent fluid flow equations have been developed. First, a new third order nodal integral method for solving the linear, two-dimensional, steady-state, convection-diffusion equation was developed without introducing Legendre moments of the dependent variable higher than the zeroth moment. Numerical comparisons of the new method with the second order NIM, the upwind difference scheme (UWDS) and the locally exact consistent upwind scheme of second order (LECUSSO) showed that, in the important 1% error range, the new method is more efficient than the UWDS and the LECUSSO scheme, but less efficient than the second order NIM. Also, two new methods for solving the generic, two-dimensional, time-dependent, convection-diffusion equation were developed. One is a full space-time NIM in which both the spatial and temporal operators are discretized using the nodal integral approach. The other is a hybrid finite-difference/NIM method in which the temporal operator is discretized using a backward finite-difference approximation, and the spatial operator is discretized using the nodal integral approach. It was found, as expected, that the full space-time NIM is second order in both space and time, while the hybrid finite-difference/NIM is second order in space but only first order in time. Finally, two new methods for solving the conservation of mass and the Navier-Stokes equations for incompressible fluid flow were developed. One is for the steady-state mass and Navier-Stokes equations while the other solves the time-dependent equations. The spatial stencils that result from these new formulations for the mass and the Navier-Stokes equations are similar to those obtained by traditional staggered-grid finite-difference methods. However, the new methods use second order approximations for both the velocities and the pressures.
Nodal methods in numerical reactor calculations
Hennart, J.P. [UNAM, IIMAS, A.P. 20-726, 01000 Mexico D.F. (Mexico)]. e-mail: jean_hennart@hotmail.com; Valle, E. del [National Polytechnic Institute, School of Physics and Mathematics, Department of Nuclear Engineering, Mexico, D.F. (Mexico)
2004-07-01
The present work describes the antecedents, developments and applications started in 1972 with Prof. Hennart, who was invited to be part of the staff of the Nuclear Engineering Department at the School of Physics and Mathematics of the National Polytechnic Institute. From that time up to 1981, several master's theses based on classical finite element methods were developed, with applications in point kinetics and in the steady-state as well as time-dependent multigroup diffusion equations. After this period the emphasis moved to nodal finite elements in 1, 2 and 3D cartesian geometries. All the theses were devoted to the numerical solution of the neutron multigroup diffusion and transport equations, a few of them including time dependence, most of them related to steady-state diffusion equations. The main contributions were as follows: high order nodal schemes for the primal and mixed forms of the diffusion equations, block-centered finite-difference methods, post-processing, composite nodal finite elements for hexagons, and weakly and strongly discontinuous schemes for the transport equation. Some of these are now being used by several researchers involved in nuclear fuel management. (Author)
Adaptive Nodal Transport Methods for Reactor Transient Analysis
Thomas Downar; E. Lewis
2005-08-31
The objective was to develop methods for adaptively treating the angular, spatial, and time dependence of the neutron flux in reactor transient analysis. These methods were demonstrated in the DOE transport nodal code VARIANT and in the US NRC spatial kinetics code PARCS.
A nonlinear analytic function expansion nodal method for transient calculations
Joo, Han Gyn; Park, Sang Yoon; Cho, Byung Oh; Zee, Sung Quun [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)
1998-12-31
The nonlinear analytic function expansion nodal (AFEN) method is applied to the solution of the time-dependent neutron diffusion equation. Since the AFEN method requires both the particular solution and the homogeneous solution to the transient fixed source problem, the derivation of the solution method is focused on finding the particular solution efficiently. To avoid complicated particular solutions, the source distribution is approximated by quadratic polynomials and the transient source is constructed such that the error due to the quadratic approximation is minimized. In addition, this paper presents a new two-node solution scheme that is derived by imposing the constraint of current continuity at the interface corner points. The method is verified through a series of applications to the NEACRP PWR rod ejection benchmark problems. 6 refs., 2 figs., 1 tab. (Author)
An integral nodal variational method for multigroup criticality calculations
Lewis, E.E. [Northwestern Univ., Evanston, IL (United States). Dept. of Mechanical Engineering]. E-mail: e-lewis@northwestern.edu; Smith, M.A.; Palmiotti, G. [Argonne National Lab., IL (United States)]. E-mail: masmith@ra.anl.gov; gpalmiotti@ra.anl.gov; Tsoulfanidis, N. [Missouri Univ., Rolla, MO (United States). Dept. of Nuclear Engineering]. E-mail: tsoul@umr.edu
2003-07-01
An integral formulation of the variational nodal method is presented and applied to a series of benchmark criticality problems. The method combines an integral transport treatment of the even-parity flux within the spatial node with an odd-parity spherical harmonics expansion of the Lagrange multipliers at the node interfaces. The response matrices that result from this formulation are compatible with those in the VARIANT code at Argonne National Laboratory. Either homogeneous or heterogeneous nodes may be employed. In general, for calculations requiring higher-order angular approximations, the integral method yields solutions with comparable accuracy while requiring substantially less CPU time and memory than the standard spherical harmonics expansion using the same spatial approximations. (author)
A transient, quadratic nodal method for triangular-Z geometry
DeLorey, T.F.
1993-06-01
Many systematically-derived nodal methods have been developed for Cartesian geometry due to the extensive interest in Light Water Reactors. These methods typically model the transverse-integrated flux as either an analytic or low order polynomial function of position within the node. Recently, quadratic nodal methods have been developed for R-Z and hexagonal geometry. A static and transient quadratic nodal method is developed for triangular-Z geometry. This development is particularly challenging because the quadratic expansion in each node must be performed between the node faces and the triangular points. As a consequence, in the 2-D plane, the flux and current at the points of the triangles must be treated. Quadratic nodal equations are solved using a non-linear iteration scheme, which utilizes the corrected, mesh-centered finite difference equations, and forces these equations to match the quadratic equations by computing discontinuity factors during the solution. Transient nodal equations are solved using the improved quasi-static method, which has been shown to be a very efficient solution method for transient problems. Several static problems are used to compare the quadratic nodal method to the Coarse Mesh Finite Difference (CMFD) method. The quadratic method is shown to give more accurate node-averaged fluxes. However, it appears that the method has difficulty predicting node leakages near reactor boundaries and severe material interfaces. The consequence is that the eigenvalue may be poorly predicted for certain reactor configurations. The transient methods are tested using a simple analytic test problem, a heterogeneous heavy water reactor benchmark problem, and three thermal hydraulic test problems. Results indicate that the transient methods have been implemented correctly.
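The non-linear iteration scheme described above, in which mesh-centered finite difference equations are forced to reproduce the quadratic nodal solution through correction (discontinuity) factors, can be illustrated on a single interface. The function names and all numbers below are illustrative, not taken from the report.

```python
# One-interface sketch of a CMFD-style nonlinear update: a correction
# factor D_hat is chosen so the finite-difference current reproduces the
# current of the higher-order nodal solution.

def cmfd_coupling(D_left, D_right, h_left, h_right):
    # Standard mesh-centered finite-difference coupling coefficient
    # between two nodes of widths h_left, h_right.
    return 2.0 * D_left * D_right / (D_left * h_right + D_right * h_left)

def correction_factor(J_nodal, phi_left, phi_right, D_tilde):
    # Choose D_hat so that the corrected finite-difference current
    #   J = -D_tilde*(phi_right - phi_left) - D_hat*(phi_right + phi_left)
    # equals the current J_nodal from the nodal solution.
    return -(J_nodal + D_tilde * (phi_right - phi_left)) / (phi_right + phi_left)
```

During the iteration, D_hat would be recomputed after each nodal update until the two solutions agree.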
COMPUTATION OF SUPER-CONVERGENT NODAL STRESSES OF TIMOSHENKO BEAM ELEMENTS BY EEP METHOD
王枚; 袁驷
2004-01-01
The newly proposed element energy projection (EEP) method has been applied to the computation of super-convergent nodal stresses of Timoshenko beam elements. General formulas based on the element projection theorem were derived, and illustrative numerical examples using two typical elements were given. Both the analysis and the examples show that the EEP method also works very well for problems with vector function solutions. The EEP method gives super-convergent nodal stresses, comparable to the nodal displacements in terms of both convergence rate and error magnitude. In addition, it can overcome the "shear locking" difficulty for stresses even when the displacements are badly affected. This research paves the way for application of the EEP method to general one-dimensional systems of ordinary differential equations.
New procedure for criticality search using coarse mesh nodal methods
Pereira, Wanderson F.; Silva, Fernando C. da; Martinez, Aquilino S., E-mail: wneto@con.ufrj.b, E-mail: fernando@con.ufrj.b, E-mail: Aquilino@lmp.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear
2011-07-01
The primary goal of coarse-mesh nodal methods is to calculate the neutron flux inside the reactor core, and many computer systems use this specific form of calculation, called the nodal method. In classical computing systems, the criticality search is made only after the complete convergence of the iterative process that calculates the neutron flux. In this paper, we propose a new method in which the criticality calculation is carried out during the iterative flux calculation itself. Thus, the processing time for calculating the neutron flux was reduced by half compared with the procedure developed by the Nuclear Engineering Program of COPPE/UFRJ (PEN/COPPE/UFRJ). (author)
A Parallel Probabilistic Load Flow Method Considering Nodal Correlations
Jun Liu
2016-12-01
With the introduction of more and more random factors in power systems, probabilistic load flow (PLF) has become one of the most important tasks for power system planning and operation. Cumulants-based PLF is an effective algorithm to calculate PLF in an analytical way; however, the correlations among the nodal injections at the system level have rarely been studied. A novel parallel cumulants-based PLF method considering nodal correlations is proposed in this paper, which is able to deal with the correlations among all system nodes and avoids the Jacobian matrix inversion of the traditional cumulants-based PLF. In addition, parallel computing is introduced to improve the efficiency of the numerical calculations. The accuracy of the proposed method is validated by numerical tests on the standard IEEE-14 system, comparing with results from the Correlation Latin hypercube sampling Monte Carlo Simulation (CLMCS) method, and its efficiency and parallel performance are demonstrated by tests on the modified IEEE-300, C703 and N1047 systems with distributed generation (DG). Numerical simulations show that the proposed parallel cumulants-based PLF method considering nodal correlations obtains more accurate results using less computational time and physical memory, and has higher efficiency and better parallel performance than the traditional one.
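The cumulant arithmetic underlying such a method can be shown in miniature: after linearization, each output quantity is a weighted sum of the nodal injections, and for independent injections the n-th cumulant of a weighted sum is the weighted sum of the injections' n-th cumulants with the weights raised to the n-th power. The sensitivities below are hypothetical placeholders for the Jacobian-derived coefficients of a real PLF.

```python
# Cumulant propagation through a linearized flow: for y = sum_i a_i * x_i
# with independent injections x_i, kappa_n(y) = sum_i a_i**n * kappa_n(x_i).
# In a real cumulants-based PLF the sensitivities a_i come from the
# (inverse) Jacobian; here they are illustrative numbers.

def combine_cumulants(sens, inj_cumulants):
    """inj_cumulants[i][n-1] is the n-th cumulant of injection i."""
    orders = len(inj_cumulants[0])
    return [sum(a**n * k[n - 1] for a, k in zip(sens, inj_cumulants))
            for n in range(1, orders + 1)]
```

For Gaussian injections (all cumulants beyond the second are zero) this reduces to the familiar mean and variance propagation rules.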
Progress and applications of the variational nodal method
Carrico, C.B. [Argonne National Lab., IL (United States); Palmiotti, G. [CEA Centre d'Etudes Nucleaires de Cadarache, 13 - Saint-Paul-lez-Durance (France); Lewis, E.E. [Northwestern Univ., Evanston, IL (United States). Dept. of Mechanical Engineering
1995-07-01
This paper summarizes current progress and developments with the variational nodal method (VNM) and its implementation within the DIF3D code suite. After a brief development of the mathematical basis of the VNM, results from two three-dimensional benchmarks are presented for a variety of computers. Current applications of the VNM are then discussed, including diffusion theory calculations, burnup calculations, highly heterogeneous cores, higher-order spherical harmonics approximations, perturbation theory and heterogeneous nodes.
A Hybrid Nodal Method for Time-Dependent Incompressible Flow in Two-Dimensional Arbitrary Geometries
Toreja, A J; Uddin, R
2002-10-21
A hybrid nodal-integral/finite-analytic method (NI-FAM) is developed for time-dependent, incompressible flow in two-dimensional arbitrary geometries. In this hybrid approach, the computational domain is divided into parallelepiped and wedge-shaped space-time nodes (cells). The conventional nodal integral method (NIM) is applied to the interfaces between adjacent parallelepiped nodes (cells), while a finite analytic approach is applied to the interfaces between parallelepiped and wedge-shaped nodes (cells). In this paper, the hybrid method is formally developed and an application of the NI-FAM to fluid flow in an enclosed cavity is presented. Results are compared with those obtained using a commercial computational fluid dynamics code.
A.A. Bingham; R.M. Ferrer; A.M. Ougouag
2009-09-01
An accurate and computationally efficient two- or three-dimensional neutron diffusion model will be necessary for the development, safety parameter computation, and fuel cycle analysis of a prismatic Very High Temperature Reactor (VHTR) design under the Next Generation Nuclear Plant (NGNP) Project. For this purpose, an analytical nodal Green's function solution for the transverse integrated neutron diffusion equation is developed in two- and three-dimensional hexagonal geometry. This scheme is incorporated into HEXPEDITE, a code first developed by Fitzpatrick and Ougouag. HEXPEDITE neglects the non-physical discontinuity terms that arise in the transverse leakage when the transverse integration procedure is applied to hexagonal geometry, and cannot account for the effects of burnable poisons across nodal boundaries. The code developed for this work accounts for these terms by maintaining a neutron inventory, using the nodal balance equation as a constraint on the neutron flux equation. The method developed in this report is intended to restore neutron conservation and increase the accuracy of the code by adding these terms to the transverse integrated flux solution and applying the nodal Green's function solution to the resulting equation to derive a semi-analytical solution.
Evaluation of the use of nodal methods for MTR neutronic analysis
Reitsma, F.; Mueller, E.Z.
1997-08-01
Although modern nodal methods are used extensively in the nuclear power industry, their use for research reactor analysis has been very limited. The suitability of nodal methods for material testing reactor analysis is investigated with the emphasis on the modelling of the core region (fuel assemblies). The nodal approach's performance is compared with that of the traditional finite-difference fine mesh approach. The advantages of using nodal methods coupled with integrated cross section generation systems are highlighted, especially with respect to data preparation, simplicity of use and the possibility of performing a great variety of reactor calculations subject to strict time limitations such as are required for the RERTR program.
Verdu, G. [Departamento de Ingenieria Quimica Y Nuclear, Universitat Politecnica de Valencia, Cami de Vera, 14, 46022. Valencia (Spain); Capilla, M.; Talavera, C. F.; Ginestar, D. [Dept. of Nuclear Engineering, Departamento de Matematica Aplicada, Universitat Politecnica de Valencia, Cami de Vera, 14, 46022. Valencia (Spain)
2012-07-01
PL equations are classical high order approximations to the transport equations which are based on the expansion of the angular dependence of the angular neutron flux and the nuclear cross sections in terms of spherical harmonics. A nodal collocation method is used to discretize the PL equations associated with a neutron source transport problem. The performance of the method is tested solving two 1D problems with analytical solution for the transport equation and a classical 2D problem. (authors)
Girardi, E.; Ruggieri, J.M. [CEA Cadarache (DER/SPRC/LEPH), 13 - Saint-Paul-lez-Durance (France). Dept. d' Etudes des Reacteurs; Santandrea, S. [CEA Saclay, Dept. Modelisation de Systemes et Structures DM2S/SERMA/LENR, 91 - Gif sur Yvette (France)
2005-07-01
This paper describes a recently-developed extension of our 'Multi-methods, multi-domains' (MM-MD) method for the solution of the multigroup transport equation. Based on a domain decomposition technique, our approach allows us to treat the one-group equation by cooperatively employing several numerical methods together. In this work, we describe the coupling between the Method of Characteristics (integro-differential equation, unstructured meshes) and the Variational Nodal Method (even-parity equation, cartesian meshes). The coupling method is then applied to the benchmark model of the Phebus experimental facility (CEA Cadarache). Our domain decomposition method gives us the capability to employ a very fine mesh in describing a particular fuel bundle with an appropriate numerical method (MOC), while using a much larger mesh size in the rest of the core, in conjunction with a coarse-mesh method (VNM). This application shows the benefits of our MM-MD approach in terms of accuracy and computing time: the domain decomposition method allows us to reduce the CPU time while preserving a good accuracy of the neutronic indicators: reactivity, core-to-bundle power coupling coefficient and flux error. (authors)
Theoretical study of some nodal methods for the solution of the diffusion equation. Numerical tests
Fedon-Magnaud, C.
1983-08-01
The nodal methods used in the solution of the neutron multigroup diffusion equation are described. A new formulation of these methods is obtained in order to allow a comparison with finite element methods. After a brief review of nonconforming finite element theory, we use a Radau quadrature formula to establish the equivalence with nodal schemes. Convergence theorems and error estimates are then obtained. In the last part, numerical calculations are performed for two reactor test configurations. Comparisons are made between nodal or nonconforming schemes and the more classical methods (finite differences, conforming finite elements) which are used in reactor analysis.
Global/Local iterative homogenization methods for neutron diffusion nodal theory
Kim, Hark Rho
1994-02-15
The objective of this research is to develop efficient spatial homogenization methods for coarse-mesh nodal analysis of light water reactors in which the reference solutions are not known. The methods developed are global/local iterative procedures, including procedures based on variational principles. The nodal expansion method (NEM) with generalized equivalence theory is employed in the coarse-mesh nodal analysis. The finite difference method (FDM) is used in the fine-mesh local assembly calculation. To achieve fast and stable convergence in the local assembly calculation, a mixed boundary condition is imposed at the assembly surface, where the surface flux is modulated; the assembly-wise fundamental mode eigenfunction is used as the modulating function. Two direct methods are developed for the global/local iterative homogenization: G{sub 1} and G{sub 2}. The G{sub 1} procedure is based on the rigorous definition of the flux-weighted constants (FWCs), and the G{sub 2} procedure preserves the reaction rate ratio. Three variational principles are also proposed for the assembly homogenization. The basic form is inferred from Pomraning's variational principle. Since two of the variational methods, F{sub 0} and F{sub 2}, are based on the ratio of reaction rates, they are insensitive to the amplitude of the flux and hence are of the Lagrangian form. On the other hand, the variational principle F{sub 1} is based on the reaction rate itself and requires a normalization, since it is sensitive to the amplitude of the flux; the resulting form of F{sub 1} is thus of the Schwinger type. The homogenization methods developed were applied to LWR problems. In the PWR problems treated, there was no strong need for a global/local iterative homogenization procedure, since the heterogeneity between the fuel assemblies is relatively weak; using the assembly discontinuity factor (ADF), the nodal analysis was improved with reasonable accuracy.
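The flux-weighted constants (FWCs) mentioned above follow the standard homogenization recipe: weight each region's cross section by its flux-volume product, so that the homogenized node preserves the heterogeneous reaction rate. A minimal sketch, with illustrative two-region numbers rather than data from the thesis:

```python
# Flux-weighted homogenized cross section for a node made of several
# regions: Sigma_hom = sum(Sigma_r * phi_r * V_r) / sum(phi_r * V_r).
# The homogenized node then reproduces the total reaction rate of the
# heterogeneous flux solution.

def flux_weighted_xs(sigma, phi, vol):
    rate = sum(s * f * v for s, f, v in zip(sigma, phi, vol))
    flux = sum(f * v for f, v in zip(phi, vol))
    return rate / flux
```

In the global/local iteration, the region fluxes phi would come from the fine-mesh local assembly calculation and the FWCs would be updated each outer pass.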
A coarse-mesh nodal method-diffusive-mesh finite difference method
Joo, H.; Nichols, W.R.
1994-05-01
Modern nodal methods have been successfully used for conventional light water reactor core analyses where the homogenized, node average cross sections (XSs) and the flux discontinuity factors (DFs) based on equivalence theory can reliably predict core behavior. For other types of cores and other geometries characterized by tightly-coupled, heterogeneous core configurations, the intranodal flux shapes obtained from a homogenized nodal problem may not accurately portray steep flux gradients near fuel assembly interfaces or various reactivity control elements. This may require extreme values of DFs (either very large, very small, or even negative) to achieve a desired solution accuracy. Extreme values of DFs, however, can disrupt the convergence of the iterative methods used to solve for the node average fluxes, and can lead to a difficulty in interpolating adjacent DF values. Several attempts to remedy the problem have been made, but nothing has been satisfactory. A new coarse-mesh nodal scheme called the Diffusive-Mesh Finite Difference (DMFD) technique, as contrasted with the coarse-mesh finite difference (CMFD) technique, has been developed to resolve this problem. This new technique and the development of a few-group, multidimensional kinetics computer program are described in this paper.
Hernandez M, N. [CFE, Carretera Cardel-Nautla Km. 43.5, 91680 Veracruz (Mexico); Alonso V, G.; Valle G, E. del [IPN-ESFM, 07738 Mexico D.F. (Mexico)]. e-mail: nhmiranda@mexico.com
2003-07-01
In 1979, Hennart and collaborators applied several classic finite element schemes to the numerical solution of the diffusion equations in X-Y geometry and steady state. Almost two decades later, in 1996, he and other collaborators carried out similar work using nodal schemes of finite element type. Continuing in this direction, this work describes a set of hybrid nodal schemes, denoted NH, as well as their application to the solution of the steady-state multigroup diffusion equations in X-Y geometry. The term hybrid nodal means that such schemes interpolate not only Legendre moments on faces and cells but also the values of the neutron scalar flux at the four corners of each cell or element of the spatial discretization of the domain of interest. All the schemes considered here are polynomial, as were their predecessors. In particular, eight different hybrid nodal schemes, closely related to those developed by Hennart and collaborators in the past, have been developed and applied. These are schemes in which, even though the number of interpolation parameters is reduced, the accuracy of the bi-quadratic and bi-cubic schemes is preserved. Of these eight, three were described and applied in a previous work: the classic bi-linear scheme as well as the bi-quadratic and bi-cubic hybrid nodal schemes. Here only the other five hybrid nodal schemes are described, although numerical results for several test problems are provided for all of them. (Author)
Error analysis of the quartic nodal expansion method for slab geometry
Penland, R.C.; Turinsky, P.J. [North Carolina State Univ., Raleigh, NC (United States); Azmy, Y.Y. [Oak Ridge National Lab., TN (United States)
1995-02-01
This paper presents an analysis of the quartic polynomial Nodal Expansion Method (NEM) for one-dimensional neutron diffusion calculations. As part of an ongoing effort to develop an adaptive mesh refinement strategy for use in state-of-the-art nodal kinetics codes, we derive a priori error bounds on the computed solution for uniform meshes and validate them using a simple test problem. Predicted error bounds are found to be greater than computed maximum absolute errors by no more than a factor of six, allowing mesh size selection to reflect desired accuracy. We also quantify the rapid convergence of the NEM computed solution as a function of mesh size.
A nodal spectral stiffness matrix for the finite-element method
Bittencourt, Marco L.; Vazquez, Thais G.
2008-12-01
In this paper, shape functions are proposed for the spectral finite-element method with the aim of finding a nodal spectral stiffness matrix. The proposed shape functions yield a nearly diagonal 1D stiffness matrix with better conditioning than the Lagrange and Jacobi bases.
Some results of a nodal method for nonlinear space-time reactor dynamics
Le, T.T. (Westinghouse Savannah River Co., Aiken, SC (United States)); Grossman, L.M. (California Univ., Berkeley, CA (United States). Dept. of Nuclear Engineering)
1991-01-01
There are many reports about nodal methods for static and dynamic problems, but not many for the nonlinear feedback cases. In this paper, a class of nodal methods called the "mathematical nodal method" (MNM) is studied with the temperature feedback problems. The spatially complex domain of the problem is represented as a collection of geometrically simple subdomains of the size of fuel assemblies called nodes. Over each node, the time dependent coefficients of the neutron flux, precursor concentrations, fuel and coolant temperatures are the surface and volume weighted average (moment) values of the unknown solutions; the space dependent basis functions are a combination of Legendre polynomials. If the material parameters are a linear function of fuel and coolant temperatures, the coupled equations can be put in a dimensionless form and a system of time dependent ordinary differential equations containing nonlinear feedback terms is obtained. These nonlinear feedback terms are updated at each time step during the time iteration process. Results of some benchmark problems are included in this report.
Error analysis of the quadratic nodal expansion method in slab geometry
Penland, R.C.; Turinsky, P.J. [North Carolina State Univ., Raleigh, NC (United States); Azmy, Y.Y. [Oak Ridge National Lab., TN (United States)
1994-10-01
As part of an effort to develop an adaptive mesh refinement strategy for use in state-of-the-art nodal diffusion codes, the authors derive error bounds on the solution variables of the quadratic Nodal Expansion Method (NEM) in slab geometry. Closure of the system is obtained through flux discontinuity relationships and boundary conditions. In order to verify the analysis presented, the authors compare the quadratic NEM to the analytic solution of a test problem. The test problem for this investigation is a one-dimensional slab [0,20cm] with L{sup 2} = 6.495cm{sup 2} and D = 0.1429cm. The slab has a unit neutron source distributed uniformly throughout and zero flux boundary conditions. The analytic solution to this problem is used to compute the node-average fluxes over a variety of meshes, and these are used to compute the NEM maximum error on each mesh.
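The analytic solution of this test problem is easy to reproduce: for a uniform unit source with zero-flux boundaries, the slab diffusion equation -D*phi'' + Sigma_a*phi = S has the classical cosh-profile solution, and the node-average fluxes follow by exact integration. The sketch below uses the paper's data (L{sup 2} = 6.495 cm{sup 2}, D = 0.1429 cm) but is an independent reconstruction, not the authors' code.

```python
import math

# Analytic solution of -D*phi'' + Sigma_a*phi = S on [0, 20] cm with
# zero-flux boundaries; Sigma_a follows from L^2 = D / Sigma_a.
D, L2, S, a = 0.1429, 6.495, 1.0, 20.0
L = math.sqrt(L2)
sigma_a = D / L2

def phi(x):
    # Symmetric cosh profile, zero at both boundaries.
    return (S / sigma_a) * (1.0 - math.cosh((x - a / 2) / L)
                                  / math.cosh(a / (2 * L)))

def node_average(x0, x1):
    # Exact node-average flux over [x0, x1], from the antiderivative of phi.
    sinh_term = math.sinh((x1 - a / 2) / L) - math.sinh((x0 - a / 2) / L)
    integral = (S / sigma_a) * ((x1 - x0)
                                - L * sinh_term / math.cosh(a / (2 * L)))
    return integral / (x1 - x0)
```

Node-average fluxes computed this way over any chosen mesh give the reference values against which the quadratic NEM errors are measured.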
A stabilised nodal spectral element method for fully nonlinear water waves
Engsig-Karup, A. P.; Eskilsson, C.; Bigoni, D.
2016-08-01
We present an arbitrary-order spectral element method for general-purpose simulation of non-overturning water waves, described by fully nonlinear potential theory. The method can be viewed as a high-order extension of the classical finite element method proposed by Cai et al. (1998) [5], although the numerical implementation differs greatly. Features of the proposed spectral element method include: nodal Lagrange basis functions, a general quadrature-free approach and gradient recovery using global L2 projections. The quartic nonlinear terms present in the Zakharov form of the free surface conditions can cause severe aliasing problems and consequently numerical instability for marginally resolved or very steep waves. We show how the scheme can be stabilised through a combination of over-integration of the Galerkin projections and a mild spectral filtering on a per-element basis. This effectively removes any aliasing-driven instabilities while retaining the high-order accuracy of the numerical scheme. The additional computational cost of the over-integration is found insignificant compared to the cost of solving the Laplace problem. The model is applied to several benchmark cases in two dimensions. The results confirm the high-order accuracy of the model (exponential convergence) and demonstrate the potential for accuracy and speedup. The results of numerical experiments are in excellent agreement with both analytical and experimental results for strongly nonlinear and irregular dispersive wave propagation. The benefit of using a high-order, possibly adapted, spatial discretisation for accurate water wave propagation over long times and distances is particularly attractive for marine hydrodynamics applications.
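The per-element modal filtering ingredient can be illustrated in isolation. The sketch below (order, nodal points, and filter parameters are illustrative choices, not the paper's settings) moves nodal values to a Legendre modal basis, damps the highest modes with a mild exponential filter, and returns to nodal values:

```python
import numpy as np
from numpy.polynomial import legendre as leg

# Minimal 1D sketch of a per-element exponential modal filter, one of the two
# stabilisation ingredients described above.  Order, points and filter
# parameters are illustrative assumptions.
N = 8                                     # polynomial order within the element
x = np.cos(np.pi * np.arange(N + 1) / N)  # Lobatto-type nodes on [-1, 1]
                                          # (Chebyshev-Lobatto here for simplicity)
u = np.sin(2 * x) + 0.05 * (-1.0) ** np.arange(N + 1)  # smooth field + noisy component

c = leg.legfit(x, u, N)                   # nodal -> Legendre modal coefficients
k = np.arange(N + 1)
sigma = np.exp(-36.0 * (k / N) ** 16)     # mild filter: ~1 for low modes, ~0 at k = N
u_f = leg.legval(x, sigma * c)            # filtered field back at the nodes

print(np.abs(c)[-2:], np.abs(leg.legfit(x, u_f, N))[-2:])
```

The exponential profile leaves well-resolved modes essentially untouched while strongly damping the top mode, which is how aliasing-driven instabilities are suppressed without losing high-order accuracy.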
Spectral Method with the Tensor-Product Nodal Basis for the Steklov Eigenvalue Problem
Xuqing Zhang
2013-01-01
This paper discusses a spectral method with the tensor-product nodal basis at the Legendre-Gauss-Lobatto points for solving the Steklov eigenvalue problem. A priori error estimates of the spectral method are discussed and, based on the work of Melenk and Wohlmuth (2001), an a posteriori error estimator of the residual type is given and analyzed. In addition, this paper combines the shifted-inverse iterative method and the spectral method to establish an efficient scheme. Finally, numerical experiments with a MATLAB program are reported.
Zhou, Xiafeng, E-mail: zhou-xf11@mails.tsinghua.edu.cn; Guo, Jiong, E-mail: guojiong12@tsinghua.edu.cn; Li, Fu, E-mail: lifu@tsinghua.edu.cn
2015-12-15
Highlights: • NEMs are innovatively applied to solve the convection diffusion equation. • Stability, accuracy and numerical diffusion of NEM are analyzed for the first time. • Stability and numerical diffusion depend on the NEM expansion order and its parity. • NEMs have higher accuracy than both the second-order upwind and QUICK schemes. • NEMs with different expansion orders are integrated into a unified discrete form. - Abstract: The traditional finite difference method or finite volume method (FDM or FVM) is currently used for HTGR thermal-hydraulic calculations. However, both FDM and FVM require fine mesh sizes to achieve the desired precision, resulting in limited efficiency. Therefore, a more efficient and accurate numerical method needs to be developed. The nodal expansion method (NEM) can achieve high accuracy even on coarse meshes in reactor physics analysis, so that the number of spatial meshes and the computational cost can be greatly reduced. Because of its higher efficiency and accuracy, NEM can be innovatively applied to thermal-hydraulic calculations. In this paper, NEMs with different orders of basis functions are developed and applied to the multi-dimensional steady convection diffusion equation. Numerical results show that NEMs with third- or higher-order basis functions track the reference solutions very well and are superior to the second-order upwind scheme and the QUICK scheme. However, false diffusion and unphysical oscillation behavior are observed for NEMs. To explain these behaviors, the stability, accuracy and numerical diffusion properties of NEM are analyzed by Fourier analysis, and by comparison with exact solutions of the difference and differential equations. The theoretical analysis shows that the accuracy of NEM increases with the expansion order. However, the stability and numerical diffusion properties depend not only on the order of basis functions but also on the parity of
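As a self-contained illustration of the kind of Fourier (von Neumann) analysis the authors apply, the sketch below derives the numerical diffusion of the familiar first-order upwind scheme for linear advection u_t + a u_x = 0; the modified-equation prediction is D_num = (a h / 2)(1 - c) with Courant number c = a dt / h. Parameters are illustrative, and the scheme analyzed is plain upwind, not NEM.

```python
import numpy as np

# Von Neumann analysis of first-order upwind for u_t + a*u_x = 0.
a, h, dt = 1.0, 0.1, 0.05
c = a * dt / h                              # Courant number (0.5 here)

theta = np.linspace(0.01, np.pi, 400)       # mesh wavenumber k*h
G = 1.0 - c * (1.0 - np.exp(-1j * theta))   # upwind amplification factor
assert np.all(np.abs(G) <= 1.0 + 1e-12)     # von Neumann stable for c <= 1

# For well-resolved modes |G| ~ exp(-D_num * k^2 * dt), so the numerical
# diffusion coefficient can be read off from the smallest wavenumber:
k = theta[0] / h
D_est = -np.log(np.abs(G[0])) / (k**2 * dt)
D_th = a * h / 2.0 * (1.0 - c)              # modified-equation prediction
print(D_est, D_th)
```

The estimate recovered from the amplification factor matches the modified-equation coefficient, which is the same style of argument used above to quantify the false diffusion of NEM.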
Well-balanced nodal discontinuous Galerkin method for Euler equations with gravity
Chandrashekar, Praveen
2015-01-01
We present a well-balanced nodal discontinuous Galerkin (DG) scheme for compressible Euler equations with gravity. The DG scheme makes use of discontinuous Lagrange basis functions supported at Gauss-Lobatto-Legendre (GLL) nodes together with GLL quadrature using the same nodes. The well-balanced property is achieved by a specific form of source term discretization that depends on the nature of the hydrostatic solution, together with the GLL nodes for quadrature of the source term. The scheme is able to preserve isothermal and polytropic stationary solutions upto machine precision on any mesh composed of quadrilateral cells and for any gravitational potential. It is applied on several examples to demonstrate its well-balanced property and the improved resolution of small perturbations around the stationary solution.
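The GLL construction the scheme relies on can be reproduced in a few lines: the interior nodes are the roots of P_N', and the quadrature weights are 2/[N(N+1) P_N(x)^2]. The sketch below is a generic illustration, not the paper's code:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def gll(N):
    # Gauss-Lobatto-Legendre nodes: endpoints plus the roots of P_N'(x),
    # with weights w_i = 2 / (N*(N+1)*P_N(x_i)^2).
    x = np.sort(np.concatenate(([-1.0], leg.Legendre.basis(N).deriv().roots(), [1.0])))
    w = 2.0 / (N * (N + 1) * leg.legval(x, [0.0] * N + [1.0]) ** 2)
    return x, w

x, w = gll(4)
print(x)          # symmetric nodes including both endpoints
print(w.sum())    # weights sum to the interval length, 2
```

The resulting rule integrates polynomials of degree up to 2N - 1 exactly, and collocating the basis at the same nodes is what makes the element mass matrix diagonal in such DG schemes.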
A PURE NODAL-ANALYSIS METHOD SUITABLE FOR ANALOG CIRCUITS USING NULLORS
E. Tlelo-Cuautle
2003-09-01
A novel technique suitable for computer-aided analysis of analog integrated circuits (ICs) is introduced. This technique uses the features of both nodal analysis (NA) and symbolic analysis, at the nullor level. First, the nullor is used to model the ideal behavior of several analog devices, namely transistors, opamps, OTAs, and current conveyors. From this modeling approach, it is shown how to transform circuits working in voltage-mode to current-mode and vice versa. Second, it is demonstrated that using nullors, all non-NA-compatible elements can be transformed into NA-compatible ones; this results in a computationally improved pure-NA method. Third, the computation of fully symbolic expressions using , is described. It is demonstrated that a symbolic expression gives more insight into the behavior and performance of the circuit. Finally, several examples demonstrate the suitability and appropriateness of the proposed method for use in education.
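A minimal concrete instance of the NA-compatible nullor idea: an inverting amplifier whose ideal opamp is modelled as a nullor. The nullator constraint (equal input-node voltages, here a virtual ground) replaces the norator's unknown branch current, so the circuit is solved with plain nodal analysis. Component values are illustrative.

```python
import numpy as np

# Inverting amplifier with the ideal opamp modelled as a nullor.
# The nullator forces the two opamp input voltages to coincide (virtual
# ground), replacing the norator row in the nodal equations.
R1, R2, vin = 1e3, 4.7e3, 1.0

# Unknowns: v1 (inverting input node), vo (output node).
# KCL at v1:  (v1 - vin)/R1 + (v1 - vo)/R2 = 0
# Nullator:   v1 = 0   (ideal opamp input constraint)
A = np.array([[1/R1 + 1/R2, -1/R2],
              [1.0,          0.0 ]])
b = np.array([vin / R1, 0.0])
v1, vo = np.linalg.solve(A, b)
print(vo)   # expected: -R2/R1 * vin = -4.7
```

Solving the 2x2 system yields the familiar gain -R2/R1 without any opamp macromodel, which is the computational advantage of the pure-NA formulation.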
A study of the radiative transfer equation using a spherical harmonics-nodal collocation method
Capilla, M. T.; Talavera, C. F.; Ginestar, D.; Verdú, G.
2017-03-01
Optical tomography has found many medical applications that require knowing how photons interact with the different tissues. The majority of photon transport simulations are done using the diffusion approximation, but this approximation has limited validity when the optical properties of the different tissues present large gradients, when structures near the photon source are studied, or when anisotropic scattering has to be taken into account. As an alternative to the diffusion model, the P_L equations for the radiative transfer problem are studied. These equations are discretized on a rectangular mesh using a nodal collocation method. The performance of this model is studied by solving different 1D and 2D benchmark problems of light propagation in tissue, in media with isotropic and anisotropic scattering.
Xolocostli M, J.V
2002-07-01
The main objective of this work is to solve the neutron transport equation in one and two dimensions (slab geometry and X-Y geometry, respectively), with no time dependence, for BWR assemblies using nodal methods. In slab geometry, the nodal methods used here are the polynomial continuous (CMPk) and discontinuous (DMPk) families, but only the Linear Continuous (also known as Diamond Difference), the Quadratic Continuous (QC), the Cubic Continuous (CC), the Step Discontinuous (also known as Backward Euler), the Linear Discontinuous (LD) and the Quadratic Discontinuous (QD) schemes were considered. In all these schemes the unknown function, the angular neutron flux, is approximated as a sum of basis functions in terms of Legendre polynomials, associated with the values of the neutron flux on the edges (left, right, or both) and the Legendre moments in the cell, depending on the nodal scheme used. All these schemes were implemented in a computer program developed in previous thesis works and known as TNX. This program was modified for the purposes of this work. The program discretizes the domain of concern in one dimension and determines numerically the angular neutron flux at each point of the discretization when the number of energy groups and regions is known, starting from an initial approximation for the angular neutron flux consistent with the boundary conditions imposed for a given problem. Although only problems with two energy groups were studied, the computer program has no limitations regarding the number of energy groups or the number of regions. The two problems analyzed with the program TNX have practically the same characteristics (fuel and water), with the difference that one of them has a control rod. In the part corresponding to two-dimensional problems, the implemented nodal methods were those designated as hybrids, which consider not only the edge and cell Legendre moments, but also the values of the neutron flux at the corner points
A posteriori error estimator and AMR for discrete ordinates nodal transport methods
Duo, Jose I. [The Pennsylvania State University, 138 Reber Bldg, University Park (United States); Azmy, Yousry Y. [The Pennsylvania State University, 229 Reber Bldg, University Park (United States); Zikatanov, Ludmil T. [The Pennsylvania State University, 218 McAllister Bldg, University Park (United States)
2008-07-01
In the development of high-fidelity transport solvers, optimization of the use of available computational resources and access to a tool for assessing the quality of the solution are key to the success of large-scale nuclear systems' simulation. Error control provides the analyst with a confidence level in the numerical solution and enables optimization of resources through Adaptive Mesh Refinement (AMR). In this paper, we derive an a posteriori error estimator based on the nodal solution of the Arbitrarily High Order Transport Method of the Nodal type (AHOT-N). Furthermore, by making assumptions on the regularity of the solution, we represent the error estimator as a function of computable volume and element-edge residuals. The global L2 error norm is proved to be bounded by the estimator. To lighten the computational load, we present a numerical approximation to the aforementioned residuals and split the global norm error estimator into local error indicators. These indicators are used to drive an AMR strategy for the spatial discretization. However, the indicators based on forward solution residuals alone do not bound the cell-wise error. The estimator and AMR strategy are tested on two problems featuring strong heterogeneity and a highly streaming transport regime with strong flux gradients. The results show that the error estimator indeed bounds the global error norms and that the error indicator follows the cell-error's spatial distribution pattern closely. The AMR strategy proves beneficial for optimizing resources, primarily by reducing the number of discrete unknowns solved for to achieve a prescribed solution accuracy in the global L2 error norm. Likewise, AMR achieves higher accuracy compared to uniform refinement when resolving sharp flux gradients, for the same number of unknowns. (authors)
A posteriori error estimator and AMR for discrete ordinates nodal transport methods
Duo, Jose I. [Westinghouse Electric Co., 4350 Northern Pike, Monroeville, PA 15146 (United States)], E-mail: duoji@westinghouse.com; Azmy, Yousry Y. [North Carolina State University, 1110 Burlington Lab., Raleigh, NC 27695-7909 (United States)], E-mail: yyazmy@ncsu.edu; Zikatanov, Ludmil T. [The Pennsylvania State University, 218 McAllister Bldg, University Park (United States)
2009-04-15
In the development of high-fidelity transport solvers, optimization of the use of available computational resources and access to a tool for assessing the quality of the solution are key to the success of large-scale nuclear systems' simulation. In this regard, error control provides the analyst with a confidence level in the numerical solution and enables optimization of resources through Adaptive Mesh Refinement (AMR). In this paper, we derive an a posteriori error estimator based on the nodal solution of the Arbitrarily High Order Transport Method of the Nodal type (AHOT-N). Furthermore, by making assumptions on the regularity of the solution, we represent the error estimator as a function of computable volume and element-edge residuals. The global L2 error norm is proved to be bounded by the estimator. To lighten the computational load, we present a numerical approximation to the aforementioned residuals and split the global norm error estimator into local error indicators. These indicators are used to drive an AMR strategy for the spatial discretization. However, the indicators based on forward solution residuals alone do not bound the cell-wise error. The estimator and AMR strategy are tested on two problems featuring strong heterogeneity and a highly streaming transport regime with strong flux gradients. The results show that the error estimator indeed bounds the global error norms and that the error indicator follows the cell-error's spatial distribution pattern closely. The AMR strategy proves beneficial for optimizing resources, primarily by reducing the number of unknowns solved for to achieve prescribed solution accuracy in the global L2 error norm. Likewise, AMR achieves higher accuracy compared to uniform refinement when resolving sharp flux gradients, for the same number of unknowns.
Duerigen, Susan
2013-05-15
A major advantage of a nodal method for reactor cores with hexagonal fuel assemblies, discretized as cells consisting of equilateral triangles, is its mesh refinement capability. In this thesis, a diffusion and a simplified P3 (or SP3) neutron transport nodal method are developed based on trigonal geometry. Both models are implemented in the reactor dynamics code DYN3D. As yet, no other well-established nodal core analysis code comprises an SP3 transport theory model based on trigonal meshes. The development of two methods based on different neutron transport approximations but using an identical underlying spatial trigonal discretization allows a profound comparative analysis of both methods with regard to their mathematical derivations, nodal expansion approaches, solution procedures, and physical performance. The developed nodal approaches can be regarded as a hybrid NEM/AFEN form. They are based on the transverse-integration procedure, which renders them computationally efficient, and they use a combination of polynomial and exponential functions to represent the neutron flux moments of the SP3 and diffusion equations, which guarantees high accuracy. The SP3 equations are derived in within-group form, thus being of diffusion type. On this basis, the conventional diffusion solver structure can be retained also for the solution of the SP3 transport problem. The verification analysis provides proof of the methodological reliability of both trigonal DYN3D models. By means of diverse hexagonal academic benchmarks and realistic detailed-geometry full-transport-theory problems, the superiority of the SP3 transport model over the diffusion model is demonstrated in cases with pronounced anisotropy effects, which is, e.g., highly relevant to the modeling of fuel assemblies comprising absorber material.
Ultrasound-guided core biopsy: an effective method of detecting axillary nodal metastases.
Solon, Jacqueline G
2012-02-01
BACKGROUND: Axillary nodal status is an important prognostic predictor in patients with breast cancer. This study evaluated the sensitivity and specificity of ultrasound-guided core biopsy (Ax US-CB) at detecting axillary nodal metastases in patients with primary breast cancer, thereby determining how often sentinel lymph node biopsy could be avoided in node positive patients. STUDY DESIGN: Records of patients presenting to a breast unit between January 2007 and June 2010 were reviewed retrospectively. Patients who underwent axillary ultrasonography with or without preoperative core biopsy were identified. Sensitivity, specificity, positive predictive value, and negative predictive value for ultrasonography and percutaneous biopsy were evaluated. RESULTS: Records of 718 patients were reviewed, with 445 fulfilling inclusion criteria. Forty-seven percent (n = 210/445) had nodal metastases, with 110 detected by Ax US-CB (sensitivity 52.4%, specificity 100%, positive predictive value 100%, negative predictive value 70.1%). Axillary ultrasonography without biopsy had sensitivity and specificity of 54.3% and 97%, respectively. Lymphovascular invasion was an independent predictor of nodal metastases (sensitivity 60.8%, specificity 80%). Ultrasound-guided core biopsy detected more than half of all nodal metastases, sparing more than one-quarter of all breast cancer patients an unnecessary sentinel lymph node biopsy. CONCLUSIONS: Axillary ultrasonography, when combined with core biopsy, is a valuable component of the management of patients with primary breast cancer. Its ability to definitively identify nodal metastases before surgical intervention can greatly facilitate a patient's preoperative integrated treatment plan. In this regard, we believe our study adds considerably to the increasing data, which indicate the benefit of Ax US-CB in the preoperative detection of nodal metastases.
Nodal Analysis of Circuits Containing Current Conveyors
T. Dostal; A. I. Rybin
2001-01-01
A special method for the nodal analysis of circuits containing several types of multiport current conveyors is presented in this paper. The method is based on regular and homogeneous models of the irregular current conveyors given by gyrators. Then a diakoptic solution and a modification of the inversion of the admittance matrix are applied.
Applied Bayesian Hierarchical Methods
Congdon, Peter D
2010-01-01
Bayesian methods facilitate the analysis of complex models and data structures. Emphasizing data applications, alternative modeling specifications, and computer implementation, this book provides a practical overview of methods for Bayesian analysis of hierarchical models.
Methods of applied mathematics
Hildebrand, Francis B
1992-01-01
This invaluable book offers engineers and physicists working knowledge of a number of mathematical facts and techniques not commonly treated in courses in advanced calculus, but nevertheless extremely useful when applied to typical problems in many different fields. It deals principally with linear algebraic equations, quadratic and Hermitian forms, operations with vectors and matrices, the calculus of variations, and the formulations and theory of linear integral equations. Annotated problems and exercises accompany each chapter.
Hageman, Louis A
2004-01-01
This graduate-level text examines the practical use of iterative methods in solving large, sparse systems of linear algebraic equations and in resolving multidimensional boundary-value problems. Assuming minimal mathematical background, it profiles the relative merits of several general iterative procedures. Topics include polynomial acceleration of basic iterative methods, Chebyshev and conjugate gradient acceleration procedures applicable to partitioning the linear system into a "red/black" block form, adaptive computational algorithms for the successive overrelaxation (SOR) method, and comp
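A minimal sketch of the successive overrelaxation (SOR) iteration discussed above, applied to a small 1D Poisson-type system. The matrix, the relaxation factor, and the tolerances are illustrative choices, not taken from the book:

```python
import numpy as np

n = 30
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Poisson matrix
b = np.ones(n)

def sor(A, b, omega, tol=1e-10, maxit=5000):
    # Successive overrelaxation; omega = 1 reduces to Gauss-Seidel.
    x = np.zeros_like(b)
    for it in range(1, maxit + 1):
        for i in range(len(b)):
            s = A[i] @ x - A[i, i] * x[i]                # off-diagonal part
            x[i] = (1.0 - omega) * x[i] + omega * (b[i] - s) / A[i, i]
        if np.linalg.norm(A @ x - b) < tol:
            return x, it
    return x, maxit

x_sor, it_sor = sor(A, b, 1.82)   # near-optimal omega for this matrix
x_gs,  it_gs  = sor(A, b, 1.00)   # plain Gauss-Seidel for comparison
print(it_sor, it_gs)              # SOR needs far fewer sweeps
```

For this diagonally dominant model problem the over-relaxed sweep converges in an order of magnitude fewer iterations than Gauss-Seidel, which is the acceleration effect the text analyzes.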
Li, Min; Huang, Qing-an; Li, Wei-hua
2009-07-01
This paper reports a nodal model for the trapeziform beam element with gradually changing cross-sections. Using this model, the electromechanical behavior of electrically actuated bow-tie shaped fixed-fixed beams can be simulated at the system level. The model is developed by treating the governing equations of the trapeziform beam with the Galerkin residual method and decomposing the 4th-order partial differential equation into discrete modal ordinary differential equations. After that, the equivalent circuits and the corresponding nodal model are established. In the model, nonlinearities including mid-plane stretching and electrostatic forcing are considered. The accuracy of the developed model is verified by extensively comparing the static and dynamic analysis results with those obtained from FEA and available experimental data. The developed model is also applicable to beam-like structures with uniform cross-sections.
EXTENSION OF THE 1D FOUR-GROUP ANALYTIC NODAL METHOD TO FULL MULTIGROUP
B. D. Ganapol; D. W. Nigg
2008-09-01
In the mid-1980s, a four-group, two-region, entirely analytical 1D nodal benchmark appeared. It was readily acknowledged that this special case was as far as one could go in terms of group number and still achieve an analytical solution. In this work, we show that by decomposing the solution of the multigroup diffusion equation into homogeneous and particular solutions, extension to any number of groups is a relatively straightforward exercise using the mathematics of linear algebra.
A nodal collocation method for the calculation of the lambda modes of the P_L equations
Capilla, M. [Departamento de Matematica Aplicada, Universidad Politecnica de Valencia, Camino de Vera 14, E-46022 Valencia (Spain)]. E-mail: tcapilla@mat.upv.es; Talavera, C.F. [Departamento de Matematica Aplicada, Universidad Politecnica de Valencia, Camino de Vera 14, E-46022 Valencia (Spain)]. E-mail: talavera@mat.upv.es; Ginestar, D. [Departamento de Matematica Aplicada, Universidad Politecnica de Valencia, Camino de Vera 14, E-46022 Valencia (Spain)]. E-mail: dginesta@mat.upv.es; Verdu, G. [Departamento de Ingenieria Quimica y Nuclear, Universidad Politecnica de Valencia, Camino de Vera 14, E-46022 Valencia (Spain)]. E-mail: gverdu@iqn.upv.es
2005-11-15
P_L equations are classical approximations to the neutron transport equation admitting a diffusive form. Using this property, a nodal collocation method is developed for the P_L approximations, which is based on the expansion of the flux in terms of orthonormal Legendre polynomials. This method approximates the differential lambda modes problem by an algebraic eigenvalue problem from which the fundamental and the subcritical modes of the system can be calculated. To test the performance of this method, two problems have been considered: a homogeneous slab, which admits an analytical solution, and a seven-region slab corresponding to a more realistic problem.
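The structure of the resulting algebraic eigenvalue problem can be illustrated with the simplest possible stand-in: one-group diffusion for the homogeneous slab, discretized by plain finite differences instead of the paper's nodal collocation, so that the fundamental lambda mode has the closed form k = nu*Sig_f/(Sig_a + D*B^2) with buckling B = pi/a. Cross sections below are illustrative.

```python
import numpy as np

D, Sig_a, nuSig_f, a = 1.3, 0.01, 0.012, 100.0   # illustrative one-group data
n = 200
h = a / (n + 1)

# Discrete diffusion (loss) operator with zero-flux boundaries, and fission source.
Lmat = (D / h**2) * (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) + Sig_a * np.eye(n)
F = nuSig_f * np.eye(n)

# Lambda modes: L*phi = (1/k)*F*phi, i.e. k are the eigenvalues of inv(L)*F.
k_modes = np.sort(np.linalg.eigvals(np.linalg.solve(Lmat, F)).real)[::-1]
k_ref = nuSig_f / (Sig_a + D * (np.pi / a)**2)   # analytic fundamental mode
print(k_modes[0], k_ref)
```

The subcritical modes are simply the next eigenvalues of the same algebraic problem, which is the property the nodal collocation method exploits on much coarser meshes.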
Esquivel E, J.; Alonso V, G. [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico); Del Valle G, E., E-mail: jaime.esquivel@inin.gob.mx [IPN, Escuela Superior de Fisica y Matematicas, Av. IPN s/n, Col. Lindavista, 07738 Ciudad de Mexico (Mexico)
2015-09-15
The solution of the neutron diffusion equation, for reactors either in steady state or time-dependent, is obtained through approximations generated by implementing nodal methods such as RTN-0 (Raviart-Thomas-Nedelec of index zero), which is used in this study. Since nodal methods are applied on quadrangular geometries, this paper presents a technique in which the hexagonal geometry is mapped, through the Gordon-Hall transfinite interpolation, onto a geometry appropriate for the RTN-0 nodal method. As a result, a computer program was developed with which it is possible to obtain, among other results, the effective neutron multiplication factor (k_eff) and the radial and/or axial power distribution. To verify the operation of the code, it was applied to three benchmark problems: for the first two, VVER and FBR reactors, k_eff and the power distribution are obtained for the steady-state case; the third problem analyzes a VVER-type reactor in the time-dependent case, for which qualitative results on the behavior of the reactor power are presented. (Author)
Cronin, V.; Sverdrup, K. A.
2013-05-01
The process of delineating a seismo-lineament has evolved since the first description of the Seismo-Lineament Analysis Method (SLAM) by Cronin et al. (2008, Env & Eng Geol 14(3) 199-219). SLAM is a reconnaissance tool to find the trace of the fault that produced a shallow-focus earthquake by projecting the corresponding nodal planes (NP) upward to their intersections with the ground surface, as represented by a DEM or topographic map. A seismo-lineament is formed by the intersection of the uncertainty volume associated with a given NP and the ground surface. The ground-surface trace of the fault that produced the earthquake is likely to be within one of the two seismo-lineaments associated with the two NPs derived from the earthquake's focal mechanism solution. When no uncertainty estimate has been reported for the NP orientation, the uncertainty volume associated with a given NP is bounded by parallel planes that are [1] tangent to the ellipsoidal uncertainty volume around the focus and [2] parallel to the NP. If the ground surface is planar, the resulting seismo-lineament is bounded by parallel lines. When an uncertainty is reported for the NP orientation, the seismo-lineament resembles a bow tie, with the epicenter located adjacent to or within the "knot." Some published lists of focal mechanisms include only one NP with associated uncertainties. The NP orientation uncertainties in strike azimuth (+/- gamma), dip angle (+/- epsilon) and rake that are output from an FPFIT analysis (Reasenberg and Oppenheimer, 1985, USGS OFR 85-739) are taken to be the same for both NPs (Oppenheimer, 2013, pers. com.). The boundaries of the NP uncertainty volume are each composed of planes that are tangent to the focal uncertainty ellipsoid. One boundary, whose nearest horizontal distance from the epicenter is greater than or equal to that of the other boundary, is formed by the set of all planes with strike azimuths equal to the reported NP strike azimuth +/- gamma, and dip angle
Lautard, J.J.
1994-05-01
This paper presents a new extension of the mixed dual finite element approximation of the diffusion equation in rectangular geometry. The mixed dual formulation has been extended in order to take into account discontinuity conditions. The iterative method is based on an alternating direction method which uses the current as unknown. This method is fully "parallelizable" and has very fast convergence properties. Some results for a 3D calculation on the CRAY computer are presented. (author). 6 refs., 8 figs., 4 tabs.
Tres, Anderson [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Matematica Aplicada; Becker Picoloto, Camila [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Prolo Filho, Joao Francisco [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Inst de Matematica, Estatistica e Fisica; Dias da Cunha, Rudnei; Basso Barichello, Liliane [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Inst de Matematica
2014-04-15
In this work, a study of two-dimensional fixed-source neutron transport problems in Cartesian geometry is reported. The approach reduces the complexity of the multidimensional problem by using a combination of nodal schemes and the Analytical Discrete Ordinates Method (ADO). The unknown leakage terms on the boundaries that arise from the derivation of the nodal scheme are incorporated into the problem source term, so as to couple the one-dimensional integrated solutions, made explicit in terms of the x and y spatial variables. The formulation leads to a considerable reduction of the order of the associated eigenvalue problems when combined with the usual symmetric quadratures, thereby providing solutions with a higher degree of computational efficiency. Reflective-type boundary conditions are introduced to represent the domain in a simpler form than that previously considered in connection with the ADO method. Numerical results obtained with the technique are provided and compared to those present in the literature. (orig.)
Egor M. Mikhailovsky
2015-06-01
We propose a method for numerically solving the problem of flow distribution in hydraulic circuits with lumped parameters for the case of arbitrary closing relations. Both conventional and unconventional types of relations for the laws of isothermal steady fluid flow through the individual hydraulic circuit components are studied. The unconventional relations are those given implicitly with respect to the flow rate and dependent on the pressure of the working fluid. In addition to the unconventional relations, formal conditions of applicability are introduced; these conditions guarantee a unique solution to the flow distribution problem. A new modified nodal pressure method is suggested. The method is more versatile with respect to the form of the closing relations than the unmodified one, and has lower computational costs than the known technique of double-loop iteration. The paper presents an analysis of the new method and its algorithm, and gives a calculated example of a gas transportation network together with its results.
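For context, the classical nodal-pressure (Newton) iteration that the paper modifies can be sketched for the conventional case of an explicit closing relation; the three-node network, conductances, injections and square-root law below are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Sketch of the classical nodal-pressure (Newton) iteration for flow
# distribution, i.e. the method the paper modifies, for the conventional
# case of an explicit closing relation q = c * sign(dp) * sqrt(|dp|).
# The 3-node network, conductances and injections are hypothetical.

edges = [(0, 1), (1, 2), (0, 2)]       # oriented edges (i -> j)
c = np.array([2.0, 1.0, 1.5])          # edge "conductances"
inj = np.array([1.5, -0.5, -1.0])      # net nodal injections (sum to zero)

# Oriented node-edge incidence matrix: A[i, e] = +1, A[j, e] = -1
A = np.zeros((3, len(edges)))
for e, (i, j) in enumerate(edges):
    A[i, e], A[j, e] = 1.0, -1.0

p = np.array([0.0, -0.5, -1.0])        # nodal pressures; node 0 is reference
free = [1, 2]                          # pressures to solve for
for _ in range(100):
    dp = A.T @ p                       # pressure drop on each edge
    q = c * np.sign(dp) * np.sqrt(np.abs(dp))
    r = (A @ q - inj)[free]            # mass-balance residual at free nodes
    if np.max(np.abs(r)) < 1e-10:
        break
    dq = c / (2.0 * np.sqrt(np.maximum(np.abs(dp), 1e-8)))  # dq/d(dp)
    J = (A @ np.diag(dq) @ A.T)[np.ix_(free, free)]         # Newton Jacobian
    p[free] -= np.linalg.solve(J, r)

# At convergence the flows satisfy the nodal (Kirchhoff) balance
assert np.max(np.abs(A @ q - inj)) < 1e-8
```

The paper's modification targets closing relations given only implicitly in the flow rate; in that setting the explicit evaluation of q(dp) above would be replaced by an inner solve for q on each edge.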
Mohammadnia Meysam
2013-01-01
The flux expansion nodal method is a suitable method for considering nodalization effects in node corners. In this paper we use this method to solve for the intra-nodal flux analytically. A computer code, named MA.CODE, was then developed using the C# programming language. The code is capable of reactor core calculations for hexagonal geometries in two energy groups and three dimensions. MA.CODE imports two-group constants from the WIMS code and calculates the effective multiplication factor, thermal and fast neutron flux in three dimensions, power density, reactivity, and the power peaking factor of each fuel assembly. Some of the code's merits are low calculation time and a user-friendly interface. MA.CODE results showed good agreement with IAEA benchmarks, i.e. AER-FCM-101 and AER-FCM-001.
Reddy, Charitha D; Ceresnak, Scott R; Motonaga, Kara S; Avasarala, Kishor; Feller, Christine; Trela, Anthony; Hanisch, Debra; Dubin, Anne M
2017-07-14
Cryoablation for atrioventricular nodal reentrant tachycardia (AVNRT) is associated with higher recurrence rates than radiofrequency ablation (RFA). Junctional tachycardia marks procedural success with RFA, but no such indicator exists for cryoablation. The purpose of this study was to determine the impact of voltage mapping plus longer ablation lesions on the midterm success of cryoablation for children with AVNRT. We performed a single-center retrospective analysis of pediatric patients with AVNRT who underwent cryoablation from 2011 to 2015. Patients ablated using a standard electroanatomic approach (control) were compared with patients ablated using voltage mapping (voltage group). In the voltage group, EnSite NavX navigation and visualization technology (St Jude Medical, St Paul, MN) was used to develop a "bridge" of lower voltage gradients (0.3-0.8 mV) of the posteroseptal right atrium to guide cryoablation. Kaplan-Meier analysis was used to determine freedom from recurrence of supraventricular tachycardia. In all, 122 patients were included (71 voltage, 51 control). There was no difference between groups regarding age, sex, or catheter-tip size. Short-term success was similar in both groups (98.5% voltage vs 92% control; P = .159), but recurrence rates were lower in the voltage group (0% vs 11%, P = .006). Follow-up time was shorter in the voltage group (15 ± 7 months vs 22 ± 17 months, P < .05). The 1-year freedom from recurrence was higher in the voltage group (100% vs 91.5%, P < .05). Ablation times were longer in the voltage group (43.7 ± 20.9 minutes vs 34.3 ± 20.5 minutes, P = .01), but overall procedure times were shorter (157 ± 40 minutes vs 198 ± 133 minutes; P = .018). No significant complication was seen in either group. Voltage gradient mapping and longer lesion times can decrease recurrence rates in pediatric patients with AVNRT. Copyright © 2017 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.
Daogang Lu
2016-01-01
A three-dimensional, multigroup diffusion code based on a high-order nodal expansion method for hexagonal-z geometry (HNHEX) was developed to perform the neutronic analysis of hexagonal-z geometry. In this method, the one-dimensional radial and axial spatial fluxes of each node and energy group are defined as a quadratic polynomial expansion and a fourth-order polynomial expansion, respectively. The approximations for the one-dimensional radial and axial spatial fluxes both have second-order accuracy. Moment weighting is used to obtain the high-order expansion coefficients of the polynomials of the one-dimensional radial and axial spatial fluxes. The partially integrated radial and axial leakages are both approximated by quadratic polynomials. The coarse-mesh rebalance method with asymptotic source extrapolation is applied to accelerate the calculation. The code is used for the calculation of the effective multiplication factor, neutron flux distribution, and power distribution. The numerical calculations in this paper for the three-dimensional SNR and VVER-440 benchmark problems demonstrate the accuracy of the code. In addition, the results show that the accuracy of the code is improved by applying a quadratic approximation for the partially integrated axial leakage and a fourth-order approximation for the one-dimensional axial spatial flux, in comparison with a flat approximation for the partially integrated axial leakage and a quadratic approximation for the one-dimensional axial spatial flux.
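The moment-weighting idea in the abstract above can be illustrated with a generic one-dimensional projection onto polynomials; the Legendre basis, the grid and the flux shape below are hypothetical illustrations, and HNHEX's actual basis functions and weights may differ.

```python
import numpy as np
from numpy.polynomial import legendre as leg

# Generic "moment weighting" sketch: project a one-dimensional nodal flux
# shape onto Legendre polynomials up to second order on [-1, 1], using
# a_k = (2k+1)/2 * integral(phi * P_k). The flux shape is a hypothetical
# stand-in; the code's actual basis and weights may differ.

x = np.linspace(-1.0, 1.0, 2001)
phi = np.exp(-x) * (1.0 + 0.3 * x**2)          # hypothetical flux shape

def integrate(f):
    """Trapezoidal rule on the grid x."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

coeff = [(2 * k + 1) / 2.0 * integrate(phi * leg.legval(x, [0.0] * k + [1.0]))
         for k in range(3)]
phi_quad = leg.legval(x, coeff)                 # quadratic reconstruction

# The projection preserves the node-average flux (the zeroth moment)
assert abs(integrate(phi_quad) - integrate(phi)) < 1e-5
```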
Delfin L, A.; Hernandez L, H.; Alonso V, G. [ININ, 52045 Ocoyoacac, Estado de Mexico (Mexico)
2005-07-01
Nodal methods, like matrix-response methods, are used to develop numerical calculations, in both reactor statics and dynamics, in one, two and three dimensions. The aim of this work is to apply the equations modeled in the RPM0 program, obtained by using the RT-0 (Raviart-Thomas index zero) nodal scheme in the steady-state neutron diffusion equation in X-Y geometry, applying mesh-centered finite differences and linear reactivity; and also to use the corresponding equations implemented in the NRMPO program developed by E. Malambu, which uses the matrix-response method in X-Y geometry. The numerical results for the radial power distribution by fuel assembly of unit 1, in cycles 1 and 2 of the CLV, obtained by both methods are compared with the calculations obtained with the CM-PRESTO code, a three-dimensional neutronic-thermal-hydraulic simulator. In the comparison of the radial power distribution in cycles 1 and 2 of the CLV with the CM-PRESTO code, RPM0 presents maximum errors of 8.2% and 12.4%, and NRMPO 31.2% and 61.3%, respectively. The results show that the RPM0 program can be a feasible, quick and efficient tool for multicycle analysis in fuel management. (Author)
Hexagonal Green Function Nodal Method
安萍; 姚栋
2014-01-01
Based on conformal mapping, the Green function method was applied in hexagonal geometry. Conformal mapping was used to map a hexagonal node to a rectangular node before transverse integration. Then, the transverse integration equations were solved using the Green function method with the second-kind boundary condition. A three-dimensional multigroup static program, NACK, was written based on this theory. The code was verified against a VVER-1000-type core without reflector, a VVER-440-type three-dimensional two-group core, and a two-dimensional core with discontinuity factors. The eigenvalue error is less than 50 pcm, and the maximum relative error of the node-average power is less than 2%. The accuracy of NACK is as good as that of other advanced nodal method codes.
Dorning, J.J.
1993-05-01
The report is divided into three parts. The main mathematical development of the new systematic simultaneous lattice-cell and fuel-assembly homogenization theory derived from the transport equation is summarized in Part I. Also included in Part I is the validation of this systematic homogenization theory and the resulting calculational procedures for coarse-mesh nodal diffusion methods that follow from it, in the form of their application to a simple one-dimensional test problem. The results of the application of this transport-equation-based systematic homogenization theory are summarized in Part II in which its superior accuracy over traditional flux and volume weighted homogenization procedures and over generalized equivalence theory is demonstrated for small and large practical two-dimensional PWR problems. The mathematical development of a second systematic homogenization theory -- this one derived starting from the diffusion equation -- is summarized in Part III where its application to a practical two-dimensional PWR model also is summarized and its superior accuracy over traditional homogenization methods and generalized equivalence theory is demonstrated for this problem.
Zeng, X.; Scovazzi, G.
2016-06-01
We present a monolithic arbitrary Lagrangian-Eulerian (ALE) finite element method for computing highly transient flows with strong shocks. We use a variational multiscale (VMS) approach to stabilize a piecewise-linear Galerkin formulation of the equations of compressible flows, and an entropy artificial viscosity to capture strong solution discontinuities. Our work demonstrates the feasibility of VMS methods for highly transient shock flows, an area of research for which the VMS literature is extremely scarce. In addition, the proposed monolithic ALE method is an alternative to the more commonly used Lagrangian+remap methods, in which, at each time step, a Lagrangian computation is followed by mesh smoothing and remap (conservative solution interpolation). Lagrangian+remap methods are the methods of choice in shock hydrodynamics computations because they provide nearly optimal mesh resolution in proximity of shock fronts. However, Lagrangian+remap methods are not well suited for imposing inflow and outflow boundary conditions. These issues offer an additional motivation for the proposed approach, in which we first perform the mesh motion, and then the flow computations using the monolithic ALE framework. The proposed method is second-order accurate and stable, as demonstrated by extensive numerical examples in two and three space dimensions.
A nodal discontinuous Galerkin method for reverse-time migration on GPU clusters
Modave, A.; St-Cyr, A.; Mulder, W. A.; Warburton, T.
2015-11-01
Improving both accuracy and computational performance of numerical tools is a major challenge for seismic imaging and generally requires specialized implementations to make full use of modern parallel architectures. We present a computational strategy for reverse-time migration (RTM) with accelerator-aided clusters. A new imaging condition computed from the pressure and velocity fields is introduced. The model solver is based on a high-order discontinuous Galerkin time-domain (DGTD) method for the pressure-velocity system with unstructured meshes and multirate local time stepping. We adopted the MPI+X approach for distributed programming, where X is a threaded programming model. In this work we chose OCCA, a unified framework that makes use of major multithreading languages (e.g. CUDA and OpenCL) and offers the flexibility to run on several hardware architectures. DGTD schemes are suitable for efficient computations with accelerators thanks to localized element-to-element coupling and the dense algebraic operations required for each element. Moreover, compared to high-order finite-difference schemes, the thin halo inherent to the DGTD method reduces the amount of data to be exchanged between MPI processes and the storage requirements for RTM procedures. The amount of data to be recorded during simulation is reduced by storing only boundary values in memory rather than on disk and recreating the forward wavefields. Computational results are presented indicating that these methods are strongly scalable up to at least 32 GPUs for a three-dimensional RTM case.
Arbitrary Order Mixed Mimetic Finite Differences Method with Nodal Degrees of Freedom
Iaroshenko, Oleksandr [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Gyrya, Vitaliy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Manzini, Gianmarco [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-09-01
In this work we consider a modification to an arbitrary order mixed mimetic finite difference method (MFD) for a diffusion equation on general polygonal meshes [1]. The modification is based on moving some degrees of freedom (DoF) for a flux variable from edges to vertices. We showed that for a non-degenerate element this transformation is locally equivalent, i.e. there is a one-to-one map between the new and the old DoF. Globally, on the other hand, this transformation leads to a reduction of the total number of degrees of freedom (by up to 40%) and additional continuity of the discrete flux.
GPU performance analysis of a nodal discontinuous Galerkin method for acoustic and elastic models
Modave, A.; St-Cyr, A.; Warburton, T.
2016-06-01
Finite element schemes based on discontinuous Galerkin methods possess features amenable to massively parallel computing accelerated with general purpose graphics processing units (GPUs). However, the computational performance of such schemes strongly depends on their implementation. In the past, several implementation strategies have been proposed. They are based exclusively on specialized compute kernels tuned for each operation, or they can leverage BLAS libraries that provide optimized routines for basic linear algebra operations. In this paper, we present and analyze up-to-date performance results for different implementations, tested in a unified framework on a single NVIDIA GTX980 GPU. We show that specialized kernels written with a one-node-per-thread strategy are competitive for polynomial bases up to the fifth and seventh degrees for acoustic and elastic models, respectively. For higher degrees, a strategy that makes use of the NVIDIA cuBLAS library provides better results, able to reach a net arithmetic throughput of 35.7% of the theoretical peak value.
Jacqmin, Robert P. [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)
1991-12-10
The safety and optimal performance of large, commercial, light-water reactors require the knowledge at all time of the neutron-flux distribution in the core. In principle, this information can be obtained by solving the time-dependent neutron diffusion equations. However, this approach is complicated and very expensive. Sufficiently accurate, real-time calculations (time scale of approximately one second) are not yet possible on desktop computers, even with fast-running, nodal kinetics codes. A semi-experimental, nodal synthesis method which avoids the solution of the time-dependent, neutron diffusion equations is described. The essential idea of this method is to approximate instantaneous nodal group-fluxes by a linear combination of K, precomputed, three-dimensional, static expansion-functions. The time-dependent coefficients of the combination are found from the requirement that the reconstructed flux-distribution agree in a least-squares sense with the readings of J (≥K) fixed, prompt-responding neutron-detectors. Possible numerical difficulties with the least-squares solution of the ill-conditioned, J-by-K system of equations are brought under complete control by the use of a singular-value-decomposition technique. This procedure amounts to the rearrangement of the original, linear combination of K expansion functions into an equivalent more convenient, linear combination of R (≤K) orthogonalized ``modes`` of decreasing magnitude. Exceedingly small modes are zeroed to eliminate any risk of roundoff-error amplification, and to assure consistency with the limited accuracy of the data. Additional modes are zeroed when it is desirable to limit the sensitivity of the results to measurement noise.
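The central numerical step described above — a least-squares fit of detector readings by precomputed expansion functions, made safe by zeroing exceedingly small singular modes — can be sketched as follows; the response matrix, readings and cutoff value are random/illustrative placeholders, not data from the thesis.

```python
import numpy as np

# Sketch of SVD-truncated least squares: fit J detector readings d with
# K precomputed expansion functions (columns of W), zeroing exceedingly
# small singular "modes" of the ill-conditioned J-by-K system.
# W, d and the cutoff are illustrative placeholders.

rng = np.random.default_rng(0)
J, K = 8, 5                         # J detectors >= K expansion functions
W = rng.normal(size=(J, K))         # detector response of each function
W[:, 4] = W[:, 3] + 1e-9 * rng.normal(size=J)   # near-dependent columns
d = rng.normal(size=J)              # detector readings

U, s, Vt = np.linalg.svd(W, full_matrices=False)
cutoff = 1e-6 * s[0]                # zero modes far below the largest one
s_inv = np.where(s > cutoff, 1.0 / s, 0.0)
coeff = Vt.T @ (s_inv * (U.T @ d))  # coefficients of the linear combination

# Identical to NumPy's truncated pseudoinverse with the same cutoff
assert np.allclose(coeff, np.linalg.pinv(W, rcond=1e-6) @ d)
```

Zeroing the small modes removes the directions in which roundoff or measurement noise would be amplified, at the cost of fitting only the R well-determined orthogonalized modes.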
Diffusion Monte Carlo methods applied to Hamaker Constant evaluations
Hongo, Kenta
2016-01-01
We applied diffusion Monte Carlo (DMC) methods to evaluate Hamaker constants of liquids for wettability studies, using a liquid molecule of practical size, Si$_6$H$_{12}$ (cyclohexasilane). The evaluated constant would be justified in the sense that it lies within the expected dependence on molecular weight among similar kinds of molecules, though no reference experimental value is available for this molecule. Comparing the DMC with vdW-DFT evaluations, we clarified that some of the vdW-DFT evaluations could not describe the correct asymptotic decays, and hence Hamaker constants, even though they gave reasonable binding lengths and energies, and vice versa for the rest of the vdW-DFTs. We also found an advantage of DMC for this practical purpose over CCSD(T), because of the large BSSE/CBS corrections required for the latter under the limitation of the basis-set size applicable to a liquid molecule of practical size, while the former is free from such limitations to the extent that only the nodal structure of...
Computation of Steady State Nodal Voltages for Fast Security Assessment in Power Systems
Møller, Jakob Glarbo; Jóhannsson, Hjörtur; Østergaard, Jacob
2014-01-01
Development of a method for real-time assessment of post-contingency nodal voltages is introduced. Linear network theory is applied in an algorithm that utilizes Thevenin equivalent representation of power systems as seen from every voltage-controlled node in a network. The method is evaluated b...
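The Thevenin-equivalent view used by the method can be sketched with linear nodal analysis: for a bus admittance matrix Y, the Thevenin impedance seen from bus k is the diagonal entry Z[k, k] of Z = Y⁻¹, and the Thevenin voltage is the pre-contingency nodal voltage. The three-bus matrix and injections below are hypothetical.

```python
import numpy as np

# Per-node Thevenin equivalents from the bus impedance matrix Z = inv(Y).
# Y (with a small shunt so it is nonsingular) and the current injections
# I are hypothetical three-bus data.

Y = np.array([[10.0, -5.0, -5.0],
              [-5.0,  9.0, -4.0],
              [-5.0, -4.0, 10.0]])    # bus admittance matrix
I = np.array([1.0, -0.4, -0.5])       # nodal current injections

Z = np.linalg.inv(Y)                  # bus impedance matrix
V = Z @ I                             # pre-contingency nodal voltages

k = 2
Zth = Z[k, k]                         # Thevenin impedance seen from bus k
Vth = V[k]                            # open-circuit (Thevenin) voltage

# Sanity check: adding a load admittance yL at bus k changes V[k] exactly
# as the two-element Thevenin circuit predicts (voltage divider).
yL = 2.0
Y2 = Y.copy()
Y2[k, k] += yL
Vk_full = (np.linalg.inv(Y2) @ I)[k]
Vk_thev = Vth * (1.0 / yL) / (Zth + 1.0 / yL)
assert np.isclose(Vk_full, Vk_thev)
```

Because Z need only be computed (or updated) once, reading post-contingency voltages off such equivalents is far cheaper than resolving the full network, which is what makes real-time assessment feasible.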
Høskuldsson, Agnar
2008-01-01
The author has developed a framework for mathematical modelling within the applied sciences. It is characteristic of data from 'nature and industry' that they have reduced rank for inference. This means that full-rank solutions normally do not give satisfactory solutions. The basic idea of H...... with finding a balance between the estimation task and the prediction task. The name H-methods has been chosen because of the close analogy with the Heisenberg uncertainty inequality. A similar situation is present in modelling data. The mathematical modelling stops when the prediction aspect of the model cannot...... be improved. H-methods have been applied to a wide range of fields within the applied sciences. In each case, the H-methods provide superior solutions compared to the traditional ones. A background for the H-methods is presented. The H-principle of mathematical modelling is explained. It is shown how...
[Montessori method applied to dementia - literature review].
Brandão, Daniela Filipa Soares; Martín, José Ignacio
2012-06-01
The Montessori method was initially applied to children, but it has now also been applied to people with dementia. The purpose of this study is to systematically review the research on the effectiveness of this method, using the Medical Literature Analysis and Retrieval System Online (Medline) with the keywords dementia and Montessori method. We selected 10 studies, in which there were significant improvements in participation and constructive engagement, and reductions in negative affect and passive engagement. Nevertheless, systematic reviews of this non-pharmacological intervention in dementia rate the method as weak in terms of effectiveness. This apparent discrepancy can be explained either because the Montessori method may, in fact, have only a small influence on dimensions such as behavioral problems, or because there is no research on this method with high levels of control, such as the presence of several control groups or a double-blind design.
Wołowicz Marcin
2015-09-01
The paper presents a dynamic model of a hot water storage tank, together with a literature review. An analysis of the effects of nodalization on the prediction error of the generalized finite element method (GFEM) is provided. The model takes into account eleven parameters, such as the volumetric flow rate of flue gases to the spiral, the inlet water temperature, and the outlet water flow rate. The boiler is also described by sizing parameters, nozzle parameters, and heat loss, including ambient temperature. The model has been validated against existing data, and adequate laboratory experiments were performed. A comparison between 1-, 5-, 10- and 50-zone boiler models is presented. The comparison between experiment and simulations for different numbers of boiler zones is presented in the plots, and the reasons for the differences between experiment and simulation are explained.
Bzdušek, Tomáš; Wu, Quansheng; Rüegg, Andreas; Sigrist, Manfred; Soluyanov, Alexey A.
2016-10-01
The band theory of solids is arguably the most successful theory of condensed-matter physics, providing a description of the electronic energy levels in various materials. Electronic wavefunctions obtained from the band theory enable a topological characterization of metals for which the electronic spectrum may host robust, topologically protected, fermionic quasiparticles. Many of these quasiparticles are analogues of the elementary particles of the Standard Model, but others do not have a counterpart in relativistic high-energy theories. A complete list of possible quasiparticles in solids is lacking, even in the non-interacting case. Here we describe the possible existence of a hitherto unrecognized type of fermionic excitation in metals. This excitation forms a nodal chain—a chain of connected loops in momentum space—along which conduction and valence bands touch. We prove that the nodal chain is topologically distinct from previously reported excitations. We discuss the symmetry requirements for the appearance of this excitation and predict that it is realized in an existing material, iridium tetrafluoride (IrF4), as well as in other compounds of this class of materials. Using IrF4 as an example, we provide a discussion of the topological surface states associated with the nodal chain. We argue that the presence of the nodal-chain fermions will result in anomalous magnetotransport properties, distinct from those of materials exhibiting previously known excitations.
Geostatistical methods applied to field model residuals
Maule, Fox; Mosegaard, K.; Olsen, Nils
consists of measurement errors and unmodelled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyse the residuals of the Oersted(09d/04) field model [http://www.dsri.dk/Oersted/Field_models/IGRF_2005_candidates/], which is based...
Multigrid method applied to the solution of an elliptic, generalized eigenvalue problem
Alchalabi, R.M. [BOC Group, Murray Hill, NJ (United States); Turinsky, P.J. [North Carolina State Univ., Raleigh, NC (United States)
1996-12-31
The work presented in this paper is concerned with the development of an efficient multigrid (MG) algorithm for the solution of an elliptic, generalized eigenvalue problem. The method is specifically applied to the multigroup neutron diffusion equation, which is discretized using the Nodal Expansion Method (NEM). The underlying relaxation method is the Power Method, also known as the Outer-Inner Method. The inner iterations are completed using multi-color line SOR, and the outer iterations are accelerated using the Chebyshev semi-iterative method. Furthermore, the MG algorithm utilizes the consistent homogenization concept to construct the restriction operator, and a form function as a prolongation operator. The MG algorithm was integrated into the reactor neutronic analysis code NESTLE, and numerical results were obtained by solving production-type benchmark problems.
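The outer Power iteration that the multigrid algorithm accelerates can be sketched on a toy generalized eigenvalue problem M φ = (1/k) F φ; the small matrices below are hypothetical stand-ins, and a production code would replace the direct solve with inner iterations (e.g. multi-color line SOR).

```python
import numpy as np

# Power ("outer") iteration for the generalized eigenvalue problem
# M @ phi = (1/k) F @ phi of multigroup diffusion. M (losses) and
# F (fission) are small hypothetical matrices; the direct solve below
# stands in for the "inner" iterations of a production code.

M = np.array([[ 2.0, -0.5,  0.0],
              [-0.5,  2.0, -0.5],
              [ 0.0, -0.5,  2.0]])
F = np.array([[0.9, 0.2, 0.0],
              [0.2, 0.9, 0.2],
              [0.0, 0.2, 0.9]])

phi = np.ones(3)
k = 1.0
for _ in range(200):
    src = F @ phi                        # fission source
    phi = np.linalg.solve(M, src / k)    # inner solve of M phi = S / k
    k_new = k * (F @ phi).sum() / src.sum()   # eigenvalue update
    done = abs(k_new - k) < 1e-12
    k = k_new
    phi = phi / np.linalg.norm(phi)      # normalize the flux iterate
    if done:
        break

# k approximates the dominant eigenvalue of M^{-1} F
assert np.isclose(k, np.linalg.eigvals(np.linalg.solve(M, F)).real.max())
```

The unaccelerated iteration converges geometrically with the dominance ratio of the problem; Chebyshev semi-iteration (as in the paper) damps exactly this slow outer convergence.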
Mohamad Hasan Moradi
2014-12-01
In this paper a hybrid and practical method is presented to determine the location and capacity of combined heat and power (CHP) generators at a bus. The method consists of two stages. First, the buses suitable for CHP installation are found via the bus thermal coefficient; this coefficient indicates the possibility of selling heat around each bus and is calculated using a fuzzy method. Then, for each of the appropriate buses, considering the obtained heat capacity and the electrical-power-to-heat ratio of the CHPs available on the market, several CHPs are recommended. Second, the improvement of the technical criteria after CHP installation is derived using nodal pricing methods as the financial benefit of the distribution companies, while the investors' financial benefit from the sold heat output of the CHPs is also determined. Finally, using game theory, with the distribution companies and investors as the players, the suitable location and capacity for CHP installation are obtained based on the chosen game strategy. The proposed method is applied to a sample distribution feeder in the city of Hamadan and the results are shown.
Monty Adkins
2014-12-01
This paper proposes the notion of Nodalism as a means of describing contemporary culture and of understanding my own creative practice in electronic music composition. It draws on theories and ideas from Kirby, Bauman, Bourriaud, Deleuze, Guattari, and Gochenour to demonstrate how networks of ideas, or connectionist neural models of cognitive behaviour, can be used to contextualize, understand and become a creative tool for the creation of contemporary electronic music.
Topological nodal line semimetals
Fang, Chen; Weng, Hongming; Dai, Xi; Fang, Zhong
2016-11-01
We review the recent, mainly theoretical, progress in the study of topological nodal line semimetals in three dimensions. In these semimetals, the conduction and the valence bands cross each other along a one-dimensional curve in the three-dimensional Brillouin zone, and any perturbation that preserves a certain symmetry group (generated by either spatial symmetries or time-reversal symmetry) cannot remove this crossing line and open a full direct gap between the two bands. The nodal line(s) is hence topologically protected by the symmetry group, and can be associated with a topological invariant. In this review, (i) we enumerate the symmetry groups that may protect a topological nodal line; (ii) we write down the explicit form of the topological invariant for each of these symmetry groups in terms of the wave functions on the Fermi surface, establishing a topological classification; (iii) for certain classes, we review the proposals for the realization of these semimetals in real materials; (iv) we discuss how, when the protecting symmetry is broken, a topological nodal line semimetal can become a Weyl semimetal, a Dirac semimetal, or another topological phase; and (v) we discuss the possible physical effects accessible to experimental probes in these materials. Project partially supported by the National Key Research and Development Program of China (Grant Nos. 2016YFA0302400 and 2016YFA0300604), partially by the National Natural Science Foundation of China (Grant Nos. 11274359 and 11422428), the National Basic Research Program of China (Grant No. 2013CB921700), and the "Strategic Priority Research Program (B)" of the Chinese Academy of Sciences (Grant No. XDB07020100).
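A minimal two-band toy model makes the band-touching along a closed curve concrete; the Hamiltonian H(k) = (kx² + ky² − m)σx + kz σy and the parameter m below are illustrative choices, not taken from the review. Its bands E = ±√((kx²+ky²−m)² + kz²) touch exactly on the ring kx² + ky² = m in the kz = 0 plane.

```python
import numpy as np

# Toy two-band model hosting a nodal ring:
# H(k) = (kx^2 + ky^2 - m) * sigma_x + kz * sigma_y.
# The model and the parameter m are illustrative only.

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

def gap(kx, ky, kz, m=1.0):
    """Direct gap between the two bands at momentum k."""
    H = (kx**2 + ky**2 - m) * sx + kz * sy
    E = np.linalg.eigvalsh(H)
    return E[1] - E[0]

# The gap vanishes everywhere on the nodal ring kx^2 + ky^2 = m, kz = 0 ...
theta = np.linspace(0.0, 2.0 * np.pi, 8)
assert all(gap(np.cos(t), np.sin(t), 0.0) < 1e-12 for t in theta)
# ... and is open away from it
assert gap(0.5, 0.0, 0.0) > 0.1 and gap(1.0, 0.0, 0.3) > 0.1
```

Both terms of H must vanish simultaneously for the bands to touch, which is why the crossing forms a one-dimensional curve rather than isolated points.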
Thompson's Method applied to Quantum Electrodynamics (QED)
Nassif, C; Nassif, Claudio
2000-01-01
In this work we apply Thompson's method (of the dimensions) to study quantum electrodynamics (QED). This method can be considered a simple alternative to the renormalization group (RG) approach; when applied to the QED Lagrangian, it is able to obtain the running-coupling behavior $\alpha(\mu)$, namely the dependence of $\alpha$ on the energy scale. We also obtain the dependence of the mass on the energy scale. The calculations are evaluated just at $d_c=4$, where $d_c$ is the upper critical dimension of the problem, so that we obtain logarithmic behavior for both the coupling $\alpha$ and the mass $m$ as functions of the energy scale $\mu$.
Nodal Analysis of Circuits Containing Current Conveyors
T. Dostal
2001-09-01
A special method for the nodal analysis of circuits containing several types of multiport current conveyors is presented in this paper. The method is based on regular and homogeneous models of the irregular current conveyors given by gyrators. Then diakoptic solving and a modification of the inversion of the admittance matrix are applied.
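The gyrator-modelling idea can be sketched with standard nodal (admittance-matrix) analysis: a gyrator contributes the antisymmetric stamp [[0, g], [−g, 0]] to the nodal matrix and so admits a regular nodal description. The element values below are a textbook illustration (a gyrator turning a capacitor into an inductor), not the paper's specific conveyor models.

```python
import numpy as np

# Nodal analysis with a gyrator stamp. A gyrator (gyration conductance g)
# between nodes 1 and 2, with node 2 loaded by a capacitor C, presents an
# inductive input impedance Z_in = j*w*C / g**2 at node 1 (L = C / g**2).
# Element values are hypothetical.

g, C, w = 1e-3, 1e-9, 2.0 * np.pi * 1e4

# Nodal admittance matrix: gyrator stamp [[0, g], [-g, 0]] plus the
# capacitor admittance j*w*C on node 2's diagonal.
Y = np.array([[0.0,      g],
              [ -g, 1j * w * C]], dtype=complex)

I = np.array([1.0, 0.0])            # 1 A test current injected into node 1
V = np.linalg.solve(Y, I)           # nodal voltages
Z_in = V[0] / I[0]                  # input impedance at node 1

assert np.isclose(Z_in, 1j * w * C / g**2)   # behaves as L = C / g**2
```

With every conveyor replaced by such regular stamps, the whole circuit reduces to one nodal admittance matrix, which is the object the paper then solves diakoptically.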
LATE ONSET ATRIOVENTRICULAR NODAL TACHYCARDIA
PENTINGA, ML; MEEDER, JG; CRIJNS, HJGM; DEMUINCK, ED; WIESFELD, ACP; LIE, KI
1993-01-01
AV nodal tachycardia may present at any age, but onset in late adulthood is considered uncommon. To evaluate whether onset of AV nodal tachycardias at older age is related to organic heart disease (possibly setting the stage for re-entry due to degenerative structural changes) 32 consecutive patient
Computational methods applied to wind tunnel optimization
Lindsay, David
methods, coordinate transformation theorems and techniques including the Method of Jacobians, and a derivation of the fluid flow fundamentals required for the model. It applies the methods to study the effect of cross-section and fillet variation, and to obtain a sample design of a high-uniformity nozzle.
2015-01-01
Objective: To develop an easily applicable novel nodal grading system to improve the standardization of nodal classification in patients with limited lymphadenectomy. Methods: We formulated a new approach to nodal classification for this category of patients. The log-rank test was used for univariate analysis, and the Cox proportional hazards model was used for univariate and multivariate analysis. We used linear trend χ2 tests, the likelihood ratio χ2 test, and the Akaike information criterion (AIC) value to assess the homogeneity, discriminatory ability, and monotonicity of gradients of the two nodal staging systems. Results: Statistical analysis supported that both the hypothesized N’ stage and the hypothesized TN’M stage outperform the present AJCC/UICC staging system. Conclusion: We developed an easily applicable and reproducible novel nodal grading system that has greater predictive value than the current AJCC/UICC staging system for classifying gastric cancer patients with limited lymphadenectomy.
Applying the Scientific Method of Cybersecurity Research
Tardiff, Mark F.; Bonheyo, George T.; Cort, Katherine A.; Edgar, Thomas W.; Hess, Nancy J.; Hutton, William J.; Miller, Erin A.; Nowak, Kathleen E.; Oehmen, Christopher S.; Purvine, Emilie AH; Schenter, Gregory K.; Whitney, Paul D.
2016-09-15
The cyber environment has rapidly evolved from a curiosity to an essential component of the contemporary world. As the cyber environment has expanded and become more complex, so have the nature of adversaries and styles of attacks. Today, cyber incidents are an expected part of life. As a result, cybersecurity research emerged to address adversarial attacks interfering with or preventing normal cyber activities. Historical response to cybersecurity attacks is heavily skewed to tactical responses with an emphasis on rapid recovery. While threat mitigation is important and can be time critical, a knowledge gap exists with respect to developing the science of cybersecurity. Such a science will enable the development and testing of theories that lead to understanding the broad sweep of cyber threats and the ability to assess trade-offs in sustaining network missions while mitigating attacks. The Asymmetric Resilient Cybersecurity Initiative at Pacific Northwest National Laboratory is a multi-year, multi-million dollar investment to develop approaches for shifting the advantage to the defender and sustaining the operability of systems under attack. The initiative established a Science Council to focus attention on the research process for cybersecurity. The Council shares science practices, critiques research plans, and aids in documenting and reporting reproducible research results. The Council members represent ecology, economics, statistics, physics, computational chemistry, microbiology and genetics, and geochemistry. This paper reports the initial work of the Science Council to implement the scientific method in cybersecurity research. The second section describes the scientific method. The third section in this paper discusses scientific practices for cybersecurity research. Section four describes initial impacts of applying the science practices to cybersecurity research.
Applied Mathematical Methods in Theoretical Physics
Masujima, Michio
2005-04-01
All there is to know about functional analysis, integral equations and calculus of variations in a single volume. This advanced textbook is divided into two parts: the first on integral equations and the second on the calculus of variations. It begins with a short introduction to functional analysis, including a short review of complex analysis, before continuing with a systematic discussion of different types of equations, such as Volterra integral equations, singular integral equations of Cauchy type, and integral equations of the Fredholm type, with a special emphasis on Wiener-Hopf integral equations and Wiener-Hopf sum equations. After a few remarks on the historical development, the second part starts with an introduction to the calculus of variations and the relationship between integral equations and applications of the calculus of variations. It further covers applications of the calculus of variations developed in the second half of the 20th century in the fields of quantum mechanics, quantum statistical mechanics and quantum field theory. Throughout the book, the author presents over 150 problems and exercises -- many from such branches of physics as quantum mechanics, quantum statistical mechanics, and quantum field theory -- together with outlines of the solutions in each case. Detailed solutions are given, supplementing the materials discussed in the main text, allowing problems to be solved making direct use of the method illustrated. The original references are given for difficult problems. The result is complete coverage of the mathematical tools and techniques used by physicists and applied mathematicians. Intended for senior undergraduates and first-year graduates in science and engineering, this is equally useful as a reference and self-study guide.
Nodal sets in mathematical physics
Brüning, J.
2007-06-01
We describe the main lines of mathematical research dealing with nodal sets of eigenfunctions since the days of Chladni. We present the material in a form hopefully suited to a nonspecialized but mathematically educated audience.
Applying Chosen Teaching Methods in Technical Education
Henryk Noga
2014-10-01
Education, including technical education, is supposed to prepare students for adult life, not only by providing them with ready-made knowledge, but first of all by equipping students with an ability to learn, gather and select information. Active methods engage students’ senses, allowing for better understanding and remembering of the subject matter. The study shows some didactic methods used in technical education. Attention has been paid to inventive, exploratory, and inventive-exploratory methods.
A multigrid method applied to reactor kinetics
Nguyen, T.S
2006-07-01
The control and safety analysis of a nuclear reactor strongly relies on numerical simulation of reactor dynamics, in which the neutronics computation is one of the most important tasks. It is necessary to utilize a full three-dimensional model of neutron kinetics for satisfactory results but this requires an extensive computation. The purpose of this research is to explore an efficient method for accurate solution of the spatial neutron kinetics problem. The kinetics of neutrons in a nuclear reactor of practical interest is adequately represented by the few-group diffusion equations with delayed neutron effects taken into account. For solving such a space-time equation system, finite difference methods, though the simplest, must work with a very fine-mesh grid, resulting in an extremely large algebraic system whose solution by basic numerical methods encounters inefficiency. Coarse-mesh methods increase computational efficiency by reducing the number of discretized equations. However, by adding more complexity and limitations, the coarse-mesh computation is still rather time-consuming. Multigrid methods may provide an optimal solution for a large, sparse algebraic system arising from discretization of a partial differential equation or system but have not found many applications in reactor physics due to inherent difficulties. In this research, a finite difference method is used for discretization of the kinetics equations and a multigrid solver is developed to solve the discretized equation system. The Additive Correction Multigrid, the simplest and cheapest method in the multigrid family, is used for grid coarsening, allowing for reaching the coarsest grid without any difficulties. By avoiding the singularity and indefiniteness of the discretized system, the Red-Black Gauss-Seidel method is suited for multigrid smoothing and favours an implementation of parallel computation. Numerical experiments show that our multigrid solver is not only much faster than any
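The Red-Black Gauss-Seidel smoother mentioned in this abstract can be sketched on a one-dimensional model problem (a hypothetical illustration, not the author's reactor code): interior unknowns are split into two interleaved sets, so each half-sweep updates mutually independent nodes, which is what makes the smoother amenable to parallel computation.

```python
import numpy as np

def rb_gauss_seidel(u, f, h, sweeps=200):
    """Red-Black Gauss-Seidel sweeps for -u'' = f on a uniform 1-D grid.

    Boundary values u[0] and u[-1] are held fixed. Odd-indexed ("red")
    interior nodes are updated first, then even-indexed ("black") ones;
    within each half-sweep the updates do not depend on one another.
    """
    n = len(u)
    for _ in range(sweeps):
        for start in (1, 2):                  # red pass, then black pass
            idx = np.arange(start, n - 1, 2)
            u[idx] = 0.5 * (u[idx - 1] + u[idx + 1] + h**2 * f[idx])
    return u

# Laplace problem u'' = 0 with u(0) = 0, u(1) = 1: the solution is linear.
n = 9
u = np.zeros(n)
u[-1] = 1.0
u = rb_gauss_seidel(u, np.zeros(n), h=1.0 / (n - 1))
```

In a multigrid cycle, a few such sweeps damp the high-frequency error before the residual is restricted to a coarser grid.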
Neuroimaging methods applied in Parkinson's disease
Leenders, KL
2004-01-01
Radiotracer methods provide regional in vivo quantified information about specific biochemical activities in brain tissue. The understanding of the principles governing radiotracer uptake into brain tissue determines the potential value of these tracers in assessing pathophysiology of brain diseases
Applying Mixed Methods Techniques in Strategic Planning
Voorhees, Richard A.
2008-01-01
In its most basic form, strategic planning is a process of anticipating change, identifying new opportunities, and executing strategy. The use of mixed methods, blending quantitative and qualitative analytical techniques and data, in the process of assembling a strategic plan can help to ensure a successful outcome. In this article, the author…
Applying statistical methods to text steganography
Nechta, Ivan
2011-01-01
This paper presents a survey of text steganography methods used for hiding secret information inside some covertext. Widely known hiding techniques (such as translation based steganography, text generating and syntactic embedding) and detection are considered. It is shown that statistical analysis has an important role in text steganalysis.
Applying Human Computation Methods to Information Science
Harris, Christopher Glenn
2013-01-01
Human Computation methods such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. Despite this increased attention, much of this transformation has been focused on…
[The diagnostic methods applied in mycology].
Kurnatowska, Alicja; Kurnatowski, Piotr
2008-01-01
The systemic fungal invasions are recognized with increasing frequency and constitute a primary cause of morbidity and mortality, especially in immunocompromised patients. Early diagnosis improves prognosis, but remains a problem because, on the one hand, there is a lack of sensitive tests to aid in the diagnosis of systemic mycoses and, on the other, the patients present only unspecific signs and symptoms, thus delaying early diagnosis. The diagnosis depends upon a combination of clinical observation and laboratory investigation. The successful laboratory diagnosis of fungal infection depends in major part on the collection of appropriate clinical specimens for investigation and on the selection of appropriate microbiological test procedures. These problems (collection of specimens, direct techniques, staining methods, cultures on different media, and non-culture-based methods) are presented in the article.
Scanning probe methods applied to molecular electronics
Pavliček, Niko
2013-01-01
Scanning probe methods on insulating films offer a rich toolbox to study electronic, structural and spin properties of individual molecules. This work discusses three issues in the field of molecular and organic electronics. A scanning tunneling microscopy (STM) head to be operated in high magnetic fields has been designed and built up. The STM head is very compact and rigid relying on a robust coarse approach mechanism. This will facilitate investigations of the spin properties of individ...
Atrioventricular nodal reentrant tachycardia treatment using novel potential.
Ardashev, Andrey V; Makarenko, Alexandr S; Zhelyakov, Eugeny G; Shavarov, Andrey A
2010-12-01
Radiofrequency ablation of atrioventricular nodal reentrant tachycardia is commonly guided by slow and sharp bipolar potentials of the atrioventricular slow nodal pathway. We optimized the morphology of the guiding potential by unipolar mapping of the slow nodal pathway. We identified a novel unipolar dual-component atrial electrogram at the anterior limb of the coronary sinus ostium. The first component was a positive delta-wave type that corresponded to the isoelectric phase on a bipolar electrogram. The second component had fast biphasic morphology and corresponded to the R wave on a bipolar atrial electrogram. Of 104 consecutive patients with typical atrioventricular nodal reentrant tachycardia, 51 were treated with ablation guided by the novel potential, and 53 underwent ablation using the conventional technique. There was no recurrence of tachycardia in any of these patients. In those treated by the novel potential, there was significantly less radiofrequency power applied and a shorter duration of application than in patients treated by the traditional approach. The novel approach to mapping and ablation of the slow nodal pathway in atrioventricular nodal reentrant tachycardia guided by unipolar recording was safe and effective, and comparable to the traditional technique.
Versatile Formal Methods Applied to Quantum Information.
Witzel, Wayne [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Rudinger, Kenneth Michael [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sarovar, Mohan [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
2015-11-01
Using a novel formal methods approach, we have generated computer-verified proofs of major theorems pertinent to the quantum phase estimation algorithm. This was accomplished using our Prove-It software package in Python. While many formal methods tools are available, their practical utility is limited. Translating a problem of interest into these systems and working through the steps of a proof is an art form that requires much expertise. One must surrender to the preferences and restrictions of the tool regarding how mathematical notions are expressed and what deductions are allowed. Automation is a major driver that forces restrictions. Our focus, on the other hand, is to produce a tool that allows users the ability to confirm proofs that are essentially known already. This goal is valuable in itself. We demonstrate the viability of our approach that allows the user great flexibility in expressing statements and composing derivations. There were no major obstacles in following a textbook proof of the quantum phase estimation algorithm. There were tedious details of algebraic manipulations that we needed to implement (and a few that we did not have time to enter into our system) and some basic components that we needed to rethink, but there were no serious roadblocks. In the process, we made a number of convenient additions to our Prove-It package that will make certain algebraic manipulations easier to perform in the future. In fact, our intent is for our system to build upon itself in this manner.
Archer, Charles Jens; Musselman, Roy Glenn; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen; Wallenfelt, Brian Paul
2011-10-04
A massively parallel nodal computer system periodically collects and broadcasts usage data for an internal communications network. A node sending data over the network makes a global routing determination using the network usage data. Preferably, network usage data comprises an N-bit usage value for each output buffer associated with a network link. An optimum routing is determined by summing the N-bit values associated with each link through which a data packet must pass, and comparing the sums associated with different possible routes.
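The routing rule described here, summing the per-link usage values along each candidate route and picking the smallest sum, can be sketched as follows (link names and usage numbers are illustrative, not taken from the patented system):

```python
def route_cost(route, usage):
    """Total of the N-bit usage values for every link on a route."""
    return sum(usage[link] for link in route)

def best_route(routes, usage):
    """Choose the candidate route with the smallest summed usage."""
    return min(routes, key=lambda r: route_cost(r, usage))

# Illustrative mesh: two candidate routes from node A to node D.
usage = {"A-B": 3, "B-D": 1, "A-C": 2, "C-D": 5}
routes = [("A-B", "B-D"), ("A-C", "C-D")]
print(best_route(routes, usage))  # ('A-B', 'B-D'): cost 4 beats cost 7
```

The broadcast step in the abstract would keep the `usage` table approximately current at every node, so each sender can make this comparison locally.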
Archer, Charles Jens; Musselman, Roy Glenn; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen; Wallenfelt, Brian Paul
2010-03-16
A massively parallel computer system contains an inter-nodal communications network of node-to-node links. Each node implements a respective routing strategy for routing data through the network, the routing strategies not necessarily being the same in every node. The routing strategies implemented in the nodes are dynamically adjusted during application execution to shift network workload as required. Preferably, adjustment of routing policies in selective nodes is performed at synchronization points. The network may be dynamically monitored, and routing strategies adjusted according to detected network conditions.
Optimization methods applied to hybrid vehicle design
Donoghue, J. F.; Burghart, J. H.
1983-01-01
The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported on in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.
Reflections on Mixing Methods in Applied Linguistics Research
Hashemi, Mohammad R.
2012-01-01
This commentary advocates the use of mixed methods research--that is the integration of qualitative and quantitative methods in a single study--in applied linguistics. Based on preliminary findings from a research project in progress, some reflections on the current practice of mixing methods as a new trend in applied linguistics are put forward.…
邓志红; 孙玉良; 李富; Rizwan-uddin
2013-01-01
For efficient calculation of neutronic-thermal hydraulic coupling in pebble bed high temperature gas cooled reactors, a modified nodal expansion method (MNEM) was developed to solve the convection-diffusion equation in cylindrical geometry. Within the framework of the nodal expansion method, up to third-order polynomials combined with an orthogonalized exponential function were adopted as basis functions for the r-directed transverse integrated equation. Numerical tests reveal that the MNEM possesses an inherent upwind characteristic, and maintains stability and good accuracy even for coarse meshes.
Nodal bradycardia induced by tocainide.
Mandal, S. K.; Datta, S.K.
1983-01-01
A case of tocainide-induced nodal bradycardia at the standard recommended dose is reported. There was no recurrence when the drug was subsequently reintroduced at a reduced dosage. It is suggested that in the elderly, tocainide should be used at a lower dosage than normally recommended.
Symbolic Nodal Analysis of Analog Circuits with Modern Multiport Functional Blocks
C. Sanchez-Lopez
2013-06-01
This paper proposes admittance matrix models to approach the behavior of six modern multiport functional blocks: the differential difference amplifier, differential difference operational floating amplifier, differential difference operational mirror amplifier, differential difference current conveyor, current backward transconductance amplifier and current differencing transconductance amplifier. The novelty is that the behavior of any of these active devices can immediately be introduced into the nodal admittance matrix by using the proposed admittance matrix models, without requiring extra variables. Therefore, standard nodal analysis can be applied to compute fully symbolic small-signal performance parameters of analog circuits containing any of the active devices mentioned above. This means that not only is the size of the admittance matrix smaller than that generated by applying the modified nodal analysis method, for instance, but the number of nonzero elements and the generation of cancellation terms are both reduced. An analysis example for each amplifier is provided in order to show the usefulness of the proposed stamps.
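The idea of a "stamp", a fixed pattern of entries that a circuit element contributes to the nodal admittance matrix, can be illustrated numerically for the simplest case, a two-terminal conductance (a generic nodal-analysis sketch, not the symbolic multiport models proposed in the paper; function and node names are illustrative):

```python
import numpy as np

def stamp_conductance(Y, a, b, g):
    """Stamp conductance g between nodes a and b into Y (node 0 = ground).

    Ground is eliminated from the system, so node k maps to row/column k-1.
    """
    if a:
        Y[a - 1, a - 1] += g
    if b:
        Y[b - 1, b - 1] += g
    if a and b:
        Y[a - 1, b - 1] -= g
        Y[b - 1, a - 1] -= g

# 1 A source into node 1; 1 S from node 1 to node 2; 1 S from node 2 to ground.
Y = np.zeros((2, 2))
stamp_conductance(Y, 1, 2, 1.0)
stamp_conductance(Y, 2, 0, 1.0)
i = np.array([1.0, 0.0])
v = np.linalg.solve(Y, i)   # node voltages
print(v)                    # [2. 1.]
```

A symbolic variant of the same procedure, with each device contributing its own stamp, is what yields the fully symbolic transfer functions discussed in the paper.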
Nodal line optimization and its application to violin top plate design
Yu, Yonggyun; Jang, In Gwun; Kim, In Kyum; Kwak, Byung Man
2010-10-01
In the literature, most problems of structural vibration have been formulated to adjust a specific natural frequency: for example, to maximize the first natural frequency. In musical instruments like the violin, however, mode shapes are equally important because they are related to sound quality in the way that natural frequencies are related to the octave. The shapes of nodal lines, which represent the natural mode shapes, are generally known to have a unique feature for good violins. Among the few studies on mode shape optimization, one typical study addresses the optimization of nodal point location for reducing vibration in a one-dimensional beam structure. However, nodal line optimization, which is required in violin plate design, has not yet been considered. In this paper, the central idea of controlling the shape of the nodal lines is proposed and then applied to violin top plate design. A finite element model of a violin top plate was constructed using shell elements. Then, optimization was performed to minimize the square sum of the displacements of selected nodes located along the target nodal lines by varying the thicknesses of the top plate. We conducted nodal line optimization for the second and fifth modes simultaneously, and the results showed that the nodal lines obtained match well with the target nodal lines. The information on plate thickness distribution from nodal line optimization would be valuable for tailored trimming of a violin top plate for the given performances.
Preoperative staging of nodal status in gastric cancer
Berlth, Felix; Chon, Seung-Hun; Chevallay, Mickael; Jung, Minoa Karin
2017-01-01
An accurate preoperative staging of nodal status is crucial in gastric cancer, because it has a great impact on prognosis and therapeutic decision-making. Different staging methods have been evaluated for gastric cancer in order to predict nodal involvement. So far, no technique could meet the necessary requirements, which include a high detection rate of infiltrated lymph nodes and a low frequency of false-positive results. This article summarizes different staging methods used to assess lymph node status in patients with gastric cancer, evaluates the evidence, and proposes to establish new methods. PMID:28217758
Antiferromagnetic topological nodal line semimetals
Wang, Jing
2017-08-01
We study three-dimensional nodal line semimetals (NLSMs) with magnetic ordering and strong spin-orbit interaction. Two distinct classes of magnetic NLSMs are proposed. The first class is band-inversion NLSM where the accidental line node is induced by band inversion and locally protected by glide mirror plane and the combined time-reversal and inversion symmetries. This can be viewed as a trivial stacking of the two-dimensional antiferromagnetic Dirac semimetals. The second class is essential NLSM where the nodal features are filling enforced by specific magnetic symmetry group. We further provide two concrete tight-binding models for magnetic NLSMs which belong to these two different classes, respectively. We conclude with a brief discussion on the possible material venues and the experimental implications for such phases.
The impact of audit and feedback on nodal harvest in colorectal cancer
Bu Jingyu
2011-01-01
Background: Adequate nodal harvest (≥ 12 lymph nodes) in colorectal cancer has been shown to optimize staging and has been proposed as a quality indicator of colorectal cancer care. An audit within a single health district in Nova Scotia, Canada, presented and published in 2002, revealed that adequate nodal harvest occurred in only 22% of patients. The goal of this current study was to identify factors associated with adequate nodal harvest, and specifically to examine the impact of the audit and feedback strategy on nodal harvest. Methods: This population-based study included all patients undergoing resection for primary colorectal cancer in Nova Scotia, Canada, from 01 January 2001 to 31 December 2005. Linkage of the provincial cancer registry with other databases (hospital discharge, physician claims data, and national census data) provided clinicodemographic, diagnostic, and treatment-event data. Factors associated with adequate nodal harvest were examined using multivariate logistic regression. The specific interaction between year and health district was examined to identify any potential effect of dissemination of the previously performed audit. Results: Among the 2,322 patients, the median nodal harvest was 8; overall, 719 (31%) had an adequate nodal harvest. On multivariate analysis, audited health district (p Conclusions: Improvements in colorectal cancer nodal harvest did occur over time. A published audit demonstrating suboptimal nodal harvest appeared to be an effective knowledge translation tool, though more so for the audited health district, suggesting a potentially beneficial effect of audit and feedback strategies.
Building "Applied Linguistic Historiography": Rationale, Scope, and Methods
Smith, Richard
2016-01-01
In this article I argue for the establishment of "Applied Linguistic Historiography" (ALH), that is, a new domain of enquiry within applied linguistics involving a rigorous, scholarly, and self-reflexive approach to historical research. Considering issues of rationale, scope, and methods in turn, I provide reasons why ALH is needed and…
Applying Mixed Methods Research at the Synthesis Level: An Overview
Heyvaert, Mieke; Maes, Bea; Onghena, Patrick
2011-01-01
Historically, qualitative and quantitative approaches have been applied relatively separately in synthesizing qualitative and quantitative evidence, respectively, in several research domains. However, mixed methods approaches are becoming increasingly popular nowadays, and practices of combining qualitative and quantitative research components at…
Applying sociodramatic methods in teaching transition to palliative care
Baile, Walter F; Walters, Rebecca
2013-01-01
We introduce the technique of sociodrama, describe its key components, and illustrate how this simulation method was applied in a workshop format to address the challenge of discussing transition to palliative care...
The harmonics detection method based on neural network applied ...
user
Keywords: Artificial Neural Networks (ANN), p-q theory, (SAPF), Harmonics, Total Harmonic Distortion. ... Recently, some methods based on artificial intelligence have been applied in order to improve ... The effect is the reduction of...
Principal G-bundles on Nodal Curves
Usha N Bhosle
2001-08-01
Let G be a connected semisimple affine algebraic group defined over ℂ. We study the relation between stable and semistable G-bundles on a nodal curve and representations of the fundamental group of the curve. This study is done by extending the notion of (generalized) parabolic vector bundles to principal G-bundles on the desingularization of the curve and using the correspondence between them and principal G-bundles on the curve. We give an isomorphism of the stack of generalized parabolic bundles on the desingularization with a quotient stack associated to loop groups. We show that if G is simple and simply connected then the Picard group of the stack of principal G-bundles on the curve is isomorphic to a direct sum of copies of ℤ, one for each component of the curve.
Effects of Reality Therapy Methods Applied in the Classroom
Shearn, Donald F.; Randolph, Daniel Lee
1978-01-01
Reality therapy methods in the classroom were examined via a four-group experimental design. The groups were as follows: (a) pretested reality therapy, (b) unpretested reality therapy, (c) pretested placebo, and (d) unpretested placebo. Findings were not supportive of reality therapy methods as applied in the classroom. (Author)
Morozov I.A.
2012-03-01
Nowadays paroxysmal AV nodal reentrant tachycardia (AVNRT) is one of the most widespread arrhythmias. In most cases AVNRT is a recurrent process, and it worsens the quality of life of such patients, reduces their workability and increases the frequency with which they seek medical help. Thus AVNRT today receives special attention among investigators. The interest of clinicians in the problem of cardiac arrhythmias is associated with permanent dissatisfaction with the results of antiarrhythmic therapy and also with the rapid development of surgical methods of treatment, i.e. the use of radiofrequency catheter ablation.
王晓东; 欧阳洁; 王玉龙; 蒋涛
2012-01-01
A stable and efficient element-free Galerkin method is proposed for steady convection-diffusion problems. In the method, integrations are computed with a local Taylor expansion nodal integral technique. According to the degree of convection dominance, integration nodes are adaptively shifted opposite to the streamline direction. Compared with the conventional element-free Galerkin method with stabilization, the method exhibits better stability and higher efficiency in solving convection-dominated convection-diffusion problems. It is a pure meshfree method, independent of background integration, and is easy to implement.
The crowding factor method applied to parafoveal vision
Ghahghaei, Saeideh; Walker, Laura
2016-01-01
Crowding increases with eccentricity and is most readily observed in the periphery. During natural, active vision, however, central vision plays an important role. Measures of critical distance for estimating crowding are difficult in central vision, because these distances are small: any overlap of flankers with the target may create an overlay-masking confound. The crowding factor method avoids this issue by simultaneously modulating target size and flanker distance and using their ratio to compare crowded to uncrowded conditions. This method was developed and applied in the periphery (Petrov & Meleshkevich, 2011b). In this work, we apply the method to characterize crowding in parafoveal vision. We find weaker crowding than in the periphery, yet radial/tangential asymmetries are clearly preserved, and there are considerable idiosyncratic differences between participants. The crowding factor method provides a powerful tool for examining crowding in central and peripheral vision, which will be useful in future studies that seek to understand visual processing under natural, active viewing conditions. PMID:27690170
Teaching students to apply multiple physical modeling methods
Wiegers, T.; Verlinden, J.C.; Vergeest, J.S.M.
2014-01-01
Design students should be able to explore a variety of shapes before elaborating one particular shape. Current modelling courses don’t address this issue. We developed the course Rapid Modelling, which teaches students to explore multiple shape models in a short time, applying different methods and
Method for applying daytime colors to nighttime imagery in realtime
Hogervorst, M.A.; Toet, A.
2008-01-01
We present a fast and efficient method to derive and apply natural colors to nighttime imagery from multiband sensors. The color mapping is derived from the combination of a multiband image and a corresponding natural color reference image. The mapping optimizes the match between the multiband image
Hybrid method of solution applied to simulation of pulse chromatography
M. A. Cremasco
2009-06-01
In this communication, the method proposed by Cremasco et al. (2003) is applied to predict single-solute, low-concentration pulse chromatography. In previous work, a general rate model was presented to describe the breakthrough curve, and a hybrid solution was proposed for linear adsorption. The liquid-phase concentration inside the particle was found analytically and related to the bed liquid phase through Duhamel's theorem, while the bulk-phase equation was solved numerically. In this paper, the method is applied to describe pulse chromatography of solutes with linear adsorption isotherms. The simulated pulse chromatography results are compared with experimental data for aromatic amino acids from the literature.
Linear algebraic methods applied to intensity modulated radiation therapy.
Crooks, S M; Xing, L
2001-10-01
Methods of linear algebra are applied to the choice of beam weights for intensity modulated radiation therapy (IMRT). It is shown that the physical interpretation of the beam weights, target homogeneity and ratios of deposited energy can be given in terms of matrix equations and quadratic forms. The methodology of fitting using linear algebra as applied to IMRT is examined. Results are compared with IMRT plans that had been prepared using a commercially available IMRT treatment planning system and previously delivered to cancer patients.
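The linear-algebra view of beam-weight selection described above can be sketched in a few lines: with an influence matrix A mapping beam weights to voxel doses, physically admissible (non-negative) weights that best fit a dose prescription follow from non-negative least squares. The matrix and prescription below are made-up toy numbers, not values from the paper.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical influence matrix: dose per unit weight delivered by 3 beams
# to 5 voxels (illustrative numbers only).
A = np.array([[1.0, 0.2, 0.1],
              [0.8, 0.5, 0.1],
              [0.3, 1.0, 0.2],
              [0.1, 0.6, 0.9],
              [0.0, 0.2, 1.1]])
d = np.ones(5)                 # uniform prescribed target dose

w, rnorm = nnls(A, d)          # non-negative beam weights, residual norm
print(w, rnorm)
```

The residual norm `rnorm` quantifies how closely the admissible weights can reproduce the prescription, which is the kind of quadratic-form quantity the abstract refers to.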
Methods of applied mathematics with a software overview
Davis, Jon H
2016-01-01
This textbook, now in its second edition, provides students with a firm grasp of the fundamental notions and techniques of applied mathematics as well as the software skills to implement them. The text emphasizes the computational aspects of problem solving as well as the limitations and implicit assumptions inherent in the formal methods. Readers are also given a sense of the wide variety of problems in which the presented techniques are useful. Broadly organized around the theme of applied Fourier analysis, the treatment covers classical applications in partial differential equations and boundary value problems, and a substantial number of topics associated with Laplace, Fourier, and discrete transform theories. Some advanced topics are explored in the final chapters such as short-time Fourier analysis and geometrically based transforms applicable to boundary value problems. The topics covered are useful in a variety of applied fields such as continuum mechanics, mathematical physics, control theory, and si...
Singular Perturbation Method Applied to Multivariable PID Controller Design
Mashitah Che Razali
2015-01-01
Proportional-integral-derivative (PID) controllers are commonly used in the process industries due to their simple structure and high reliability. Efficient tuning is one of the key issues for this controller type, and the tuning process is especially challenging for multivariable systems and for systems with different time scales. This motivates the use of the singular perturbation method in multivariable PID (MPID) controller design. In this work, a wastewater treatment plant and the Newell and Lee evaporator were considered as case studies. Four MPID control strategies (the Davison, Penttinen-Koivo, Maciejowski, and combined methods) were applied to the systems. The singular perturbation method, based on the Naidu and Jian Niu algorithms, was applied to the MPID control design. It was found that the singularly perturbed system obtained by the Naidu method was able to maintain the system characteristics and was hence applied in the design of the MPID controllers. The closed-loop performance and process interactions were analyzed. Less computation time is required for the singularly perturbed MPID controller than for the conventional MPID controller, and the closed-loop performance shows good transient responses, low steady-state error, and less process interaction.
Using VPython to Apply Mathematics to Physics in Mathematical Methods
Demaree, Dedra; Eagan, J.; Finn, P.; Knight, B.; Singleton, J.; Therrien, A.
2006-12-01
At the College of the Holy Cross, the sophomore mathematical methods of physics students completed VPython programming projects. This is the first time VPython has been used in a physics course at this college. These projects were aimed at applying some methods learned to actual physical situations. Students first completed worksheets from North Carolina State University to learn the programming environment. They then used VPython to apply the mathematics of vectors and differential equations learned in class to solve physics situations which appear simple but are not easy to solve analytically. For most of these students it was their first programming experience. It was also one of the only chances we had to do actual physics applications during the semester due to the large amount of mathematical content covered. In addition to showcasing the students’ final programs, this poster will share their view of including VPython in this course.
Numerov numerical method applied to the Schrödinger equation
Caruso, F
2014-01-01
In this paper it is shown how to solve numerically the eigenvalue problems associated with second-order linear ordinary differential equations whose coefficients depend on the independent variable. A didactic presentation of the Numerov method is given, and it is then applied to two non-relativistic quantum problems with well-known analytical solutions: the simple harmonic oscillator and the hydrogen atom. The numerical results are compared with those obtained analytically.
A new method of AHP applied to personal credit evaluation
JIANG Ming-hui; XIONG Qi; CAO Jing
2006-01-01
This paper presents a new negative judgment matrix that combines the advantages of the reciprocal judgment matrix and the fuzzy complementary judgment matrix, and then establishes the properties of this new matrix. In view of these properties, the paper derives a clear ranking formula for the new negative judgment matrix, which improves the ranking principle of AHP. Finally, the new method is applied to personal credit evaluation to demonstrate its conciseness and speed.
Quantitative EEG Applying the Statistical Recognition Pattern Method
Engedal, Knut; Snaedal, Jon; Hoegh, Peter
2015-01-01
BACKGROUND/AIM: The aim of this study was to examine the discriminatory power of quantitative EEG (qEEG) applying the statistical pattern recognition (SPR) method to separate Alzheimer's disease (AD) patients from elderly individuals without dementia and from other dementia patients. METHODS: … accepted criteria by at least 2 clinicians. EEGs were recorded in a standardized way and analyzed independently of the clinical diagnoses, using the SPR method. RESULTS: In receiver operating characteristic curve analyses, the qEEGs separated AD patients from healthy elderly individuals with an area under the curve (AUC) of 0.90, representing a sensitivity of 84% and a specificity of 81%. The qEEGs further separated patients with Lewy body dementia or Parkinson's disease dementia from AD patients with an AUC of 0.9, a sensitivity of 85% and a specificity of 87%. CONCLUSION: qEEG using the SPR method could …
An ultrasonic guided wave method to estimate applied biaxial loads
Shi, Fan; Michaels, Jennifer E.; Lee, Sang Jun
2012-05-01
Guided waves propagating in a homogeneous plate are known to be sensitive to both temperature changes and applied stress variations. Here we consider the inverse problem of recovering homogeneous biaxial stresses from measured changes in phase velocity at multiple propagation directions using a single mode at a specific frequency. Although there is no closed form solution relating phase velocity changes to applied stresses, prior results indicate that phase velocity changes can be closely approximated by a sinusoidal function with respect to angle of propagation. Here it is shown that all sinusoidal coefficients can be estimated from a single uniaxial loading experiment. The general biaxial inverse problem can thus be solved by fitting an appropriate sinusoid to measured phase velocity changes versus propagation angle, and relating the coefficients to the unknown stresses. The phase velocity data are obtained from direct arrivals between guided wave transducers whose direct paths of propagation are oriented at different angles. This method is applied and verified using sparse array data recorded during a fatigue test. The additional complication of the resulting fatigue cracks interfering with some of the direct arrivals is addressed via proper selection of transducer pairs. Results show that applied stresses can be successfully recovered from the measured changes in guided wave signals.
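The fitting step described above can be sketched with synthetic data: model the phase-velocity change versus propagation angle as dv(θ) = c0 + c1·cos 2θ + c2·sin 2θ and recover the coefficients by linear least squares. The coefficient values, angles, and noise level below are illustrative assumptions, not measurements from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
theta = np.deg2rad(np.arange(0, 180, 15))          # assumed sparse-array path angles
true = np.array([-1.2, 0.8, 0.3])                  # assumed sinusoid coefficients
dv = true[0] + true[1]*np.cos(2*theta) + true[2]*np.sin(2*theta)
dv += rng.normal(0, 0.02, theta.size)              # synthetic measurement noise

# Linear least-squares fit of the sinusoid; the coefficients would then be
# related to the unknown biaxial stresses.
M = np.column_stack([np.ones_like(theta), np.cos(2*theta), np.sin(2*theta)])
coef, *_ = np.linalg.lstsq(M, dv, rcond=None)
print(coef)                                        # close to the true coefficients
```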
Methods for model selection in applied science and engineering.
Field, Richard V., Jr.
2004-10-01
Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be
Advanced methods for image registration applied to JET videos
Craciunescu, Teddy, E-mail: teddy.craciunescu@jet.uk [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Murari, Andrea [Consorzio RFX, Associazione EURATOM-ENEA per la Fusione, Padova (Italy); Gelfusa, Michela [Associazione EURATOM-ENEA – University of Rome “Tor Vergata”, Roma (Italy); Tiseanu, Ion; Zoita, Vasile [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Arnoux, Gilles [EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon (United Kingdom)
2015-10-15
Highlights: • Development of an image registration method for the JET IR and fast visible cameras. • Method based on SIFT descriptors and the coherent point drift point-set registration technique. • Method able to deal with extremely noisy images and very low luminosity images. • Computation time compatible with inter-shot analysis. Abstract: Recent years have witnessed a significant increase in the use of digital cameras on JET. They are routinely applied for imaging in the IR and visible spectral regions. One of the main technical difficulties in interpreting the data of camera-based diagnostics is the presence of movements of the field of view. Small movements occur due to machine shaking during normal pulses, while large ones may arise during disruptions. Some cameras show a correlation of image movement with changes of magnetic field strength. For deriving unaltered information from the videos and allowing correct interpretation, an image registration method based on highly distinctive scale-invariant feature transform (SIFT) descriptors and on the coherent point drift (CPD) point-set registration technique has been developed. The algorithm incorporates a complex procedure for rejecting outliers. The method has been applied for vibration correction to videos collected by the JET wide-angle infrared camera and for the correction of spurious rotations in the case of the JET fast visible camera (which is equipped with an image intensifier). The method has proved able to deal with the images provided by this camera, which are frequently characterized by low contrast and a high level of blurring and noise.
Mixed mesh/nodal magnetic equivalent circuit modeling of a six-phase claw-pole automotive alternator
Horvath, Daniel C.
Magnetic equivalent circuits (MECs) have been employed by many researchers to model the relationship between magnetic flux and current in electromagnetic systems such as electric machines, transformers and inductors [1], [2]. Magnetic circuits are analogous to electric circuits, where voltage, current, resistance and conductance have as their respective counterparts magnetomotive force (MMF), magnetic flux, reluctance and permeance. The solution of MECs can therefore be accomplished with the plethora of techniques developed for electrical circuit analysis. Specifically, mesh analysis, based on Kirchhoff's Voltage Law (KVL), and nodal analysis, based on Kirchhoff's Current Law (KCL), are two very common solution techniques. Once an MEC is established, the question is often which circuit analysis technique should be applied in order to minimize computational effort. For linear circuits, there is little advantage to using mesh over nodal analysis: one method may yield a system with fewer equations, but for most problems the difference in unknowns is insignificant. When analyzing nonlinear magnetic systems, however, researchers have noted a significant difference between mesh and nodal analysis. Derbas et al. have noted that for nonlinear MECs a mesh analysis reduces the number of iterations required to solve the nonlinear system using a Newton-Raphson method [3]. It was further shown that for strong nonlinearities caused by magnetic saturation, a nodal-based solution will often fail to converge whereas a mesh-based solution will converge. It is relatively easy to apply MEC analysis to stationary magnetic systems. However, modeling electric machinery with MECs can be challenging since the circuit structure can depend on the position of the rotor. Specifically, when mesh-based solution techniques are applied, the circuit components representing the airgap will tend to infinite values as stator/rotor structures (i.e. teeth) come into and out of alignment. As a result, one
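The mesh-based Newton-Raphson solution mentioned above can be sketched on a toy single-mesh MEC: a saturating core in series with an airgap, driven by an MMF source. The geometry and the cubic H(B) curve below are made-up stand-ins, not data from the thesis; the mesh (KVL) equation balances the core and gap MMF drops against the source.

```python
import math

# Hypothetical single-mesh magnetic circuit (illustrative parameters).
A, l_core = 1e-4, 0.2                 # core cross-section [m^2], path length [m]
g = 1e-3                              # airgap length [m]
mu0 = 4e-7 * math.pi
R_gap = g / (mu0 * A)                 # linear airgap reluctance
a, b = 100.0, 50.0                    # coefficients of the assumed H(B) = a*B + b*B**3
F = 500.0                             # applied MMF [A-turns]

def residual(phi):
    """Mesh (KVL) equation: core MMF drop + gap MMF drop - source."""
    B = phi / A
    return l_core * (a * B + b * B**3) + R_gap * phi - F

def d_residual(phi):
    B = phi / A
    return l_core * (a + 3 * b * B**2) / A + R_gap

phi = F / R_gap                       # initial guess: gap-dominated flux
for _ in range(50):                   # Newton-Raphson on the single mesh unknown
    step = residual(phi) / d_residual(phi)
    phi -= step
    if abs(step) < 1e-18:
        break
print(phi, residual(phi))             # converged flux, near-zero residual
```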
Maternal Nodal inversely affects NODAL and STOX1 expression in the fetal placenta
Hari Krishna Thulluru
2013-08-01
Nodal, a secreted signaling protein from the TGFβ superfamily, plays a vital role during early embryonic development. Recently, it was found that maternal decidua-specific Nodal knockout mice show intrauterine growth restriction (IUGR) and preterm birth. As NODAL lies in the same chromosomal linkage area as the susceptibility gene STOX1, which is associated with the familial form of early-onset, IUGR-complicated pre-eclampsia, their potential maternal-fetal interaction was investigated. Pre-eclamptic mothers whose children carried the STOX1 susceptibility allele all carried the NODAL H165R SNP, which causes a 50% reduction in activity. Surprisingly, in decidua-specific Nodal knockout mice the fetal placenta showed up-regulation of STOX1 and NODAL expression. Conditioned media of human first-trimester decidua and of a human endometrial stromal cell line (T-HESC) treated with siRNAs against NODAL, or carrying the H165R SNP, were also able to induce NODAL and STOX1 expression when added to SGHPL-5 first-trimester extravillous trophoblast cells. Finally, a human TGFβ-BMP signaling pathway PCR array on decidua and on the T-HESC cell line with Nodal knockdown revealed upregulation of Activin-A, which was confirmed in conditioned media by ELISA. We show that maternal decidual Nodal knockdown leads to upregulation of NODAL and STOX1 mRNA expression in fetal extravillous trophoblast cells, potentially via upregulation of Activin-A in the maternal decidua. As both Activin-A and Nodal have been implicated in pre-eclampsia, being increased in serum of pre-eclamptic women and upregulated in pre-eclamptic placentas respectively, this interaction at the maternal-fetal interface might play a substantial role in the development of pre-eclampsia.
A simple method of applying ear dressing in microtia patients
Vinita Puri
2012-01-01
Introduction: Numerous splints and ear guards have been described for dressings in microtia patients, but each has its own merits and demerits. We have devised a simple method of applying such dressings on the operating table. Materials and Methods: A rectangular piece of lubricated dressing material, such as paraffin gauze or antibiotic-impregnated dressing, is cut. The dressing material is then partially split into thirds in a staggered manner and applied to the retroauricular sulcus. The fans of the dressing material are then turned onto themselves over the projecting ear, which makes the dressing stable in its position. Results: The authors have been regularly using this dressing for reconstruction in all cases of microtia. The dressing stays firmly in place in the perioperative period and is subsequently replaced by stents. Conclusion: This is a low-cost, readily available, simple, fast and effective method of ear dressing in the perioperative period for microtia cases.
Numerical spectral methods applied to flow in highly heterogeneous aquifers
Van Lent, T.J.
1992-01-01
The small perturbation approximation is of central importance to many current stochastic approaches to groundwater flow and transport. However, the range of validity of this approximation is not clear. In this thesis, the author applies a numerical spectral approach to investigate the range of validity of the small perturbation approximation for head and specific discharge moments in one- and two-dimensional finite domains. The objectives of this thesis are threefold: first, to investigate numerical Fourier methods applicable to periodic domains (the periodic formulation allows stationarity to be approximated to an arbitrary degree); second, to apply Fourier methods to the numerical derivation of generalized covariance functions of head and specific discharge using a small perturbation approximation; and last, to use the numerical spectral methods to investigate the range of validity of the small perturbation approximation for head and specific discharge moments. The findings are that the small perturbation approximation tends to underestimate the variance of large-scale head and specific discharge fluctuations, and the error increases with increasing log-conductivity variance. Moreover, the validity of the small perturbation approximation for head depends on the log-conductivity variance, the initial log-conductivity covariance function, and the domain size. The head fluctuations are not ergodic. The specific discharge fluctuations, on the other hand, do appear ergodic, and the specific discharge moments are less affected by the choice of initial log-conductivity covariance. The small perturbation approximation performs well in estimating total variance in the longitudinal direction, but underestimates the transverse specific discharge variance.
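The periodic Fourier machinery such spectral approaches build on can be sketched in its simplest form: solving h'' = f on a periodic domain by dividing by (ik)² in Fourier space. The forcing below is an illustrative choice with a known closed-form answer, not a problem from the thesis.

```python
import numpy as np

n, L = 256, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
f = np.sin(3 * x)                       # forcing with known solution h = -sin(3x)/9

k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi   # wavenumbers of the periodic grid
fh = np.fft.fft(f)
hh = np.zeros_like(fh)
nz = k != 0
hh[nz] = fh[nz] / (-(k[nz] ** 2))       # divide by (ik)^2; zero mode fixed to 0
h = np.fft.ifft(hh).real

err = np.max(np.abs(h + np.sin(3 * x) / 9))
print(err)                              # spectral accuracy: error at machine precision
```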
Classification of Specialized Farms Applying Multivariate Statistical Methods
Zuzana Hloušková
2017-01-01
The paper applies advanced multivariate statistical methods to classify cattle-breeding farming enterprises by their economic size. The advantage of the model is its ability to use a few selected indicators, compared with the complex methodology of the current classification model, which requires knowledge of the detailed structure of the herd turnover and of the cultivated crops. The output of the paper is intended to be applied within farm structure research focused on the future development of Czech agriculture. As the data source, the 2014 farming enterprises database from the FADN CZ system has been used. The proposed predictive model exploits knowledge of the actual size classes of the farms tested. The outcomes of the linear discriminant analysis multifactor classification method supported the classification of farming enterprises into the group of Small farms (98% classified correctly) and the Large and Very Large enterprises (100% classified correctly). The Medium-Size farms were correctly classified in only 58.11% of cases.
Twisted Vector Bundles on Pointed Nodal Curves
Ivan Kausz
2005-05-01
Motivated by the quest for a good compactification of the moduli space of -bundles on a nodal curve we establish a striking relationship between Abramovich’s and Vistoli’s twisted bundles and Gieseker vector bundles.
Aircraft Nodal Data Acquisition System (ANDAS) Project
National Aeronautics and Space Administration — Development of an Aircraft Nodal Data Acquisition System (ANDAS) is proposed. The proposed methodology employs the development of a very thin (135 µm) hybrid...
Multiple nodal locoregional recurrence of pheochromocytoma
César Pablo Ramírez-Plaza
2015-01-01
Conclusion: Isolated lymph nodal recurrence is very rare in malignant PCC, with only 7 cases previously published. The role of surgery is essential for achieving long-term survival because it provides clinical and functional control of the disease.
Aircraft Nodal Data Acquisition System (ANDAS) Project
National Aeronautics and Space Administration — Development of an Aircraft Nodal Data Acquisition System (ANDAS) based upon the short haul Zigbee networking standard is proposed. It employs a very thin (135 um)...
Clinico-pathological significance of extra-nodal spread in special types of breast cancer
Ecmel Isik Kaygusuz; Handan Cetiner; Hulya Yavuz
2014-01-01
Objective: To investigate the significance of extra-nodal spread in special histological sub-types of breast cancer and the relationship of such spread with prognostic parameters. Methods: A total of 303 breast cancer cases were classified according to tumor type, and each tumor group was subdivided according to age, tumor diameter, lymph node metastasis, extra-nodal spread, vein invasion in the adjacent soft tissue, distant metastasis, and immunohistochemical characteristics [estrogen receptor (ER) and progesterone receptor (PR) status, p53, c-erbB-2, and proliferative rate (Ki-67)]. The 122 cases with extra-nodal spread were clinically followed up. Results: Extra-nodal spread was observed in 40% (122 cases) of the 303 breast cancer cases. The spread was most frequent in the micropapillary carcinoma histological sub-type (40 cases, 75%) and least frequent in mucinous carcinoma (2 cases, 8%). Patients with extra-nodal spread had a higher average number of metastatic lymph nodes (8.3) and a higher rate of distant metastasis (38 cases, 31%) than patients without extra-nodal spread. Conclusion: The existence of extra-nodal spread in the examined breast cancer sub-types has predictive value in forecasting the number of metastatic lymph nodes and the disease prognosis.
Gamma neutron method applied to field measurement of hydrodynamic dispersion
Brissaud, F.; Pappalardo, A.; Couchat, Ph.
1983-06-01
The gamma-neutron method is applied to the study of solute movements during field irrigations under steady-state and transient hydrodynamic conditions. Two different types of behavior are discussed. In the first, the velocity of the labeled water pulse is consistent with the conservation of the vertical water flux and, when the deuterated water concentration profiles are mass-conservative, the experimental results are accurately described by the dispersion equation. In the second, the pore water velocity differs considerably from that of strictly vertical displacements and the concentration profiles are not mass-conservative.
Distributions of Nodal Prices in PJM Market
Kunio, Matsumoto; Yoshio, Ichida; Michiko, Makino; Hiroaki, Tanaka
As the deregulation of the electricity business proceeds, it is important to analyze the distributions of prices in the power market. In this paper, we analyze the nodal prices of the PJM market, which is representative of power markets in the US. First, we verify the Weibull property of the distribution of nodal prices. Then we verify the Poisson property of the intervals of the loss process.
Element-free Galerkin method applied to quantum dot and quantum well nanostructures
Sperotto, Lucas Kriesel [Instituto Tecnologico de Aeronautica (ITA/IEAv), Sao Jose dos Campos, SP (Brazil). Instituto de Estudos Avancados; Passaro, Angelo; Tanaka, Roberto Y. [Instituto de Estudos Avancados (IEAv), Sao Jose dos Campos, SP (Brazil); Marques, Gleber N. [Universidade do Estado de Mato Grosso (UNEMAT), MT (Brazil)
2012-07-01
The development of native technologies for the fabrication of infrared photodetectors based on quantum wells and quantum dots is the goal of a set of Brazilian research institutes and universities gathered in a National Institute for Science and Technology. The research covers all phases of the production of such devices in Brazil, from design to the growth of nanostructured semiconductors and the processing and characterization of samples. In this context, a set of computer programs has been developed in recent years to assist the design of such structures, some of them based on the finite element method (FEM). The element-free Galerkin method (EFGM) is an attractive numerical alternative to the FEM. An EFGM approximation requires a set of nodal points and the shape functions associated with each node; in this sense it is similar to the FEM. In the EFGM, moving least squares (MLS) is used to build highly continuous shape functions, which in turn yield highly continuous approximations (solutions). The assembly of the final linear system requires support for numerical integration, which in this work is provided by the same triangular mesh generated for the FEM. One of the main drawbacks of the EFGM is the reproduction of the physical discontinuities inherent to each phenomenon, i.e. discontinuities of the state variable and/or of its spatial derivatives. If no additional numerical treatment is adopted, spurious oscillations arise in the approximation near the discontinuity lines. For instance, auxiliary techniques such as domain truncation have been successfully applied to the treatment of material interfaces in the computation of electrostatic and electromagnetic fields. Although the EFGM has been successfully tested for one-dimensional quantum well structures, additional techniques, e.g. Lagrange multipliers, are required for enforcing the Dirichlet boundary conditions, which spoil the symmetrical character of the final
Efficient electronic structure methods applied to metal nanoparticles
Larsen, Ask Hjorth
Nano-scale structures are increasingly applied in the design of catalysts and electronic devices. A theoretical understanding of the basic properties of such systems is enabled through modern electronic structure methods such as density functional theory. This thesis describes the development … more strongly than large ones. This can be understood mostly as a geometric effect. Convergence of chemisorption energies within 0.1 eV of bulk values happens at about 200 atoms for Pt and 600 atoms for Au. Particularly for O on Au, large variations due to electronic effects are seen for smaller clusters. … The basis set method is used to study the electronic effects for the contiguous range of clusters up to several hundred atoms. The s-electrons hybridize to form electronic shells consistent with the jellium model, leading to electronic magic numbers for clusters with full shells. Large electronic gaps …
Applied systems ecology: models, data, and statistical methods
Eberhardt, L L
1976-01-01
In this report, systems ecology is largely equated to mathematical or computer simulation modelling. The need for models in ecology stems from the necessity to have an integrative device for the diversity of ecological data, much of which is observational, rather than experimental, as well as from the present lack of a theoretical structure for ecology. Different objectives in applied studies require specialized methods. The best predictive devices may be regression equations, often non-linear in form, extracted from much more detailed models. A variety of statistical aspects of modelling, including sampling, are discussed. Several aspects of population dynamics and food-chain kinetics are described, and it is suggested that the two presently separated approaches should be combined into a single theoretical framework. It is concluded that future efforts in systems ecology should emphasize actual data and statistical methods, as well as modelling.
Adapted G-mode Clustering Method applied to Asteroid Taxonomy
Hasselmann, Pedro H.; Carvano, Jorge M.; Lazzaro, D.
2013-11-01
The original G-mode was a clustering method developed by A. I. Gavrishin in the late 60's for the geochemical classification of rocks, but it was also applied to asteroid photometry, cosmic rays, lunar samples and planetary science spectroscopy data. In this work, we used an adapted version to classify the asteroid photometry from the SDSS Moving Objects Catalog. The method works by identifying normal distributions in a multidimensional space of variables. The identification starts by locating a set of points with the smallest mutual distance in the sample, which is a problem when the data are not planar. Here we present a modified version of the G-mode algorithm, previously written in FORTRAN 77, now implemented in Python 2.7 using the NumPy, SciPy and Matplotlib packages. NumPy was used for array and matrix manipulation and Matplotlib for plot control. SciPy had an important role in speeding up G-mode: scipy.spatial.distance.mahalanobis was chosen as the distance estimator and numpy.histogramdd was applied to find the initial seeds from which clusters evolve. SciPy was also used to quickly produce dendrograms showing the distances among clusters. Finally, results for asteroid taxonomy and tests for different sample sizes and implementations are presented.
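The two library calls the abstract names can be illustrated with a minimal sketch. The 3-D sample, bin count and use of the sample mean as a reference point are invented for illustration; this is not the adapted G-mode algorithm itself:

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

# Hypothetical sample: 500 points in a 3-D photometric-variable space
rng = np.random.default_rng(0)
sample = rng.normal(size=(500, 3))

# Seed search: the densest bin of a multidimensional histogram is a
# plausible starting region for a cluster, as suggested by the use of
# numpy.histogramdd in the abstract.
hist, edges = np.histogramdd(sample, bins=8)
seed_bin = np.unravel_index(np.argmax(hist), hist.shape)

# Mahalanobis distance of every point to the sample mean, using the
# inverse covariance matrix (VI) that scipy's estimator requires.
mu = sample.mean(axis=0)
VI = np.linalg.inv(np.cov(sample, rowvar=False))
d = np.array([mahalanobis(x, mu, VI) for x in sample])
```

In a full G-mode pass, the points in and around `seed_bin` would form the initial seed, and the Mahalanobis distances would decide which points join the evolving cluster.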
Matched-filtering Line Search Methods Applied to Suzaku Data
Miyazaki, Naoto; Enoto, Teruaki; Axelsson, Magnus; Ohashi, Takaya
2016-01-01
A detailed search for emission and absorption lines and assessing their upper limits are performed for Suzaku data. The method utilizes a matched-filtering approach to maximize the signal-to-noise ratio for a given energy resolution, which could be applicable to many types of line search. We first applied it to well-known AGN spectra that have been reported to have ultra-fast outflows, and find that our results are consistent with previous findings at the ~3σ level. We proceeded to search for emission and absorption features in the two bright magnetars 4U 0142+61 and 1RXS J1708-4009, applying the filtering method to Suzaku data. We found that neither source showed any significant indication of line features, even using long Suzaku observations and dividing their spectra into spin phases. The upper limits on the equivalent width of emission/absorption lines are constrained to be a few eV at ~1 keV, and a few hundreds of eV at ~10 keV. This strengthens previous reports that persistently bright magnetars ...
A Multifactorial Analysis of Reconstruction Methods Applied After Total Gastrectomy
Oktay Büyükaşık
2010-12-01
Aim: The aim of this study was to evaluate the reconstruction methods applied after total gastrectomy in terms of postoperative symptomatology and nutrition. Methods: This retrospective study was conducted on 31 patients who underwent total gastrectomy due to gastric cancer in the 2nd Clinic of General Surgery, SSK Ankara Training Hospital. Six different reconstruction methods were used and analyzed in terms of age, sex and postoperative complications. One biopsy specimen from the esophagus and two from the jejunum were taken through upper gastrointestinal endoscopy from all cases, and late-period morphological and microbiological changes were examined. Postoperative weight change, dumping symptoms, reflux esophagitis, solid/liquid dysphagia, early satiety, postprandial pain, diarrhea and anorexia were assessed. Results: Of the 31 patients, 18 were males and 13 females; the youngest was 33 years old and the oldest 69. Reconstruction without a pouch was performed in 22 cases and with a pouch in 9 cases. Early satiety, postprandial pain, dumping symptoms, diarrhea and anemia were found most commonly in cases with reconstruction without a pouch. The rate of bacterial colonization of the jejunal mucosa was identical in both groups. Reflux esophagitis was most commonly seen in omega esophagojejunostomy (EJ), and least commonly in Roux-en-Y, Tooley and Tanner 19 EJ. Conclusion: Reconstruction with a pouch performed after total gastrectomy is still a preferable method. (The Medical Bulletin of Haseki 2010; 48:126-31)
The virtual fields method applied to spalling tests on concrete
Pierron, F.; Forquin, P.
2012-08-01
For one decade, spalling techniques based on the use of a metallic Hopkinson bar put in contact with a concrete sample have been widely employed to characterize the dynamic tensile strength of concrete at strain-rates ranging from a few tens to two hundred s-1. However, the processing method, mainly based on the velocity profile measured on the rear free surface of the sample (Novikov formula), remains quite basic, and an identification of the whole softening behaviour of the concrete is out of reach. In the present paper a new processing method is proposed based on the Virtual Fields Method (VFM). First, a digital high-speed camera is used to record pictures of a grid glued on the specimen. Next, full-field measurements are used to obtain the axial displacement field at the surface of the specimen. Finally, a specific virtual field has been defined in the VFM equation to use the acceleration map as an alternative `load cell'. This method, applied to three spalling tests, allowed identification of Young's modulus during the test. It was shown that this modulus is constant during the initial compressive part of the test and decreases in the tensile part when micro-damage exists. It was also shown that in such a simple inertial test it was possible to reconstruct average axial stress profiles using only the acceleration data. It was then possible to construct local stress-strain curves and derive a tensile strength value.
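The idea of using the acceleration map as an embedded load cell can be sketched in one dimension: for a bar with a free end at x = L, equilibrium gives the mean axial stress sigma(x) = rho * integral from x to L of the acceleration. The density, specimen length and acceleration profile below are assumed placeholders, not values from the experiment:

```python
import numpy as np

rho = 2400.0                        # concrete density, kg/m^3 (assumed)
L = 0.14                            # specimen length, m (assumed)
x = np.linspace(0.0, L, 141)
a = 1.0e5 * np.sin(np.pi * x / L)   # synthetic acceleration field, m/s^2

# Cumulative trapezoidal integral of a from 0 to each section x
cum = np.concatenate(([0.0], np.cumsum(0.5 * (a[:-1] + a[1:]) * np.diff(x))))

# Stress at each section: rho times the integral from x to the free end
sigma = rho * (cum[-1] - cum)
```

By construction the stress vanishes at the free surface and is largest at the impacted end, which is the qualitative shape such reconstructed profiles should have.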
Six Sigma methods applied to cryogenic coolers assembly line
Ventre, Jean-Marc; Germain-Lacour, Michel; Martin, Jean-Yves; Cauquil, Jean-Marc; Benschop, Tonny; Griot, René
2009-05-01
Six Sigma methods have been applied to the manufacturing process of a rotary Stirling cooler, the RM2. The project is named NoVa, as the main goal of the Six Sigma approach is to reduce variability (No Variability). The project has been based on the DMAIC guideline, following five stages: Define, Measure, Analyse, Improve, Control. The objective has been set on the rate of coolers succeeding the performance test at first attempt, with a goal value of 95%. A team has been gathered involving people and skills acting on the RM2 manufacturing line. Measurement System Analysis (MSA) has been applied to the test bench, and results after the R&R gage show that measurement is one of the root causes for variability in the RM2 process. Two more root causes have been identified by the team after process mapping analysis: regenerator filling factor and cleaning procedure. Causes for measurement variability have been identified and eradicated, as shown by new results from the R&R gage. Experimental results show that the regenerator filling factor impacts process variability and affects yield. An improved process has been set up after a new calibration process for the test bench, a new filling procedure for the regenerator and an additional cleaning stage were implemented. The objective of 95% of coolers succeeding the performance test at first attempt has been reached and kept for a significant period. The RM2 manufacturing process is now managed according to Statistical Process Control based on control charts. Improvements in process capability have enabled the introduction of a sample testing procedure before delivery.
Analytical methods applied to diverse types of Brazilian propolis
2011-01-01
Propolis is a bee product, composed mainly of plant resins and beeswax; therefore its chemical composition varies with the geographic and plant origins of these resins, as well as the species of bee. Brazil is an important supplier of propolis on the world market and, although the green-colored propolis from the southeast is the best known and most studied, several other types of propolis from Apis mellifera and native stingless bees (also called cerumen) can be found. Propolis is usually consumed as an extract, so the type of solvent and the extractive procedures employed further affect its composition. Methods used for extraction; analysis of the percentages of resins, wax and insoluble material in crude propolis; and determination of phenolic, flavonoid, amino acid and heavy metal contents are reviewed herein. Different chromatographic methods applied to the separation, identification and quantification of Brazilian propolis components and their relative strengths are discussed, as well as direct-insertion mass spectrometry fingerprinting. Propolis has been used as a popular remedy for several centuries for a wide array of ailments. Its antimicrobial properties, present in propolis from different origins, have been extensively studied. More recently, the anti-parasitic, anti-viral/immune-stimulating, healing, anti-tumor, anti-inflammatory, antioxidant and analgesic activities of diverse types of Brazilian propolis have been evaluated. The most common methods employed and overviews of their relative results are presented. PMID:21631940
A gait planning method applied to hexapod biomimetic robot locomotion
Chen Fu; Yan Jihong; Zang Xizhe; Zhao Jie
2009-01-01
In order to fulfill the goal of autonomous walking on rough terrain, a distributed gait planning method applied to hexapod biomimetic robot locomotion is proposed, based on research into the gait coordination mechanism of the stick insect. The mathematical relation between walking velocity and gait pattern was derived, a set of local rules operating between adjacent legs was put forward, and a distributed network of local rules for gait control was constructed. Through the interaction of adjacent legs, adaptive adjustment of the phase sequence of the walking legs, in response to changes of terrain conditions or variations of walking speed, was implemented to generate statically stable gaits. In the simulation experiments, adaptive adjustment of the inter-leg phase sequence and smooth transitions of velocity and gait pattern were realized while static stability was ensured, which provides the hexapod robot with the capability of walking on rough terrain stably and expeditiously.
Structural dynamic responses analysis applying differential quadrature method
PU Jun-ping; ZHENG Jian-jun
2006-01-01
Unconditionally stable higher-order accurate time step integration algorithms based on the differential quadrature method (DQM) for second-order initial value problems were applied, and the quadrature rules of the DQM, the computation of the weighting coefficients and the choice of sampling grid points were discussed. Several numerical examples were computed: a heat transfer problem; the second-order differential equation of forced vibration of linear single-degree-of-freedom and two-degree-of-freedom systems; a nonlinear equation of motion; and a beam forced by a changing load. The results indicated that the algorithm can produce highly accurate solutions with minimal time consumption, and that the total energy of the system remains conserved in the numerical computation.
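The weighting coefficients at the heart of the DQM can be sketched with the standard Lagrange-based formula: a_ij = M(x_i) / ((x_i - x_j) M(x_j)) for i != j, with the diagonal fixed by the rows summing to zero. The grid and test function below are chosen only for illustration:

```python
import numpy as np

def dqm_weights(x):
    """First-derivative DQM weighting coefficients on grid points x."""
    n = len(x)
    # M[i] = product over k != i of (x[i] - x[k])
    M = np.array([np.prod([x[i] - x[k] for k in range(n) if k != i])
                  for i in range(n)])
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                A[i, j] = M[i] / ((x[i] - x[j]) * M[j])
        A[i, i] = -A[i].sum()   # rows sum to zero (exactness for constants)
    return A

# The weights differentiate polynomials of degree < n exactly, e.g. f = x^2
x = np.linspace(0.0, 1.0, 5)
A = dqm_weights(x)
df = A @ x**2                   # approximates f'(x) = 2x
```

The same weight matrix, applied twice or rebuilt for higher derivatives, is what turns the initial value problems in the paper into small algebraic systems.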
SIMULATE-4 multigroup nodal code with microscopic depletion model
Bahadir, T. [Studsvik Scandpower, Inc., Newton, MA (United States); Lindahl, St.O. [Studsvik Scandpower AB, Vasteras (Sweden); Palmtag, S.P. [Studsvik Scandpower, Inc., Idaho Falls, ID (United States)
2005-07-01
SIMULATE-4 is a three-dimensional multigroup analytical nodal code with microscopic depletion capability. It has been developed employing 'first-principles models', thus avoiding ad hoc approximations. The multigroup diffusion equations or, optionally, the simplified P{sub 3} equations are solved. Cross sections are described by a hybrid microscopic-macroscopic model that includes approximately 50 heavy nuclides and fission products. Heterogeneities in the axial direction of an assembly are treated systematically. Radially, the assembly is divided into heterogeneous sub-meshes, thereby overcoming the shortcomings of spatially-averaged assembly cross sections and discontinuity factors generated with zero net-current boundary conditions. Numerical tests against higher order transport methods and critical experiments show substantial improvements compared to results of existing nodal models. (authors)
Metrological evaluation of characterization methods applied to nuclear fuels
Faeda, Kelly Cristina Martins; Lameiras, Fernando Soares; Camarano, Denise das Merces; Ferreira, Ricardo Alberto Neto; Migliorini, Fabricio Lima; Carneiro, Luciana Capanema Silva; Silva, Egonn Hendrigo Carvalho, E-mail: kellyfisica@gmail.co, E-mail: fernando.lameiras@pq.cnpq.b, E-mail: dmc@cdtn.b, E-mail: ranf@cdtn.b, E-mail: flmigliorini@hotmail.co, E-mail: lucsc@hotmail.co, E-mail: egonn@ufmg.b [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)
2010-07-01
In manufacturing nuclear fuel, characterizations are performed in order to assure the minimization of harmful effects. Uranium dioxide is the substance most used as nuclear reactor fuel because of many advantages, such as high stability even in contact with water at high temperatures, a high melting point, and a high capacity to retain fission products. Several methods are used for the characterization of nuclear fuels, such as thermogravimetric analysis for the O/U ratio, the penetration-immersion method, helium pycnometry and mercury porosimetry for density and porosity, the BET method for specific surface, chemical analyses for relevant impurities, and the laser flash method for thermophysical properties. Specific tools are needed to control the diameter and sphericity of the microspheres and the properties of the coating layers (thickness, density, and degree of anisotropy). Other methods can also give information, such as scanning and transmission electron microscopy, X-ray diffraction, microanalysis, and secondary ion mass spectrometry for chemical analysis. The accuracy of measurement and the level of uncertainty of the resulting data are important. This work describes a general metrological characterization of some techniques applied to the characterization of nuclear fuel. Sources of measurement uncertainty were analyzed. The purpose is to summarize selected properties of UO{sub 2} that have been studied by CDTN in a program of fuel development for Pressurized Water Reactors (PWR). The selected properties are crucial for thermal-hydraulic codes used to study design-basis accidents. The work focused on the thermal characterization (thermal diffusivity and thermal conductivity) and the penetration-immersion method (density and open porosity) of UO{sub 2} samples. The thermal characterization of UO{sub 2} samples was determined by the laser flash method between room temperature and 448 K. The adaptive Monte Carlo Method was used to obtain the endpoints of
Single-Case Designs and Qualitative Methods: Applying a Mixed Methods Research Perspective
Hitchcock, John H.; Nastasi, Bonnie K.; Summerville, Meredith
2010-01-01
The purpose of this conceptual paper is to describe a design that mixes single-case (sometimes referred to as single-subject) and qualitative methods, hereafter referred to as a single-case mixed methods design (SCD-MM). Minimal attention has been given to the topic of applying qualitative methods to SCD work in the literature. These two…
Supervised Machine Learning Methods Applied to Predict Ligand- Binding Affinity.
Heck, Gabriela S; Pintro, Val O; Pereira, Richard R; de Ávila, Mauricio B; Levin, Nayara M B; de Azevedo, Walter F
2017-01-01
Calculation of ligand-binding affinity is an open problem in computational medicinal chemistry. The ability to computationally predict affinities has a beneficial impact in the early stages of drug development, since it allows a mathematical model to assess protein-ligand interactions. Due to the availability of structural and binding information, machine learning methods have been applied to generate scoring functions with good predictive power. Our goal here is to review recent developments in the application of machine learning methods to predict ligand-binding affinity. We focus our review on the application of computational methods to predict binding affinity for protein targets. In addition, we also describe the major available databases for experimental binding constants and protein structures. Furthermore, we explain the most successful methods to evaluate the predictive power of scoring functions. Association of structural information with ligand-binding affinity makes it possible to generate scoring functions targeted to a specific biological system. Through regression analysis, these data can be used as a basis to generate mathematical models to predict ligand-binding affinities, such as inhibition constant, dissociation constant and binding energy. Experimental biophysical techniques have been able to determine the structures of over 120,000 macromolecules. Considering also the evolution of binding affinity information, we may say that we have a promising scenario for the development of scoring functions making use of machine learning techniques. Recent developments in this area indicate that building scoring functions targeted to the biological systems of interest shows superior predictive performance, when compared with other approaches.
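The regression step described above can be illustrated with a minimal sketch: invented structural descriptors and weights, with ordinary least squares standing in for the more elaborate machine learning regressors the review covers:

```python
import numpy as np

# Toy 'scoring function': map structural descriptors (imagine contact
# counts, polar surface area, etc.) to a binding affinity such as pKd.
# All descriptors, weights and noise levels are invented.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))                  # 200 complexes, 4 descriptors
w_true = np.array([1.5, -0.8, 0.3, 0.0])
y = X @ w_true + 0.05 * rng.normal(size=200)   # noisy affinities

# Fit by ordinary least squares with an intercept column
Xb = np.column_stack([X, np.ones(len(X))])
w_fit, *_ = np.linalg.lstsq(Xb, y, rcond=None)
```

A system-specific scoring function of the kind the review advocates would be trained the same way, but on descriptors and affinities drawn from the binding databases it describes, and typically with nonlinear regressors.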
Fast multipole method applied to Lagrangian simulations of vortical flows
Ricciardi, Túlio R.; Wolf, William R.; Bimbato, Alex M.
2017-10-01
Lagrangian simulations of unsteady vortical flows are accelerated by the multi-level fast multipole method, FMM. The combination of the FMM algorithm with a discrete vortex method, DVM, is discussed for free domain and periodic problems with focus on implementation details to reduce numerical dissipation and avoid spurious solutions in unsteady inviscid flows. An assessment of the FMM-DVM accuracy is presented through a comparison with the direct calculation of the Biot-Savart law for the simulation of the temporal evolution of an aircraft wake in the Trefftz plane. The role of several parameters such as time step restriction, truncation of the FMM series expansion, number of particles in the wake discretization and machine precision is investigated and we show how to avoid spurious instabilities. The FMM-DVM is also applied to compute the evolution of a temporal shear layer with periodic boundary conditions. A novel approach is proposed to achieve accurate solutions in the periodic FMM. This approach avoids a spurious precession of the periodic shear layer and solutions are shown to converge to the direct Biot-Savart calculation using a cotangent function.
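The direct Biot-Savart calculation used as the reference in the comparison can be sketched for 2-D point vortices. The counter-rotating pair and the core-smoothing parameter `delta` below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def biot_savart_2d(pos, gamma, delta=1e-6):
    """Direct O(N^2) Biot-Savart summation for 2-D point vortices.
    pos: (N, 2) positions, gamma: (N,) circulations, delta: small
    core smoothing to suppress the self-induced singularity."""
    dx = pos[:, 0][:, None] - pos[:, 0][None, :]   # x_i - x_j
    dy = pos[:, 1][:, None] - pos[:, 1][None, :]
    r2 = dx**2 + dy**2 + delta**2
    u = (-dy / r2 * gamma[None, :]).sum(axis=1) / (2 * np.pi)
    v = ( dx / r2 * gamma[None, :]).sum(axis=1) / (2 * np.pi)
    return np.column_stack([u, v])

# A counter-rotating pair (a crude wake model): each vortex advects the
# other, so the pair translates at speed Gamma / (2*pi*d).
pos = np.array([[0.0, 0.0], [1.0, 0.0]])
gamma = np.array([-1.0, 1.0])
vel = biot_savart_2d(pos, gamma)
```

The FMM replaces this quadratic-cost double loop with hierarchical multipole expansions; comparing the two on such configurations is how the accuracy assessment in the paper proceeds.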
Potassium fertilizer applied by different methods in the zucchini crop
Carlos N. V. Fernandes
Aiming to evaluate the effect of potassium (K) doses applied by the conventional method and by fertigation in zucchini (Cucurbita pepo L.), a field experiment was conducted in Fortaleza, CE, Brazil. The statistical design was a randomized block with four replicates, in a 4 x 2 factorial scheme, corresponding to four doses of K (0, 75, 150 and 300 kg K2O ha-1) and two fertilization methods (conventional and fertigation). The analyzed variables were: fruit mass (FM), number of fruits (NF), fruit length (FL), fruit diameter (FD), pulp thickness (PT), soluble solids (SS), yield (Y), water use efficiency (WUE) and potassium use efficiency (KUE), besides an economic analysis using the net present value (NPV), internal rate of return (IRR) and payback period (PP). K doses influenced FM, FD, PT and Y, which increased linearly, with the highest value estimated at 36,828 kg ha-1 for the highest K dose (300 kg K2O ha-1). This dose was also responsible for the largest WUE, 92 kg ha-1 mm-1. KUE showed quadratic behavior, and the dose of 174 kg K2O ha-1 led to its maximum value of 87.41 kg ha-1 (kg K2O ha-1)-1. All treatments were economically viable, and the most profitable months were May, April, December and November.
Nodal Quasiparticle in Pseudogapped Colossal Magnetoresistive Manganites
Mannella, N.
2010-06-02
A characteristic feature of the copper oxide high-temperature superconductors is the dichotomy between the electronic excitations along the nodal (diagonal) and antinodal (parallel to the Cu-O bonds) directions in momentum space, generally assumed to be linked to the d-wave symmetry of the superconducting state. Angle-resolved photoemission measurements in the superconducting state have revealed a quasiparticle spectrum with a d-wave gap structure that exhibits a maximum along the antinodal direction and vanishes along the nodal direction. Subsequent measurements have shown that, at low doping levels, this gap structure persists even in the high-temperature metallic state, although the nodal points of the superconducting state spread out in finite Fermi arcs. This is the so-called pseudogap phase, and it has been assumed that it is closely linked to the superconducting state, either by assigning it to fluctuating superconductivity or by invoking orders which are natural competitors of d-wave superconductors. Here we report experimental evidence that a very similar pseudogap state with a nodal-antinodal dichotomous character exists in a system that is markedly different from a superconductor: the ferromagnetic metallic ground state of the colossal magnetoresistive bilayer manganite La{sub 1.2}Sr{sub 1.8}Mn{sub 2}O{sub 7}. Our findings therefore cast doubt on the assumption that the pseudogap state in the copper oxides and the nodal-antinodal dichotomy are hallmarks of the superconducting state.
S. Amir ali akbari
2005-01-01
Background and purpose: The withdrawal method is one of the most common contraceptive methods (23% nationwide). Given its relatively high failure rate (4-23%), with unwanted pregnancies as well as resulting complications, this descriptive study was conducted to find out the reasons for not applying safe contraceptive methods. Materials and methods: The subject population consisted of 379 married women between 15 and 45 living in Amol city and using the withdrawal method. The subjects were selected through different stages with final randomization. Data were collected by a questionnaire documenting personal, social, and obstetrical history, and the reasons for using the withdrawal method as well as for not applying other methods. The validity and reliability of the tool were determined by content validity and the test-retest method, respectively. The Pearson coefficient of correlation as well as the t-test and ANOVA were used for quantitative and qualitative variables, respectively. Results: The mean age of the subjects, who had used the withdrawal method for at least 3 months, was 28.1 ± 6.4. The mean duration of using the withdrawal method was 53.6 ± 6.6 months. Health care workers were the most frequent source of information (26.6%), and the main reasons for not applying safe contraceptive methods included fear of side effects (28.2%), husband opposition (28.2%), contraindication (9%), lactation (8.7%) and indifference to conception (7.7%). The most common reasons for applying the withdrawal method were husband preference (25.1%), fewer complications (20.3%), availability (19.5%) and being perceived as safer than other methods (12.7%). Conclusion: Much more effort should be made to devise and perform family planning programs correctly, to train health care workers continuously, and to educate people, whether through mass media or professionals, to encourage applying safe contraceptive methods.
Flow-based market coupling. Stepping stone towards nodal pricing?
Van der Welle, A.J. [ECN Policy Studies, Petten (Netherlands)
2012-07-15
To achieve a single internal energy market for electricity by 2014, market coupling is deployed to integrate national markets into regional markets and ultimately into one European electricity market. The extent to which markets can be coupled depends on the available transmission capacities between countries. Since interconnections are congested from time to time, congestion management methods are deployed to divide the scarce available transmission capacities among market participants. For further optimization of the use of available transmission capacities while maintaining current security-of-supply levels, flow-based market coupling (FBMC) will be implemented in the CWE region by 2013. Although this is an important step forward, important hurdles for efficient congestion management remain. Hence, flow-based market coupling is compared to nodal pricing, which is often considered the optimal solution from a theoretical perspective. In the context of decarbonised power systems it is concluded that the advantages of nodal pricing are likely to exceed its disadvantages, warranting further development of FBMC in the direction of nodal pricing.
Applying sociodramatic methods in teaching transition to palliative care.
Baile, Walter F; Walters, Rebecca
2013-03-01
We introduce the technique of sociodrama, describe its key components, and illustrate how this simulation method was applied in a workshop format to address the challenge of discussing transition to palliative care. We describe how warm-up exercises prepared 15 learners who provide direct clinical care to patients with cancer for a dramatic portrayal of this dilemma. We then show how small-group brainstorming led to the creation of a challenging scenario wherein highly optimistic family members of a 20-year-old young man with terminal acute lymphocytic leukemia responded to information about the lack of further anticancer treatment with anger and blame toward the staff. We illustrate how the facilitators, using sociodramatic techniques of doubling and role reversal, helped learners to understand and articulate the hidden feelings of fear and loss behind the family's emotional reactions. By modeling effective communication skills, the facilitators demonstrated how key communication skills, such as empathic responses to anger and blame and using "wish" statements, could transform the conversation from one of conflict to one of problem solving with the family. We also describe how we set up practice dyads to give the learners an opportunity to try out new skills with each other. An evaluation of the workshop and similar workshops we conducted is presented. Copyright © 2013 U.S. Cancer Pain Relief Committee. Published by Elsevier Inc. All rights reserved.
An introduction to quantum chemical methods applied to drug design.
Stenta, Marco; Dal Peraro, Matteo
2011-06-01
The advent of molecular medicine allowed identifying the malfunctioning of subcellular processes as the source of many diseases. Since then, drugs are not only discovered, but actually designed to fulfill a precise task. Modern computational techniques, based on molecular modeling, play a relevant role both in target identification and drug lead development. By flanking and integrating standard experimental techniques, modeling has proven itself as a powerful tool across the drug design process. The success of computational methods depends on a balance between cost (computation time) and accuracy. Thus, the integration of innovative theories and more powerful hardware architectures allows molecular modeling to be used as a reliable tool for rationalizing the results of experiments and accelerating the development of new drug design strategies. We present an overview of the most common quantum chemistry computational approaches, providing for each one a general theoretical introduction to highlight limitations and strong points. We then discuss recent developments in software and hardware resources, which have allowed state-of-the-art of computational quantum chemistry to be applied to drug development.
Decompositions of injection patterns for nodal flow allocation in renewable electricity networks
Schäfer, Mirko; Tranberg, Bo; Hempel, Sabrina; Schramm, Stefan; Greiner, Martin
2017-08-01
The large-scale integration of fluctuating renewable power generation represents a challenge to the technical and economical design of a sustainable future electricity system. In this context, the increasing significance of long-range power transmission calls for innovative methods to understand the emerging complex flow patterns and to integrate price signals about the respective infrastructure needs into the energy market design. We introduce a decomposition method of injection patterns. Contrary to standard flow tracing approaches, it provides nodal allocations of link flows and costs in electricity networks by decomposing the network injection pattern into market-inspired elementary import/export building blocks. We apply the new approach to a simplified data-driven model of a European electricity grid with a high share of renewable wind and solar power generation.
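One simple, market-inspired choice of elementary import/export building blocks can be sketched as follows. The proportional-sharing rule below, where each exporter serves each importer in proportion to the importer's share of total imports, is an illustration, not necessarily the paper's exact decomposition; the injection pattern is invented:

```python
import numpy as np

# Zero-sum nodal injection pattern: positive = net export, negative = net import
p = np.array([3.0, 1.0, -2.5, -1.5])
assert abs(p.sum()) < 1e-12          # balanced pattern

exports = np.clip(p, 0, None)        # positive part of the injections
imports = np.clip(-p, 0, None)       # demand at importing nodes
total = exports.sum()

# T[i, j] = power allocated from exporting node i to importing node j,
# shared in proportion to each importer's fraction of total imports.
T = np.outer(exports, imports) / total
```

Each nonzero entry of `T` is one elementary source/sink building block; summing the flows each block induces on a line yields a nodal allocation of that line's usage and, with a cost model, of its infrastructure cost.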
Analytic methods in applied probability in memory of Fridrikh Karpelevich
Suhov, Yu M
2002-01-01
This volume is dedicated to F. I. Karpelevich, an outstanding Russian mathematician who made important contributions to applied probability theory. The book contains original papers focusing on several areas of applied probability and its uses in modern industrial processes, telecommunications, computing, mathematical economics, and finance. It opens with a review of Karpelevich's contributions to applied probability theory and includes a bibliography of his works. Other articles discuss queueing network theory, in particular, in heavy traffic approximation (fluid models). The book is suitable
Nodal Structure of the Electronic Wigner Function
Schmider, Hartmut; Dahl, Jens Peder
1996-01-01
On the example of several atomic and small molecular systems, the regular behavior of nodal patterns in the electronic one-particle reduced Wigner function is demonstrated. An expression found earlier relates the nodal pattern solely to the dot-product of the position and the momentum vector, if both arguments are large. An argument analogous to the ``bond-oscillatory principle'' for momentum densities links the nuclear framework in a molecule to an additional oscillatory term in momenta parallel to bonds. It is shown that these are visible in the Wigner function in terms of characteristic...
Neural network method applied to particle image velocimetry
Grant, Ian; Pan, X.
1993-12-01
realised. An important class of neural network is the multi-layer perceptron. The neurons are distributed on surfaces and linked by weighted interconnections. In the present paper we demonstrate how this type of net can be developed into a competitive, adaptive filter which will identify PIV image pairs in a number of commonly occurring flow types. Previous work by the authors in particle tracking analysis (1, 2) has shown the efficiency of statistical windowing techniques in flows without systematic (in time or space) variations. The effectiveness of the present neural net is illustrated by applying it to digital simulations of turbulent and rotating flows. Work reported by Cenedese et al (3) has taken a different approach in examining the potential for neural net methods applied to PIV.
Nodal Variational Principle for Excited States
Zahariev, Federico; Levy, Mel
2016-01-01
It is proven that the exact excited-state wavefunction and energy may be obtained by minimizing the energy expectation value of a trial wavefunction that is constrained only to have the correct nodes of the state of interest. This excited-state nodal minimum principle has the advantage that it requires neither minimization with the constraint of wavefunction orthogonality to all lower eigenstates nor the antisymmetry of the trial wavefunctions. It is also found that the minimization over the entire space can be partitioned into several interconnected minimizations within the individual nodal regions, and the exact excited-state energy may be obtained by a minimization in just one or several of these nodal regions. For the proofs of the theorem, it is observed that the many-electron eigenfunction, restricted to a nodal region, is equivalent to a ground state wavefunction of one electron in a higher dimensional space; and an explicit excited-state energy variational expression is obtained by generalizing...
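In the language of this abstract, the principle can be written compactly; the notation below (nodal surface N_k, nodal region Ω) is a sketch of my own choosing, not the paper's:

```latex
% Excited-state nodal minimum principle (notation illustrative):
% N_k is the exact nodal surface of the k-th eigenstate;
% \Omega is a single nodal region, with \psi_\Omega = 0 on its boundary.
E_k \;=\; \min_{\psi:\ \psi|_{N_k}=0}\;
      \frac{\langle \psi \,|\, \hat{H} \,|\, \psi \rangle}
           {\langle \psi \,|\, \psi \rangle}
\;=\; \min_{\psi_\Omega}\;
      \frac{\langle \psi_\Omega \,|\, \hat{H} \,|\, \psi_\Omega \rangle_{\Omega}}
           {\langle \psi_\Omega \,|\, \psi_\Omega \rangle_{\Omega}}\,.
```

The second minimization runs over a single nodal region with Dirichlet boundary conditions, reflecting the abstract's claim that one region suffices.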
Nodal yield in selective neck dissection
Norling, Rikke; Therkildsen, Marianne H; Bradley, Patrick J
2013-01-01
The total lymph node yield in neck dissection is highly variable and depends on anatomical, surgical and pathological parameters. A minimum yield of six lymph nodes for a selective neck dissection (SND) as recommended in guidelines lies in the lower range of the reported clinical nodal yields...
Approximate Schur complement preconditioning of the lowest order nodal discretizations
Moulton, J.D.; Ascher, U.M. [Univ. of British Columbia, Vancouver, British Columbia (Canada); Morel, J.E. [Los Alamos National Lab., NM (United States)
1996-12-31
Particular classes of nodal methods and mixed hybrid finite element methods lead to equivalent, robust and accurate discretizations of 2nd order elliptic PDEs. However, widespread popularity of these discretizations has been hindered by the awkward linear systems which result. The present work exploits this awkwardness, which provides a natural partitioning of the linear system, by defining two optimal preconditioners based on approximate Schur complements. Central to the optimal performance of these preconditioners is their sparsity structure which is compatible with Dendy's black box multigrid code.
CT simulation in nodal positive breast cancer
Horst, E.; Schuck, A.; Moustakis, C.; Schaefer, U.; Micke, O.; Kronholz, H.L.; Willich, N. [Muenster Univ. (Germany). Dept. of Radiation Oncology
2001-10-01
Background: A variety of solutions are used to match tangential fields and opposed lymph node fields in irradiation of nodal positive breast cancer. The choice depends on the technical equipment available and on the clinical situation. The CT simulation of a non-monoisocentric technique was evaluated in terms of accuracy and reproducibility. Patients, Material and Methods: The field match parameters were adjusted virtually at CT simulation and were compared with parameters derived mathematically. The coordinate transfer from the CT simulator to the conventional simulator was analyzed in 25 consecutive patients. Results: The angles adjusted virtually for a geometrically exact coplanar field match corresponded with the angles calculated for each set-up. The mean isocenter displacement was 5.7 mm and the total uncertainty of the coordinate transfer was 6.7 mm (1 SD). Limitations in the patient set-up became obvious because of the steep arm abduction necessary to fit the 70 cm CT gantry aperture. Required modifications of the arm position and coordinate transfer errors led to a significant shift of the marked matchline of >1.0 cm in eight of 25 patients (32%). Conclusion: The virtual CT simulation allows a precise and graphic definition of the field match parameters. However, modifications of the virtual set-up, basically due to technical limitations, were required in a total of 32% of cases, so that at present a hybrid technique is adopted that combines virtual adjustment of the ideal field alignment parameters with conventional simulation. (orig.)
Zhao, Qian; Wang, Peng; Goel, Lalit
2013-01-01
The deregulation of power systems allows customers to participate in power market operation. In deregulated power systems, nodal price and nodal reliability are adopted to represent locational operation cost and reliability performance. Since contingency reserve (CR) plays an important role in reliable operation, the CR commitment should be considered in operational reliability analysis. In this paper, a CR model based on customer reliability requirements has been formulated and integrated into power market settlement. A two-step market clearing process has been proposed to determine generation and CR allocation. Customers' nodal unit commitment risk and nodal energy interruption have been evaluated through contingency analysis. Customers' reliability cost, including reserve service cost and energy interruption cost, has also been evaluated.
IMPROVEMENT OF QUALITY IN PRODUCTION PROCESS BY APPLYING KAIKAKU METHOD
Milan Radenkovic
2013-12-01
In this paper, the Kaikaku method is presented: its essence, principles and ways of implementation in real systems. The main point is how the Kaikaku method influences quality. A practical example from the furniture industry illustrates one way to implement the Kaikaku method and shows its influence on the quality improvement of the production process.
Applied RCM2 Algorithms Based on Statistical Methods
Fausto Pedro García Márquez; Diego J. Pedregal
2007-01-01
The main purpose of this paper is to implement a system capable of detecting faults in railway point mechanisms. This is achieved by developing an algorithm that takes advantage of three empirical criteria simultaneously capable of detecting faults from records of measurements of force against time. The system is dynamic in several respects: the base reference data is computed using all the curves free from faults as they are encountered in the experimental data; the algorithm that uses the three criteria simultaneously may be applied in on-line situations as each new data point becomes available; and recursive algorithms are applied to filter noise from the raw data in an automatic way. Encouraging results are found in practice when the system is applied to a number of experiments carried out by an industrial sponsor.
Sako, Keisuke; Pradhan, Saurabh J; Barone, Vanessa; Inglés-Prieto, Álvaro; Müller, Patrick; Ruprecht, Verena; Čapek, Daniel; Galande, Sanjeev; Janovjak, Harald; Heisenberg, Carl-Philipp
2016-07-19
During metazoan development, the temporal pattern of morphogen signaling is critical for organizing cell fates in space and time. Yet, tools for temporally controlling morphogen signaling within the embryo are still scarce. Here, we developed a photoactivatable Nodal receptor to determine how the temporal pattern of Nodal signaling affects cell fate specification during zebrafish gastrulation. By using this receptor to manipulate the duration of Nodal signaling in vivo by light, we show that extended Nodal signaling within the organizer promotes prechordal plate specification and suppresses endoderm differentiation. Endoderm differentiation is suppressed by extended Nodal signaling inducing expression of the transcriptional repressor goosecoid (gsc) in prechordal plate progenitors, which in turn restrains Nodal signaling from upregulating the endoderm differentiation gene sox17 within these cells. Thus, optogenetic manipulation of Nodal signaling identifies a critical role of Nodal signaling duration for organizer cell fate specification during gastrulation.
Goldberg, J M
1979-01-01
Changes in Intra-SA nodal pacemaker localization were produced through stimulation of the decentralized cervical vagi and stellate ganglia in the anesthetized dog. Shifts in pacemaker to the rostral, middle, or caudal regions of the SA node produced a change in the timing as well as a change in the sequence of activation of recording sites overlying the AV node. Epicardial pacing with a plaque electrode from either the rostral, middle, or caudal regions of the SA node produced the same activation sequence of the AV nodal electrodes irrespective of the epicardial SA nodal pacing site. The inability of epicardial SA nodal pacing to precisely reproduce the activation pattern of the atrial septum overlying the AV node observed with a natural SA nodal pacemaker can be explained by the geographic relationship of the pacemaker cells within the node to the preferential internodal pathways and the area of atrial tissue stimulated by pacing. Pacing activates a large mass of tissue, whereas an intrinsic pacemaker probably acts as a more localized focus. The inability of pacing to reproduce the activation pattern seen with spontaneous rhythm may be a determinant in the varied P wave morphology seen with coronary sinus or AV nodal junctional rhythms, as compared with more consistent morphology seen with pacing.
CONTINUATION METHOD APPLIED IN KINEMATICS OF PARALLEL ROBOT
董滨; 张祥德
2001-01-01
The continuation method for solving the forward kinematics problem of a parallel robot is discussed. Through a coefficient-parameter continuation method, the efficiency and feasibility of the continuation method are improved. Using this method, all forward solutions of a new parallel robot model recently proposed by the Robot Open Laboratory of the Science Institute of China were obtained, thereby providing the basis for mechanism analysis and real-time control of the new model.
简孔斌; 王玲; 徐柏榆
2014-01-01
Faults on power distribution network lines cause outages and directly affect the continuity and reliability of the power supply. Timely and correct fault positioning plays an important role in removing faults, shortening outage time and improving power supply quality. When a fault occurs on a line, a voltage sag arises throughout the system, which may cause losses at sensitive loads; at the same time, this feature can be used to recognize the fault type and estimate the fault position. A new method is therefore proposed for fault positioning in the power distribution network based on the nodal impedance matrix and voltage sag data. Combining network parameters and off-line simulation analysis, a node voltage database for different fault types and fault conditions is established. When a fault is detected, the collected voltage sag data are searched and matched against the node voltage database to confirm the fault interval and fault point. The method uses the search result directly to confirm the fault position, without additional algorithms or transformations, and is therefore simple in principle and fast to compute. The accuracy and reliability of the method were verified by means of power system computer-aided design simulation software and a practical circuit in the laboratory.
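The search-and-match step described above can be sketched numerically. Everything below (the 4-bus feeder, the admittance values, the cosine-similarity match) is an illustrative assumption, not the authors' implementation: the sag pattern of a fault at bus f is proportional to column f of the nodal impedance matrix Z, so a measured sag vector can be matched against precomputed column signatures.

```python
import numpy as np

# Toy 4-bus radial feeder: series admittances of branches 0-1, 1-2, 2-3
# (values are illustrative, not from the paper).
y = np.array([10.0, 8.0, 5.0])
n = 4
Y = np.zeros((n, n))
for i, yb in enumerate(y):
    Y[i, i] += yb; Y[i + 1, i + 1] += yb
    Y[i, i + 1] -= yb; Y[i + 1, i] -= yb
Y += np.eye(n) * 1.0                      # shunt admittance so Y is invertible
Z = np.linalg.inv(Y)                      # nodal impedance matrix

# Off-line database: normalized sag signature for a fault at each bus.
signatures = Z / np.linalg.norm(Z, axis=0)

def locate_fault(measured_sag):
    """Match a measured voltage-sag vector against the signature database."""
    s = measured_sag / np.linalg.norm(measured_sag)
    return int(np.argmax(signatures.T @ s))   # cosine similarity

# A fault at bus 2 depresses voltages in proportion to column 2 of Z.
sag = Z[:, 2] * 3.7 + np.random.default_rng(0).normal(0, 1e-3, n)
print(locate_fault(sag))  # → 2
```

The unknown fault-current magnitude drops out because both the signature and the measurement are normalized, which is why a simple nearest-signature search suffices.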
A series method applied to engineering calculations in structural dynamics
Reyes Márquez, Auxiliadora; Reyes Perales, José Antonio; Cortés Molina, Mónica; García Alonso, Fernando Luis
2014-01-01
This paper shows an application of the Φ-functions series method to calculate the response of structures to an earthquake, modelled as a 2DOF system. The Φ-functions series method is an adaptation of the ideas of Scheifele for integrating forced and damped oscillators. This algorithm presents the advantage of integrating the perturbed problem precisely with only two Φ-functions. The method's coefficients are calculated by simple algebraic recurrences in which the perturbation function is involved. Re...
Nodal Diffusion Burnable Poison Treatment for Prismatic Reactor Cores
A. M. Ougouag; R. M. Ferrer
2010-10-01
The prismatic block version of the High Temperature Reactor (HTR), considered as a candidate Very High Temperature Reactor (VHTR) design, may use burnable poison pins at some corners of the fuel blocks (i.e., assembly-equivalent structures). The presence of highly absorbing materials such as these burnable poisons within the fuel blocks of hexagonal-geometry, graphite-moderated HTRs causes a local inter-block flux depression that most nodal diffusion-based methods fail to properly model or otherwise represent. The location of these burnable poisons near vertices results in an asymmetry in the morphology of the assemblies (or blocks); hence the inadequacy of traditional homogenization methods, as these "spread" the actually local effect of the burnable poisons throughout the assembly. Furthermore, the actual effect of the burnable poison is primarily local, with influence in its immediate vicinity, which happens to include a small region within the same assembly as well as similar regions in the adjacent assemblies. Traditional homogenization methods miss this artifact entirely. This paper presents a novel method for treating the local effect of the burnable poison explicitly in the context of a modern nodal method.
The Nodal Location of Metastases in Melanoma Sentinel Lymph Nodes
Riber-Hansen, Rikke; Nyengaard, Jens; Hamilton-Dutoit, Stephen
2009-01-01
BACKGROUND: The design of melanoma sentinel lymph node (SLN) histologic protocols is based on the premise that most metastases are found in the central parts of the nodes, but the evidence for this belief has never been thoroughly tested. METHODS: The nodal location of melanoma metastases in 149... were 77%, 79%, and 78%, respectively. No difference in either the mean volume or the maximum diameter of the metastases located exclusively outside the central and the peripheral protocols was found (volume: 0.036 vs. 0.031 mm and diameter: 0.320 vs. 0.332 mm). CONCLUSIONS: In SLNs, melanoma metastases...
Evaluation of Controller Tuning Methods Applied to Distillation Column Control
Nielsen, Kim; W. Andersen, Henrik; Kümmel, Professor Mogens
1998-01-01
of this is to examine whether ZN and BLT design yield satisfactory control of distillation columns. Further, PI controllers are tuned according to a proposed multivariable frequency domain method. A major conclusion is that the ZN tuned controllers yield undesired overshoot and oscillation and poor stability robustness properties. BLT tuning removes the overshoot and oscillation, however at the expense of a more sluggish response. We conclude that if a simple control design is to be used, the BLT method should be preferred over the ZN method. The frequency domain design approach presented yields a more proper trade-off between oscillation, response time and stability robustness. However, this method is more complicated to use than the ZN and BLT methods. Moreover, it is shown that properly tuned diagonal PI controllers can provide performance and robustness properties which equal well tuned PI controllers...
Kinetic Monte Carlo method applied to nucleic acid hairpin folding.
Sauerwine, Ben; Widom, Michael
2011-12-01
Kinetic Monte Carlo on coarse-grained systems, such as nucleic acid secondary structure, is advantageous for being able to access behavior at long time scales, even minutes or hours. Transition rates between coarse-grained states depend upon intermediate barriers, which are not directly simulated. We propose an Arrhenius rate model and an intermediate energy model that incorporate the effects of the barrier between simulated states without enlarging the state space itself. Applying our Arrhenius rate model to DNA hairpin folding, we demonstrate improved agreement with experiment compared to the usual kinetic Monte Carlo model. Further improvement results from including rigidity of single-stranded stacking.
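The core idea, Arrhenius rates over an unsimulated intermediate barrier driving a kinetic Monte Carlo clock, can be sketched minimally. The sketch below assumes a single effective barrier and a two-state (open/closed) hairpin, not the authors' full coarse-grained secondary-structure model; prefactor and barrier height are invented for illustration.

```python
import math, random

def arrhenius(prefactor, barrier, kT):
    """Arrhenius rate over an intermediate barrier (barrier and kT in the same units)."""
    return prefactor * math.exp(-barrier / kT)

def mean_first_passage(k_fold, n_traj=20000, rng=random.Random(1)):
    """Gillespie/KMC estimate of the mean open -> closed waiting time:
    each trajectory draws an exponential waiting time with rate k_fold."""
    total = 0.0
    for _ in range(n_traj):
        total += -math.log(rng.random()) / k_fold
    return total / n_traj

kT = 0.6                       # roughly room temperature in kcal/mol
k = arrhenius(1e6, 5.0, kT)    # illustrative prefactor and barrier height
t_sim = mean_first_passage(k)
print(abs(t_sim - 1 / k) / (1 / k) < 0.05)   # KMC mean matches the analytic 1/k
```

For a two-state system this reduces to sampling a single exponential; the point is that the barrier enters only through the rate, exactly as in the abstract's rate model, without adding the barrier state to the simulated state space.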
Statistical methods for damage detection applied to civil structures
Gres, Szymon; Ulriksen, Martin Dalgaard; Döhler, Michael
2017-01-01
of the two damage detection methods is similar, hereby implying merit of the new Mahalanobis distance-based approach, as it is less computationally complex. The fusion of the damage indicators in the control chart provides the most accurate view on the progressively damaged systems... and compared to the well-known subspace-based damage detection algorithm in the context of two large case studies. Both methods are implemented in the modal analysis and structural health monitoring software ARTeMIS, in which the joint features of the methods are concluded in a control chart in an attempt...
Applying a life cycle approach to project management methods
Biggins, David; Trollsund, F.; Høiby, A.L.
2016-01-01
Project management is increasingly important to organisations because projects are the method by which organisations respond to their environment. A key element within project management is the standards and methods that are used to control and conduct projects, collectively known as project management methods (PMMs) and exemplified by PRINCE2, the Project Management Institute's and the Association for Project Management's Bodies of Knowledge (PMBOK and APMBOK). The purpose of t...
Methods applied in studies of benthic marine debris.
Spengler, Angela; Costa, Monica F
2008-02-01
The ocean floor is one of the main accumulation sites of marine debris. The study of this kind of debris still lags behind that of shorelines. It is necessary to identify the methods used to evaluate this debris and how the results are presented and interpreted. From the available literature on benthic marine debris (26 studies), six sampling methods were registered: bottom trawl net, sonar, submersible, snorkeling, scuba diving and manta tow. The most frequently used method was the bottom trawl net, followed by the three diving methods. The majority of the debris was classified according to its former use and the results were usually expressed as items per unit of area. To facilitate comparisons of contamination levels among sites and regions, some standardization requirements are suggested.
Spectral methods applied to fluidized bed combustors. Final report
Brown, R.C.; Christofides, N.J.; Junk, K.W.; Raines, T.S.; Thiede, T.D.
1996-08-01
The objective of this project was to develop methods for characterizing fuels and sorbents from time-series data obtained during transient operation of fluidized bed boilers. These methods aimed at determining time constants for devolatilization and char burnout using carbon dioxide (CO{sub 2}) profiles and from time constants for the calcination and sulfation processes using CO{sub 2} and sulfur dioxide (SO{sub 2}) profiles.
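The kind of time-constant extraction described, fitting a first-order decay to a flue-gas concentration profile, can be sketched with synthetic data. The amplitude and time constant below are invented for illustration, not the report's measurements.

```python
import numpy as np

# Synthetic char-burnout CO2 profile: first-order decay, time constant tau = 30 s.
t = np.linspace(0.0, 120.0, 200)
tau_true = 30.0
co2 = 4.5 * np.exp(-t / tau_true)        # percent CO2 above baseline

# Log-linear least squares: ln(CO2) = ln(A) - t/tau, so tau = -1/slope.
slope, intercept = np.polyfit(t, np.log(co2), 1)
tau_fit = -1.0 / slope
print(round(tau_fit, 1))  # → 30.0
```

With real transient boiler data the profile would carry noise and a baseline drift, so a robust fit (or nonlinear least squares on the raw exponential) would replace the bare log-linear regression, but the extracted quantity is the same time constant.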
Experimental Methods Applied in a Study of Stall Flutter in an Axial Flow Fan
John D. Gill
2004-01-01
Flutter testing is an integral part of aircraft gas turbine engine development. In typical flutter testing blade mounted sensors in the form of strain gages and casing mounted sensors in the form of light probes (NSMS) are used. Casing mounted sensors have the advantage of being non-intrusive and can detect the vibratory response of each rotating blade. Other types of casing mounted sensors can also be used to detect flutter of rotating blades. In this investigation casing mounted high frequency response pressure transducers are used to characterize the part-speed stall flutter response of a single stage unshrouded axial-flow fan. These dynamic pressure transducers are evenly spaced around the circumference at a constant axial location upstream of the fan blade leading edge plane. The pre-recorded experimental data at 70% corrected speed is analyzed for the case where the fan is back-pressured into the stall flutter zone. The experimental data is analyzed using two probe and multi-probe techniques. The analysis techniques for each method are presented. Results from these two analysis methods indicate that flutter occurred at a frequency of 411 Hz with a dominant nodal diameter of 2. The multi-probe analysis technique is a valuable method that can be used to investigate the initiation of flutter in turbomachines.
Lunar nodal tide in the Baltic Sea
Andrzej Wróblewski
2001-03-01
The nodal tide in the Baltic Sea was studied on the basis of the Stockholm tide-gauge readings for 1825-1984; data from the tide gauge at Swinoujscie for the same period provided comparative material. The Stockholm readings are highly accurate and are considered representative of sea levels in the whole Baltic; hence, the final computations were performed for the readings from this particular tide gauge for the period 1888-1980. The tidal amplitude obtained from measurements uncorrected for atmospheric pressure or wind field was compared with that forced only by atmospheric effects. The amplitude of the recorded nodal tide was the same as the equilibrium tide amplitude calculated for Stockholm. Calculations for equilibrium tide amplitudes were also performed for the extreme latitudes of the Baltic basin.
Prediction of useful casting structure applying Cellular Automaton method
Z. Ignaszak
2009-07-01
The results of simulation investigations of the primary casting structure of a hypoeutectic Al-Si alloy, using the Calcosoft system with the CAFE 3D (Cellular Automaton Finite Element) module, are presented. The CAFE 3D module allows the structure formation of complete castings to be predicted, indicating the spatial distribution of columnar and equiaxed grains. This simplified model concerns only the hypoeutectic phase. The structure investigations concern a useful casting of a camshaft which solidified in a high-insulation mould with properly distributed chills. These conditions made it possible to apply locally different simplified grain-block geometries, which the authors call pseudo-crystals. The mechanical properties in selected cross-sections of the casting are estimated.
Saghafi, Mahdi [Department of Energy Engineering, Sharif University of Technology, Azadi Avenue, Tehran (Iran, Islamic Republic of); Ghofrani, Mohammad Bagher, E-mail: ghofrani@sharif.edu [Department of Energy Engineering, Sharif University of Technology, Azadi Avenue, Tehran (Iran, Islamic Republic of); D’Auria, Francesco [San Piero a Grado Nuclear Research Group (GRNSPG), University of Pisa, Via Livornese 1291, San Piero a Grado, Pisa (Italy)
2016-07-15
Highlights: • A thermal-hydraulic nodalization for the PSB-VVER test facility has been developed. • The station blackout accident is modeled with the developed nodalization in the MELCOR code. • The developed nodalization is qualified at both steady-state and transient levels. • MELCOR predictions are qualitatively and quantitatively in the acceptable range. • The Fast Fourier Transform Based Method is used to quantify the accuracy of code predictions. - Abstract: This paper deals with the development of a qualified thermal-hydraulic nodalization for modeling the Station Black-Out (SBO) accident in the PSB-VVER Integral Test Facility (ITF). This study has been performed in the framework of a research project aiming to develop an appropriate accident management support tool for the Bushehr nuclear power plant. In this regard, a nodalization has been developed for thermal-hydraulic modeling of the PSB-VVER ITF with the MELCOR integrated code. The nodalization is qualified qualitatively and quantitatively at both steady-state and transient levels. The accuracy of the MELCOR predictions is quantified at the transient level using the Fast Fourier Transform Based Method (FFTBM), which provides an integral representation for quantification of the code accuracy in the frequency domain. It was observed that the MELCOR predictions are qualitatively and quantitatively in the acceptable range. In addition, the influence of different nodalizations on MELCOR predictions was evaluated and quantified using FFTBM by developing 8 sensitivity cases with different numbers of control volumes and heat structures in the core region and steam generator U-tubes. The case which provided results with minimum deviations from the experimental data was then considered as the qualified nodalization for analysis of the SBO accident in the PSB-VVER ITF. This qualified nodalization can be used for modeling VVER-1000 nuclear power plants when performing SBO accident analysis with the MELCOR code.
Robustness of Modal Parameter Estimation Methods Applied to Lightweight Structures
Dickow, Kristoffer Ahrens; Kirkegaard, Poul Henning; Andersen, Lars Vabbersgaard
2013-01-01
of nominally identical test subjects. However, the literature on modal testing of timber structures is rather limited and the applicability and robustness of different curve fitting methods for modal analysis of such structures is not described in detail. The aim of this paper is to investigate the robustness of two parameter estimation methods built into the commercial modal testing software B&K Pulse Reflex Advanced Modal Analysis. The investigations are done by means of frequency response functions generated from a finite-element model and subjected to artificial noise before being analyzed with Pulse Reflex... and the Polyreference Time method are fairly robust and well suited for the structure being analyzed...
Robustness of Modal Parameter Estimation Methods Applied to Lightweight Structures
Dickow, Kristoffer Ahrens; Kirkegaard, Poul Henning; Andersen, Lars Vabbersgaard
2013-01-01
of nominally identical test subjects. However, the literature on modal testing of timber structures is rather limited and the applicability and robustness of different curve fitting methods for modal analysis of such structures is not described in detail. The aim of this paper is to investigate the robustness... The ability to handle closely spaced modes and broad frequency ranges is investigated for a numerical model of a lightweight junction under different signal-to-noise ratios. The selection of both excitation points and response points is discussed. It is found that both the Rational Fraction Polynomial-Z method... of two parameter estimation methods built into the commercial modal testing software B&K Pulse Reflex Advanced Modal Analysis. The investigations are done by means of frequency response functions generated from a finite-element model and subjected to artificial noise before being analyzed with Pulse Reflex...
Nonstandard Finite Difference Method Applied to a Linear Pharmacokinetics Model
Oluwaseun Egbelowo
2017-05-01
We extend the nonstandard finite difference method of solution to the study of pharmacokinetic-pharmacodynamic models. Pharmacokinetic (PK) models are commonly used to predict drug concentrations that drive controlled intravenous (I.V.) transfers (or infusion) and oral transfers, while pharmacokinetic and pharmacodynamic (PD) interaction models are used to provide predictions of drug concentrations affecting the response to these clinical drugs. We structure a nonstandard finite difference (NSFD) scheme for the relevant system of equations which models this pharmacokinetic process. We compare the results obtained to standard methods. The scheme is dynamically consistent and reliable in replicating complex dynamic properties of the relevant continuous models for varying step sizes. This study provides assistance in understanding the long-term behavior of the drug in the system, and validation of the efficiency of the nonstandard finite difference scheme as the method of choice.
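The flavor of an NSFD scheme is easiest to see on the simplest PK building block, first-order elimination dC/dt = -kC. With the Mickens-style denominator function φ(h) = (1 - e^{-kh})/k the discrete update reproduces the exact decay at any step size; this is a minimal sketch with illustrative parameter values, not the paper's full PK-PD system.

```python
import math

def nsfd_decay(c0, k, h, n_steps):
    """NSFD scheme for dC/dt = -k*C using the denominator function
    phi(h) = (1 - exp(-k*h)) / k, so each step multiplies by exp(-k*h) exactly."""
    phi = (1.0 - math.exp(-k * h)) / k
    c = c0
    for _ in range(n_steps):
        c = c - k * phi * c          # (C_{n+1} - C_n) / phi = -k * C_n
    return c

k, c0, h = 0.3, 100.0, 2.0           # illustrative elimination rate, dose, step
c_nsfd = nsfd_decay(c0, k, h, 10)    # concentration after t = 20
c_exact = c0 * math.exp(-k * 20.0)
print(abs(c_nsfd - c_exact) < 1e-9)  # → True: the scheme is exact for this model
```

A forward Euler step with the same h = 2.0 would multiply by (1 - kh) = 0.4 instead of e^{-0.6} ≈ 0.549 each step, illustrating the "dynamic consistency for varying step sizes" the abstract claims for NSFD.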
Variance reduction methods applied to deep-penetration problems
Cramer, S.N.
1984-01-01
All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course.
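The need for variance reduction in deep penetration can be illustrated on a toy layered model: analog transmission through many layers is a rare event, while geometry splitting keeps the particle population alive at depth. The sketch below (layer count, survival probability, splitting-by-two) is an invented toy, not any of the course's codes.

```python
import random

def transmission_splitting(n_src=20000, layers=10, p_surv=0.5,
                           rng=random.Random(7)):
    """Estimate the deep-penetration transmission p_surv**layers through a stack
    of layers using geometry splitting: each survivor of a layer is split into
    two half-weight copies, keeping the population from dying out at depth.
    The estimator (total weight reaching the far side / sources) is unbiased."""
    total_weight = 0.0
    for _ in range(n_src):
        bank = [(0, 1.0)]                 # (layers crossed, statistical weight)
        while bank:
            depth, w = bank.pop()
            if depth == layers:
                total_weight += w         # tally weight that penetrates
                continue
            if rng.random() < p_surv:     # particle survives this layer
                bank.append((depth + 1, w / 2))
                bank.append((depth + 1, w / 2))
    return total_weight / n_src

est = transmission_splitting()
print(est)   # close to 0.5**10 ≈ 9.77e-4
```

An analog run with the same number of sources would score only on the ~0.1% of histories that cross all ten layers, whereas here roughly one weighted track per source reaches the back face, which is the basic variance-reduction trade: more bookkeeping per history for far lower variance on the deep tally.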
DAKOTA reliability methods applied to RAVEN/RELAP-7.
Swiler, Laura Painton; Mandelli, Diego; Rabiti, Cristian; Alfonsi, Andrea
2013-09-01
This report summarizes the result of a NEAMS project focused on the use of reliability methods within the RAVEN and RELAP-7 software framework for assessing failure probabilities as part of probabilistic risk assessment for nuclear power plants. RAVEN is a software tool under development at the Idaho National Laboratory that acts as the control logic driver and post-processing tool for the newly developed Thermal-Hydraulic code RELAP-7. Dakota is a software tool developed at Sandia National Laboratories containing optimization, sensitivity analysis, and uncertainty quantification algorithms. Reliability methods are algorithms which transform the uncertainty problem to an optimization problem to solve for the failure probability, given uncertainty on problem inputs and a failure threshold on an output response. The goal of this work is to demonstrate the use of reliability methods in Dakota with RAVEN/RELAP-7. These capabilities are demonstrated on a Station Blackout analysis of a simplified Pressurized Water Reactor (PWR).
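The Dakota algorithms themselves are not reproduced here; the following sketch only illustrates the core idea of a reliability method: the failure probability is recast as an optimization for the most probable failure point, which for a linear limit state with standard-normal inputs has a closed form:

```python
import math

def form_linear(a, c):
    """First-order reliability method (FORM) for a linear limit state
    g(u) = c - a.u with independent standard-normal inputs u.

    The failure probability P[g(u) < 0] comes from an optimization:
    the reliability index beta is the distance from the origin to the
    closest point on the failure surface, which for linear g is c/||a||."""
    beta = c / math.sqrt(sum(ai * ai for ai in a))
    # standard normal CDF evaluated at -beta, via the error function
    pf = 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))
    return beta, pf

beta, pf = form_linear(a=[3.0, 4.0], c=10.0)   # beta = 10/5 = 2.0
```

For nonlinear limit states the most probable point must be found by a numerical optimizer, which is exactly where tools like Dakota come in.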
Applying the Priority Distribution Method for Employee Motivation
Jonas Žaptorius
2013-09-01
Full Text Available In an age of increasing healthcare expenditure, the efficiency of healthcare services is a burning issue. This paper deals with the creation of a performance-related remuneration system, which would meet requirements for efficiency and sustainable quality. In real world scenarios, it is difficult to create an objective and transparent employee performance evaluation model dealing with both qualitative and quantitative criteria. To achieve these goals, the use of decision support methods is suggested and analysed. The systematic approach of practical application of the Priority Distribution Method to healthcare provider organisations is created and described.
Multiple nodal locoregional recurrence of pheochromocytoma
Ramírez-Plaza, César Pablo; Cárdenas, Elena Margarita Sanchiz; Humanes, Rocío Soler
2015-01-01
Introduction: Malignancy is present in 10% of pheochromocytomas (PCC) and is defined as local/vascular infiltration of surrounding tissues or the presence of chromaffin cell deposits in distant organs. The presence of isolated nodal recurrence is very rare, and only 7 cases have been reported in the medical literature. Presentation of the case: The case of a 32-year-old male with a symptomatic recurrence of a PCC operated on 2 years earlier is presented. Radiological and functional imaging studies confirmed the presence of multiple nodules in the surgical site. A radical left nephrectomy with extensive lymphatic clearance was performed in order to achieve an R0 resection. The pathologist confirmed the diagnosis of massive locoregional nodal invasion. Discussion: A detailed histological report and a thorough genetic study must be considered for every operated PCC in order to identify mutations and risk profiles for malignancy. When recurrent or metastatic disease is suspected, imaging and functional exams are performed in order to obtain proper staging. Radical surgery for the metastatic disease is the only treatment that may provide prolonged survival. If an R0 resection is not possible, then debulking surgery is a good option when the benefit/risk ratio is acceptable. Conclusion: Isolated lymph nodal recurrence is very rare in malignant PCC, with only 7 cases previously published. The role of surgery is essential for long-term survival because it provides clinical and functional control of the disease. PMID:26117450
Inversion method applied to the rotation curves of galaxies
Márquez-Caicedo, L. A.; Lora-Clavijo, F. D.; Sanabria-Gómez, J. D.
2017-07-01
We used simulated annealing, Monte Carlo and genetic algorithm methods for matching numerical data of both density and velocity profiles in some low surface brightness galaxies with the theoretical Boehmer-Harko, Navarro-Frenk-White and Pseudo Isothermal Profile models for galaxies with dark matter halos. We found that the Navarro-Frenk-White model does not fit at all, in contrast with the other two models, which fit very well. Inversion methods have been widely used in various branches of science including astrophysics (Charbonneau 1995, ApJS, 101, 309). In this work we have used three different parametric inversion methods (Monte Carlo, Genetic Algorithm and Simulated Annealing) in order to determine the best fit of the observed density and velocity profiles of a set of low surface brightness galaxies (De Block et al. 2001, ApJ, 122, 2396) with three models of galaxies containing dark matter. The parameters adjusted by the inversion methods were the central density and a characteristic distance in the Boehmer-Harko BH (Boehmer & Harko 2007, JCAP, 6, 25), Navarro-Frenk-White NFW (Navarro et al. 1997, ApJ, 490, 493) and Pseudo Isothermal Profile PI (Robles & Matos 2012, MNRAS, 422, 282) models. The results obtained showed that the BH and PI Profile dark matter galaxies fit very well both the density and the velocity profiles; in contrast, the NFW model did not give good adjustments to the profiles in any analyzed galaxy.
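As a hedged sketch of the parametric inversion described (the authors' exact objective function and cooling schedule are not given), a simulated-annealing fit of a pseudo-isothermal rotation curve over the two parameters (a velocity scale v0 and core radius rc) might look like:

```python
import math
import random

def v_pi(r, v0, rc):
    """Rotation speed for a pseudo-isothermal halo,
    v^2(r) = v0^2 * (1 - (rc/r) * arctan(r/rc))."""
    return v0 * math.sqrt(1.0 - (rc / r) * math.atan(r / rc))

def anneal_fit(radii, v_obs, start, steps=20000, seed=1):
    """Minimal simulated annealing over (v0, rc): Gaussian proposals,
    Metropolis acceptance, geometric cooling."""
    rng = random.Random(seed)
    cost = lambda p: sum((v_pi(r, p[0], p[1]) - v) ** 2
                         for r, v in zip(radii, v_obs))
    p, c = list(start), cost(start)
    temp = c                       # start temperature on the misfit scale
    for _ in range(steps):
        q = [abs(p[0] + rng.gauss(0.0, 2.0)),
             abs(p[1] + rng.gauss(0.0, 0.2))]
        cq = cost(q)
        # accept downhill moves always, uphill moves with Boltzmann prob.
        if cq < c or rng.random() < math.exp((c - cq) / temp):
            p, c = q, cq
        temp = max(temp * 0.999, 1e-12)
    return p, c

# Synthetic "observed" curve from known parameters v0=100, rc=2.
radii = [float(r) for r in range(1, 11)]
v_obs = [v_pi(r, 100.0, 2.0) for r in radii]
best, final_cost = anneal_fit(radii, v_obs, start=(50.0, 5.0))
```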
System Identification and POD Method Applied to Unsteady Aerodynamics
Tang, Deman; Kholodar, Denis; Juang, Jer-Nan; Dowell, Earl H.
2001-01-01
The representation of unsteady aerodynamic flow fields in terms of global aerodynamic modes has proven to be a useful method for reducing the size of the aerodynamic model over those representations that use local variables at discrete grid points in the flow field. Eigenmodes and Proper Orthogonal Decomposition (POD) modes have been used for this purpose with good effect. This suggests that system identification models may also be used to represent the aerodynamic flow field. Implicit in the use of a systems identification technique is the notion that a relative small state space model can be useful in describing a dynamical system. The POD model is first used to show that indeed a reduced order model can be obtained from a much larger numerical aerodynamical model (the vortex lattice method is used for illustrative purposes) and the results from the POD and the system identification methods are then compared. For the example considered, the two methods are shown to give comparable results in terms of accuracy and reduced model size. The advantages and limitations of each approach are briefly discussed. Both appear promising and complementary in their characteristics.
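A minimal illustration of POD by the method of snapshots (pure Python, not the paper's vortex-lattice setup): the temporal correlation matrix is built from snapshot inner products, and its dominant eigenvalue, found by power iteration, gives the "energy" captured by the leading mode:

```python
def pod_leading_energy(snapshots, iters=200):
    """POD by the method of snapshots: build the temporal correlation
    matrix C[i][j] = <x_i, x_j>, find its dominant eigenvalue by power
    iteration, and report the fraction of total energy (trace of C)
    captured by the leading POD mode."""
    m = len(snapshots)
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    C = [[dot(snapshots[i], snapshots[j]) for j in range(m)] for i in range(m)]
    v = [1.0] * m
    for _ in range(iters):                      # power iteration
        w = [sum(C[i][j] * v[j] for j in range(m)) for i in range(m)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient gives the dominant eigenvalue for the unit vector v
    lam = sum(v[i] * sum(C[i][j] * v[j] for j in range(m)) for i in range(m))
    trace = sum(C[i][i] for i in range(m))
    return lam / trace

# Three snapshots dominated by one spatial pattern plus a weak secondary
# pattern: the first mode should capture almost all of the energy.
u1 = [1.0, 2.0, 3.0, 4.0]
u2 = [1.0, -1.0, 1.0, -1.0]
snaps = [[a * x + 0.1 * b * y for x, y in zip(u1, u2)]
         for a, b in [(1.0, 1.0), (0.8, -0.5), (1.2, 0.3)]]
energy_fraction = pod_leading_energy(snaps)
```

This is the reduced-order-model premise in miniature: if a few modes carry nearly all the energy, the flow field can be represented in a small state space.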
Ideal and computer mathematics applied to meshfree methods
Kansa, E.
2016-10-01
Early numerical methods to solve ordinary and partial differential equations relied upon human computers who used mechanical devices. The algorithms changed little over the evolution of electronic computers, having only low-order convergence rates. A meshfree scheme that converges exponentially was developed using the latest computational science toolkit.
The method of characteristics applied to analyse 2DH models
Sloff, C.J.
1992-01-01
To gain insight into the physical behaviour of 2D hydraulic models (mathematically formulated as a system of partial differential equations), the method of characteristics is used to analyse the propagation of physically meaningful disturbances. These disturbances propagate as wave fronts along bicharacteristics
Tensor product decomposition methods applied to complex flow data
von Larcher, Thomas; Klein, Rupert; Schneider, Reinhold; Wolf, Sebastian; Huber, Benjamin
2017-04-01
Low-rank multilevel approximation methods are an important tool in numerical analysis and in scientific computing. Those methods are often suited to attack high-dimensional problems successfully and allow very compact representations of large data sets. Specifically, hierarchical tensor product decomposition methods emerge as a promising approach for application to data that are concerned with cascade-of-scales problems as, e.g., in turbulent fluid dynamics. We focus on two particular objectives: representing turbulent data in an appropriate compact form and, secondly and as a long-term goal, finding self-similar vortex structures in multiscale problems. The question here is whether tensor product methods can support the development of improved understanding of the multiscale behavior and whether they are an improved starting point in the development of compact storage schemes for solutions of such problems relative to linear ansatz spaces. We present the reconstruction capabilities of a tensor decomposition based modeling approach tested against 3D turbulent channel flow data.
Photogrammetric methods applied to Svalbard glaciers: accuracies and challenges
Trond Eiken
2012-06-01
Full Text Available Use of digital images is expanding as a tool for glacier monitoring, and small-format time-lapse cameras are increasingly being used to monitor fast-flowing glaciers. Stereoscopic imagery is preferable since it yields direct displacement results, but stereo photogrammetry has more requirements regarding geometry in set-up and control points, as well as the additional cost of another complete camera system. We investigate a combination of methods to achieve satisfactory control of accuracy, with resulting significant day-to-day velocity variations of 1.5–4 m day⁻¹ measured at a distance of 2 km. Validation of results was made by comparing different methods, partly using the same image material, but also in combination with aerial and satellite images. Monoscopic results can also be used to gain continuity in a stereo data set when geometry or visibility is poor. We also explore the use of ordinary photographs taken from airliners for compilation of orthoimages as a potential low cost method for detection of sudden changes. The method, showing some tens of metres accuracy, was verified for monitoring velocities and front positions during a glacier surge and was also used to validate monoscopic time-lapse images.
Some methods of computational geometry applied to computer graphics
Overmars, M.H.; Edelsbrunner, H.; Seidel, R.
1984-01-01
Abstract Windowing a two-dimensional picture means to determine those line segments of the picture that are visible through an axis-parallel window. A study of some algorithmic problems involved in windowing a picture is offered. Some methods from computational geometry are exploited to store the
A second order Rosenbrock method applied to photochemical dispersion problems
Verwer, J.G.; Spee, E.J.; Blom, J.G.; Hundsdorfer, W.
1997-01-01
A 2nd-order, L-stable Rosenbrock method from the field of stiff ordinary differential equations is studied for application to atmospheric dispersion problems describing photochemistry, advective and turbulent diffusive transport. Partial differential equation problems of this type occur in the field
E-LEARNING METHOD APPLIED TO TECHNICAL GRAPHICS SUBJECTS
GOANTA Adrian Mihai
2011-11-01
Full Text Available The paper presents some of the author's endeavors in creating video courses related to technical graphics subjects for the students of the Faculty of Engineering in Braila. It also mentions the steps taken in completing the method and how to obtain feedback on the rate of access to these types of courses by the students.
Zhang Xiaolong; Li Xiaojun; Chen Guoxing; Zhou Zhenghua
2016-01-01
The viscous-elastic artificial boundary is widely used in the analysis of site seismic response and dynamic soil-structure interaction problems. Seismic waves are input through the viscous-elastic artificial boundary by converting them into equivalent loads acting at the boundary. The conventional method for computing the equivalent nodal loads assumes that the stress in the region associated with each boundary node is uniformly distributed; in reality, however, the stress distribution within that region is usually non-uniform. Based on the explicit time-domain wave method that combines the finite element method with viscous-elastic local artificial boundaries, this paper establishes an improved method for computing the equivalent seismic loads for the wave-scattering problem in an infinite domain. The method computes the equivalent nodal forces on the artificial boundary by combining mesh refinement with stress integration, which effectively reduces the computational error of the equivalent nodal forces. Two-dimensional examples with seismic waves incident at different angles are presented; the wave-field displacement contours and nodal displacement time histories verify the effectiveness of the proposed method. Its accuracy is closely related to the mesh size and the angle of incidence: the smaller the mesh and the smaller the incidence angle, the higher the accuracy. For the same mesh size, the accuracy of the proposed method is clearly higher than that of the conventional method, especially for oblique incidence.
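The following 1-D sketch (hypothetical numbers, not the paper's formulation) contrasts the conventional uniform-stress assumption with stress integration over refined sub-intervals when computing an equivalent nodal force for a linearly varying boundary traction:

```python
def equivalent_nodal_force(sigma, length, n_sub):
    """Equivalent force at the left node of a boundary segment, for a
    traction distribution sigma(x) and a linear shape function
    N(x) = 1 - x/length.  The integral of sigma*N is evaluated by
    composite trapezoid over n_sub refined sub-intervals."""
    h = length / n_sub
    f = lambda x: sigma(x) * (1.0 - x / length)
    total = 0.5 * (f(0.0) + f(length))
    for i in range(1, n_sub):
        total += f(i * h)
    return total * h

# Linearly varying traction sigma(x) = 1 + 2x on a unit segment.
sigma = lambda x: 1.0 + 2.0 * x
exact = 0.5 + 2.0 / 6.0          # analytic: a*L/2 + b*L^2/6
refined = equivalent_nodal_force(sigma, 1.0, 1000)
# Conventional uniform-stress assumption: midpoint traction times L/2.
uniform = sigma(0.5) * 0.5
```

The refined integration recovers the exact value, while the uniform-stress shortcut overestimates the nodal force for this traction, mirroring the error the improved method is designed to reduce.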
Theoretical and applied aerodynamics and related numerical methods
Chattot, J J
2015-01-01
This book covers classical and modern aerodynamics, theories and related numerical methods, for senior and first-year graduate engineering students, including: -The classical potential (incompressible) flow theories for low speed aerodynamics of thin airfoils and high and low aspect ratio wings. - The linearized theories for compressible subsonic and supersonic aerodynamics. - The nonlinear transonic small disturbance potential flow theory, including supercritical wing sections, the extended transonic area rule with lift effect, transonic lifting line and swept or oblique wings to minimize wave drag. Unsteady flow is also briefly discussed. Numerical simulations based on relaxation mixed-finite difference methods are presented and explained. - Boundary layer theory for all Mach number regimes and viscous/inviscid interaction procedures used in practical aerodynamics calculations. There are also four chapters covering special topics, including wind turbines and propellers, airplane design, flow analogies and h...
Generic Methods for Formalising Sequent Calculi Applied to Provability Logic
Dawson, Jeremy E.; Goré, Rajeev
We describe generic methods for reasoning about multiset-based sequent calculi which allow us to combine shallow and deep embeddings as desired. Our methods are modular, permit explicit structural rules, and are widely applicable to many sequent systems, even to other styles of calculi like natural deduction and term rewriting systems. We describe new axiomatic type classes which enable simplification of multiset or sequent expressions using existing algebraic manipulation facilities. We demonstrate the benefits of our combined approach by formalising in Isabelle/HOL a variant of a recent, non-trivial, pen-and-paper proof of cut-admissibility for the provability logic GL, where we abstract a large part of the proof in a way which is immediately applicable to other calculi. Our work also provides a machine-checked proof to settle the controversy surrounding the proof of cut-admissibility for GL.
Current Human Reliability Analysis Methods Applied to Computerized Procedures
Ronald L. Boring
2012-06-01
Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room (Fink et al., 2009). Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of enhanced ease of use and easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.
EMD Method Applied to Identification of Logging Sequence Strata
Zhao Ni
2015-10-01
Full Text Available In this work, we compare the Fourier transform, wavelet transform, and empirical mode decomposition (EMD), and point out that the EMD method decomposes a complex signal into a series of component functions through curves of local mean value. Each of the Intrinsic Mode Functions (IMFs, the component functions) contains all the information on the original signal. Therefore, it is more suitable for the interface identification of logging sequence strata.
SPH method applied to high speed cutting modelling
LIMIDO, Jérôme; Espinosa, Christine; Salaün, Michel; Lacome, Jean-Luc
2007-01-01
The purpose of this study is to introduce a new approach to high speed cutting numerical modelling. A Lagrangian smoothed particle hydrodynamics (SPH)-based model is carried out using the Ls-Dyna software. SPH is a meshless method, thus large material distortions that occur in the cutting problem are easily managed and SPH contact control permits a "natural" workpiece/chip separation. The developed approach is compared to machining-dedicated code results and experimental data. The SPH cutting...
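As an illustrative sketch of the SPH machinery (1-D, not the Ls-Dyna cutting model), a cubic spline smoothing kernel and the standard density summation recover the expected density for uniformly spaced particles:

```python
def w_cubic_1d(r, h):
    """1-D cubic spline smoothing kernel used in SPH, with support 2h."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)          # 1-D normalization constant
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def sph_density(positions, masses, h):
    """SPH density summation: rho_i = sum_j m_j * W(x_i - x_j, h)."""
    return [sum(m * w_cubic_1d(xi - xj, h)
                for xj, m in zip(positions, masses))
            for xi in positions]

# Uniformly spaced unit-line particles of mass 0.1 at spacing 0.1:
# the interior density should be mass/spacing = 1.0; near the free
# boundary the summation naturally drops off (no mesh to manage).
xs = [0.1 * i for i in range(41)]
rho = sph_density(xs, [0.1] * 41, h=0.2)
```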
The colour analysis method applied to homogeneous rocks
Halász Amadé
2015-12-01
Full Text Available Computer-aided colour analysis can facilitate cyclostratigraphic studies. Here we report on a case study involving the development of a digital colour analysis method for examination of the Boda Claystone Formation, the formation in Hungary most suitable for the disposal of high-level radioactive waste. Rock type colours are reddish brown or brownish red, or any shade between brown and red. The method presented here could be used to differentiate similar colours and to identify gradual transitions between them; the latter are of great importance in a cyclostratigraphic analysis of the succession. Geophysical well-logging has demonstrated the existence of characteristic cyclic units, as detected by colour and natural gamma. Based on our research, colour, natural gamma and lithology correlate well. For core Ib-4, these features reveal the presence of orderly cycles with thicknesses of roughly 0.64 to 13 metres. Once the core has been scanned, this is a time- and cost-effective method.
Differential correction method applied to measurement of the FAST reflector
Li, Xin-Yi; Zhu, Li-Chun; Hu, Jin-Wen; Li, Zhi-Heng
2016-08-01
The Five-hundred-meter Aperture Spherical radio Telescope (FAST) adopts an active deformable main reflector which is composed of 4450 triangular panels. During an observation, the illuminated area of the reflector is deformed into a 300-m diameter paraboloid and directed toward a source. To achieve accurate control of the reflector shape, positions of 2226 nodes distributed around the entire reflector must be measured with sufficient precision within a limited time, which is a challenging task because of the large scale. Measurement of the FAST reflector makes use of total stations and node targets. However, in this case the effect of the atmosphere on measurement accuracy is a significant issue. This paper investigates a differential correction method for total station measurement of the FAST reflector. A multi-benchmark differential correction method, including a scheme for benchmark selection and weight assignment, is proposed. On-site evaluation experiments show there is an improvement of 70%-80% in measurement accuracy compared with the uncorrected measurement, verifying the effectiveness of the proposed method.
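A simplified sketch of the multi-benchmark idea (the paper's actual selection and weight-assignment scheme is not given): benchmark targets with known coordinates sample the atmospheric error field, and a node measurement is corrected by an inverse-distance weighted mean of the benchmark errors:

```python
def correct_with_benchmarks(node_xy, node_meas, benchmarks):
    """Multi-benchmark differential correction sketch: each benchmark has a
    known error (measured minus true).  The node measurement is corrected
    by the inverse-distance weighted mean of benchmark errors, assuming
    the atmospheric effect varies smoothly across the site."""
    weight_sum, weighted_err = 0.0, 0.0
    for (bx, by), err in benchmarks:
        d = ((node_xy[0] - bx) ** 2 + (node_xy[1] - by) ** 2) ** 0.5
        w = 1.0 / max(d, 1e-9)      # closer benchmarks weigh more
        weight_sum += w
        weighted_err += w * err
    return node_meas - weighted_err / weight_sum

# Hypothetical smooth bias field err(x, y) = 2 + 0.01*x affecting all
# measurements; three benchmarks sample it, and one node is corrected.
bias = lambda x, y: 2.0 + 0.01 * x
benchmarks = [((0.0, 0.0), bias(0.0, 0.0)),
              ((100.0, 0.0), bias(100.0, 0.0)),
              ((0.0, 100.0), bias(0.0, 100.0))]
true_value = 50.0
node = (50.0, 50.0)
measured = true_value + bias(*node)
corrected = correct_with_benchmarks(node, measured, benchmarks)
```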
Applying stochastic methods to building thermal design and control
Scartezzini, J.L.; Bottazzi, F.; Nygard-Ferguson, M. (Solar Energy and Building Physics Laboratory, Ecole Polytechnique Federale de Lausanne (CH))
1990-01-01
The object of this project is to develop numerical tools based on stochastic methods derived from probability theory. Two objectives have been identified: I. The development of stochastic simulation techniques for thermal design and analysis of passive solar systems and buildings; II. The development of strategies for predictive controllers which can account for the stochastic behaviour of the weather and the occupants of buildings. The advantage of the stochastic approach is to treat the weather evolution and occupant behaviour by their probabilities. Prior to this work, an important effort was made towards the development of a stochastic approach to numerical simulations of passive solar systems. A smaller project also treated the application of stochastic methods to predictive building thermal control. Encouraging results were obtained. However, they gave rise to questions studied within the framework of this project: design and analysis (hybrid dynamic simulation, Markovian stochastic simulation) and predictive control. Two different institutions of the Swiss Federal Institute of Technology in Lausanne collaborate in this project: the 'Solar Energy and Building Physics Laboratory (LESO-PB)' in the Physics Department and the 'Chair of Operations Research' in the Mathematics Department. This document is a synthesis report of the work carried out within the project 'Application des methodes stochastiques: dimensionnement et regulation (Phase I)'. A detailed description of the results is available in French. (author) 20 figs., 10 refs.
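The report's stochastic weather models are not reproduced here; as a minimal illustration of treating weather evolution by probabilities, a two-state Markov chain (e.g. sunny/cloudy) has a closed-form stationary distribution that a predictive controller could exploit:

```python
def stationary_two_state(p01, p10):
    """Stationary distribution of a two-state Markov chain with transition
    probabilities p01 (state 0 -> 1) and p10 (state 1 -> 0).  Solving
    pi*P = pi with pi0 + pi1 = 1 gives pi0 = p10 / (p01 + p10)."""
    pi0 = p10 / (p01 + p10)
    return pi0, 1.0 - pi0

# Hypothetical weather: 'sunny' persists with prob. 0.8 (so p01 = 0.2),
# 'cloudy' persists with prob. 0.6 (so p10 = 0.4).
pi_sunny, pi_cloudy = stationary_two_state(p01=0.2, p10=0.4)
```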
Stolzer, Alan J.; Halford, Carl
2007-01-01
In a previous study, multiple regression techniques were applied to Flight Operations Quality Assurance-derived data to develop parsimonious model(s) for fuel consumption on the Boeing 757 airplane. The present study examined several data mining algorithms, including neural networks, on the fuel consumption problem and compared them to the multiple regression results obtained earlier. Using regression methods, parsimonious models were obtained that explained approximately 85% of the variation in fuel flow. In general data mining methods were more effective in predicting fuel consumption. Classification and Regression Tree methods reported correlation coefficients of .91 to .92, and General Linear Models and Multilayer Perceptron neural networks reported correlation coefficients of about .99. These data mining models show great promise for use in further examining large FOQA databases for operational and safety improvements.
Reducing mining losses when applying room and pillar mining methods
Duchrow, G.; Schilder, C.
1985-12-01
In potassium mining in the German Democratic Republic, the reduction of losses is an important problem, and considerable scientific and technical efforts have been made on this sector. There were four stages of development: A period of 'empirical' dimensioning was followed by dimensioning on a mathematical basis and by the optimized design of winning parameter relationships. The latest stage focuses on the optimisation of pillar parameters in suitable rock salt and sylvinite fields. The different stages of development are described, explained, and illustrated by examples. The efficiency in loss reduction is determined, and methods for monitoring the winning operations are presented. (orig./MOS).
Applying corpus methods to written academic texts: Explorations of MICUSP
Ute Römer
2010-08-01
Full Text Available Based on explorations of the Michigan Corpus of Upper-level Student Papers (MICUSP), the present paper provides an introduction to the central techniques in corpus analysis, including the creation and examination of word lists, keyword lists, concordances, and cluster lists. It also presents a MICUSP-based case study of the demonstrative pronoun this and the distribution and use of its attended and unattended forms in different disciplinary subsets of the corpus. The paper aims to demonstrate how corpus linguistics and corpus methods can contribute to writing research and provide fruitful insights into student academic writing.
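A minimal sketch of two of the techniques named (word lists and concordances), using a toy text rather than MICUSP data:

```python
from collections import Counter

def word_list(text):
    """Frequency-ranked word list, the starting point of most corpus
    analyses."""
    return Counter(text.lower().split())

def concordance(text, node, width=2):
    """Simple KWIC (key word in context) lines for a node word, with
    'width' words of context on each side."""
    tokens = text.lower().split()
    lines = []
    for i, tok in enumerate(tokens):
        if tok == node:
            left = tokens[max(0, i - width):i]
            right = tokens[i + 1:i + 1 + width]
            lines.append(" ".join(left + ["[" + tok + "]"] + right))
    return lines

sample = ("this paper shows this method and this corpus "
          "supports the analysis of this")
freq = word_list(sample)
kwic = concordance(sample, "this")
```

Real corpus tools add tokenization, lemmatization, and keyword statistics, but the data structures are essentially these.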
Methods of analysis applied on the e-shop Arsta
Flégl, Jan
2013-01-01
Bachelor thesis is focused on summarizing methods of e-shop analysis. The first chapter summarizes and describes the basics of e-commerce and e-shops in general. The second chapter deals with search engines, their functioning and in what ways it is possible to influence the order of search results. Special attention is paid to the optimization and search engine marketing. The third chapter summarizes basic tools of the Google Analytics. The fourth chapter uses findings of all the previous cha...
Artificial Intelligence Methods Applied to Parameter Detection of Atrial Fibrillation
Arotaritei, D.; Rotariu, C.
2015-09-01
In this paper we present a novel method to develop an atrial fibrillation (AF) detector based on statistical descriptors and a hybrid neuro-fuzzy and crisp system. The inference system produces if-then-else rules that are extracted to construct a binary decision system: normal or atrial fibrillation. We use TPR (Turning Point Ratio), SE (Shannon Entropy) and RMSSD (Root Mean Square of Successive Differences) along with a new descriptor, Teager-Kaiser energy, in order to improve the accuracy of detection. The descriptors are calculated over a sliding window that produces a very large number of vectors (a massive dataset) used by the classifier. The window length is a crisp descriptor, while the rest of the descriptors are interval-valued. The parameters of the hybrid system are adapted using a Genetic Algorithm (GA) with a single-objective fitness target: highest values for sensitivity and specificity. The rules are extracted and they are part of the decision system. The proposed method was tested using the Physionet MIT-BIH Atrial Fibrillation Database and the experimental results revealed a good accuracy of AF detection in terms of sensitivity and specificity (above 90%).
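Two of the named descriptors are standard and can be sketched directly (the hybrid neuro-fuzzy classifier itself is not reproduced):

```python
import math

def rmssd(rr):
    """Root Mean Square of Successive Differences of RR intervals,
    a standard heart-rate-variability descriptor (high for irregular
    rhythms such as AF)."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def shannon_entropy(probabilities):
    """Shannon entropy in bits of a discrete distribution, e.g. of a
    binned RR-interval histogram inside the sliding window."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0.0)

rr = [1.00, 1.10, 0.90, 1.05]              # RR intervals in seconds
hrv = rmssd(rr)
h_uniform = shannon_entropy([0.25] * 4)    # maximal for 4 equal bins
```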
Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations
Lynnes, Chris; Little, Mike; Huang, Thomas; Jacob, Joseph; Yang, Phil; Kuo, Kwo-Sen
2016-01-01
Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based file systems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.
Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations
Lynnes, C.; Little, M. M.; Huang, T.; Jacob, J. C.; Yang, C. P.; Kuo, K. S.
2016-12-01
Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based filesystems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.
A multilateral filtering method applied to airplane runway image
Yu, Zhang; Run-quan, Wang
2008-01-01
By considering the features of airport runway image filtering, an improved bilateral filtering method is proposed which can remove noise while preserving edges. First, the steerable filtering decomposition is used to calculate the sub-band parameters of 4 orientations, and the texture feature matrix is then obtained from the sub-band local median energy. Texture similarity, spatial closeness and color similarity functions are used to filter the image. The effect of the weighting function parameters is also qualitatively analyzed. Comparison with the standard bilateral filter and simulation results for a real airport runway image show that the multilateral filtering is more effective than the standard bilateral filtering.
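A 1-D sketch of the underlying bilateral filter (the paper's multilateral extension adds a texture-similarity weight, which is omitted here): each sample is replaced by a weighted mean of its neighbours, with the weight combining spatial closeness and intensity similarity so that averaging does not cross strong edges:

```python
import math

def bilateral_1d(signal, sigma_s=1.0, sigma_r=1.0, radius=2):
    """1-D bilateral filter: spatial Gaussian times range (intensity)
    Gaussian, normalized per output sample."""
    out = []
    n = len(signal)
    for i in range(n):
        num = den = 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)) *
                 math.exp(-((signal[i] - signal[j]) ** 2) / (2 * sigma_r ** 2)))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

# A step edge survives filtering: samples on the far side of the jump get
# a vanishing range weight (exp(-10^2/2) is ~2e-22) and barely contribute.
step = [0.0, 0.0, 0.0, 10.0, 10.0, 10.0]
filtered = bilateral_1d(step)
```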
Applying Simulation Method in Formulation of Gluten-Free Cookies
Nikitina Marina
2017-01-01
Full Text Available At present, a priority direction in the development of new food products is the development of products for special purposes. Among these are gluten-free confectionery products intended for people with celiac disease. Gluten-free products are in demand among consumers, and there is a need to expand the assortment and improve quality indicators. This article presents the results of studies on the development of pastry products based on amaranth flour, which does not contain gluten. The study is based on a simulation method for gluten-free confectionery recipes with a functional orientation, used to optimize their chemical composition. The resulting products will diversify and supplement with necessary nutrients the diet of people with gluten intolerance, as well as of those who follow a gluten-free diet.
Complexity Methods Applied to Turbulence in Plasma Astrophysics
Vlahos, Loukas
2016-01-01
In this review, many of the well-known tools for the analysis of complex systems are used in order to study the global coupling of the turbulent convection zone with the solar atmosphere where the magnetic energy is dissipated explosively. Several well documented observations are not easy to interpret with the use of Magnetohydrodynamic (MHD) and/or kinetic numerical codes. Such observations are: (1) the size distribution of the Active Regions (AR) on the solar surface, (2) the fractal and multifractal characteristics of the observed magnetograms, (3) the self-organised characteristics of the explosive magnetic energy release and (4) the very efficient acceleration of particles during the flaring periods in the solar corona. We review briefly the work published in the last twenty-five years on the above issues and propose solutions by using methods borrowed from the analysis of complex systems. The scenario which emerged is as follows: (a) The fully developed turbulence in the convection zone generates and trans...
Applied statistical methods in agriculture, health and life sciences
Lawal, Bayo
2014-01-01
This textbook teaches crucial statistical methods to answer research questions using a unique range of statistical software programs, including MINITAB and R. This textbook is developed for undergraduate students in agriculture, nursing, biology and biomedical research. Graduate students will also find it to be a useful way to refresh their statistics skills and to reference software options. The unique combination of examples is approached using MINITAB and R for their individual strengths. Subjects covered include among others data description, probability distributions, experimental design, regression analysis, randomized design and biological assay. Unlike other biostatistics textbooks, this text also includes outliers, influential observations in regression and an introduction to survival analysis. Material is taken from the author's extensive teaching and research in Africa, USA and the UK. Sample problems, references and electronic supplementary material accompany each chapter.
Applying Human-Centered Design Methods to Scientific Communication Products
Burkett, E. R.; Jayanty, N. K.; DeGroot, R. M.
2016-12-01
Knowing your users is a critical part of developing anything to be used or experienced by a human being. User interviews, journey maps, and personas are all techniques commonly employed in human-centered design practices because they have proven effective for informing the design of products and services that meet the needs of users. Many non-designers are unaware of the usefulness of personas and journey maps. Scientists who are interested in developing more effective products and communication can adopt and employ user-centered design approaches to better reach intended audiences. Journey mapping is a qualitative data-collection method that captures the story of a user's experience over time as related to the situation or product that requires development or improvement. Journey maps help define user expectations, where they are coming from, what they want to achieve, what questions they have, their challenges, and the gaps and opportunities that can be addressed by designing for them. A persona is a tool used to describe the goals and behavioral patterns of a subset of potential users or customers. The persona is a qualitative data model that takes the form of a character profile, built upon data about the behaviors and needs of multiple users. Gathering data directly from users avoids the risk of basing models on assumptions, which are often limited by misconceptions or gaps in understanding. Journey maps and user interviews together provide the data necessary to build the composite character that is the persona. Because a persona models the behaviors and needs of the target audience, it can then be used to make informed product design decisions. We share the methods and advantages of developing and using personas and journey maps to create more effective science communication products.
Atomistic Method Applied to Computational Modeling of Surface Alloys
Bozzolo, Guillermo H.; Abel, Phillip B.
2000-01-01
The formation of surface alloys is a growing research field that, in terms of the surface structure of multicomponent systems, defines the frontier for both experimental and theoretical techniques. Because of the impact that the formation of surface alloys has on surface properties, researchers need reliable methods to predict new surface alloys and to help interpret unknown structures. The structure of surface alloys and when, and even if, they form are largely unpredictable from the known properties of the participating elements. No unified theory or model to date can infer surface alloy structures from the constituents' properties or their bulk alloy characteristics. In spite of these severe limitations, a growing catalogue of such systems has been developed during the last decade, and only recently are global theories being advanced to fully understand the phenomenon. None of the methods used in other areas of surface science can properly model even the already known cases. Aware of these limitations, the Computational Materials Group at the NASA Glenn Research Center at Lewis Field has developed a useful, computationally economical, and physically sound methodology to enable the systematic study of surface alloy formation in metals. This tool has been tested successfully on several known systems for which hard experimental evidence exists and has been used to predict ternary surface alloy formation (results to be published: Garces, J.E.; Bozzolo, G.; and Mosca, H.: Atomistic Modeling of Pd/Cu(100) Surface Alloy Formation. Surf. Sci., 2000 (in press); Mosca, H.; Garces J.E.; and Bozzolo, G.: Surface Ternary Alloys of (Cu,Au)/Ni(110). (Accepted for publication in Surf. Sci., 2000.); and Garces, J.E.; Bozzolo, G.; Mosca, H.; and Abel, P.: A New Approach for Atomistic Modeling of Pd/Cu(110) Surface Alloy Formation. (Submitted to Appl. Surf. Sci.)). Ternary alloy formation is a field yet to be fully explored experimentally. The computational tool, which is based on
Simplified Methods Applied to Nonlinear Motion of Spar Platforms
Haslum, Herbjoern Alf
2000-07-01
Simplified methods for prediction of the motion response of spar platforms are presented. The methods are based on first- and second-order potential theory. Nonlinear drag loads and the effect of the pumping motion in a moon-pool are also considered. Large-amplitude pitch motions coupled to extreme-amplitude heave motions may arise when spar platforms are exposed to long-period swell. The phenomenon is investigated theoretically and explained as a Mathieu instability. It is caused by nonlinear coupling effects between heave, surge, and pitch. It is shown that for a critical wave period, the envelope of the heave motion makes the pitch motion unstable. For the same wave period, a higher-order pitch/heave coupling excites resonant heave response. This mutual interaction largely amplifies both the pitch and the heave response. As a result, the pitch/heave instability revealed in this work is more critical than the previously well-known Mathieu instability in pitch, which occurs if the wave period (or the natural heave period) is half the natural pitch period. The Mathieu instability is demonstrated both by numerical simulations with a newly developed calculation tool and in model experiments. In order to learn more about the conditions for this instability to occur and also how it may be controlled, different damping configurations (heave damping disks and pitch/surge damping fins) are evaluated both in model experiments and by numerical simulations. With increased drag damping, larger wave amplitudes and more time are needed to trigger the instability. The pitch/heave instability is a low-probability-of-occurrence phenomenon. Extreme wave periods are needed for the instability to be triggered, about 20 seconds for a typical 200m draft spar. However, it may be important to consider the phenomenon in design since the pitch/heave instability is very critical. It is also seen that when classical spar platforms (constant cylindrical cross section and about 200m draft
Applying the partitioned multiobjective risk method (PMRM) to portfolio selection.
Reyes Santos, Joost; Haimes, Yacov Y
2004-06-01
The analysis of risk-return tradeoffs and their practical applications to portfolio analysis paved the way for Modern Portfolio Theory (MPT), which won Harry Markowitz a 1990 Nobel Prize in Economics. A typical approach in measuring a portfolio's expected return is based on the historical returns of the assets included in a portfolio. On the other hand, portfolio risk is usually measured using volatility, which is derived from the historical variance-covariance relationships among the portfolio assets. This article focuses on assessing portfolio risk, with emphasis on extreme risks. To date, volatility is a major measure of risk owing to its simplicity and validity for relatively small asset price fluctuations. Volatility is a justified measure for stable market performance, but it is weak in addressing portfolio risk under aberrant market fluctuations. Extreme market crashes such as that on October 19, 1987 ("Black Monday") and catastrophic events such as the terrorist attack of September 11, 2001 that led to a four-day suspension of trading on the New York Stock Exchange (NYSE) are a few examples where measuring risk via volatility can lead to inaccurate predictions. Thus, there is a need for a more robust metric of risk. By invoking the principles of the extreme-risk-analysis method through the partitioned multiobjective risk method (PMRM), this article contributes to the modeling of extreme risks in portfolio performance. A measure of an extreme portfolio risk, denoted by f(4), is defined as the conditional expectation for a lower-tail region of the distribution of the possible portfolio returns. This article presents a multiobjective problem formulation consisting of optimizing expected return and f(4), whose solution is determined using Evolver, a software package that implements a genetic algorithm. Under business-as-usual market scenarios, the results of the proposed PMRM portfolio selection model are found to be compatible with those of the volatility-based model
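The extreme-risk measure f(4) described above is simply a lower-tail conditional expectation, which is easy to sketch numerically. The synthetic returns and the 5% partitioning point below are illustrative assumptions, not data from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily portfolio returns (illustrative synthetic data).
returns = rng.normal(loc=0.0005, scale=0.01, size=10_000)

def f4(returns, alpha=0.05):
    """Conditional expectation over the lower tail of the return
    distribution, E[r | r <= q_alpha]: the extreme-risk measure that
    the PMRM formulation denotes f(4). The partition point alpha is an
    assumed choice here."""
    q = np.quantile(returns, alpha)
    tail = returns[returns <= q]
    return tail.mean()

expected_return = returns.mean()
extreme_risk = f4(returns, alpha=0.05)
# The tail mean necessarily lies below the overall mean.
assert extreme_risk < expected_return
```

A PMRM-style portfolio selection would then trade off `expected_return` against `extreme_risk` across candidate portfolios, for instance with a genetic algorithm as the article does with Evolver.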
Boundary integral method applied in chaotic quantum billiards
Li, B; Li, Baowen; Robnik, Marko
1995-01-01
The boundary integral method (BIM) is a formulation of the Helmholtz equation in the form of an integral equation suitable for numerical discretization to solve the quantum billiard problem. This paper is an extensive numerical survey of the BIM in a variety of quantum billiards: integrable (circle, rectangle), KAM systems (Robnik billiard) and fully chaotic (ergodic, such as the stadium, Sinai billiard and cardioid billiard). On the theoretical side we point out some serious flaws in the derivation of the BIM in the literature and show how the final formula (which nevertheless was correct) should be derived in a sound way, and we also argue that a simple-minded application of the BIM in nonconvex geometries presents serious difficulties or even fails. On the numerical side we have analyzed the scaling of the averaged absolute value of the systematic error ΔE of the eigenenergy, in units of mean level spacing, with the density of discretization (b = number of numerical nodes on the boundary within one de Broglie wavelength), and we f...
Perturbation Method of Analysis Applied to Substitution Measurements of Buckling
Persson, Rolf
1966-11-15
Calculations with two-group perturbation theory on substitution experiments with homogenized regions show that a condensation of the results into a one-group formula is possible, provided that a transition region is introduced in a proper way. In heterogeneous cores the transition region comes in as a consequence of a new cell concept. By making use of progressive substitutions the properties of the transition region can be regarded as fitting parameters in the evaluation procedure. The thickness of the region is approximately equal to the sum of (1/τ + 1/L²)^(-1/2) for the test and reference regions. Consequently a region where L² ≫ τ, e.g. D₂O, contributes √τ to the thickness. In cores where τ ≫ L², e.g. H₂O assemblies, the thickness of the transition region is determined by L. Experiments on rod lattices in D₂O and on test regions of D₂O alone (where B² = -1/L²) are analysed. The lattice measurements, where the pitches differed by a factor of √2, gave excellent results, whereas the determination of the diffusion length in D₂O by this method was not quite successful. Even regions containing only one test element can be used in a meaningful way in the analysis.
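The thickness formula quoted above is easy to evaluate. The sketch below uses hypothetical values of the age τ and diffusion area L² (not values from the report) to show the two limiting cases:

```python
from math import sqrt

def transition_thickness(tau, L2):
    """Per-region contribution (1/tau + 1/L^2)^(-1/2); the abstract's
    transition-region thickness is the sum of this quantity for the
    test and reference regions."""
    return 1.0 / sqrt(1.0 / tau + 1.0 / L2)

# Illustrative (hypothetical) values. The contribution is bounded above
# by both sqrt(tau) and L: when L^2 >> tau it approaches sqrt(tau)
# (D2O-like case), and when tau >> L^2 it approaches L (H2O-like case).
tau_d2o, L2_d2o = 120.0, 1.0e4   # cm^2, L^2 >> tau
tau_h2o, L2_h2o = 27.0, 8.0      # cm^2, tau >> L^2

print(transition_thickness(tau_d2o, L2_d2o))  # ~10.89, near sqrt(120) ~ 10.95
print(transition_thickness(tau_h2o, L2_h2o))  # ~2.48, below L = sqrt(8) ~ 2.83
```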
What health care managers do: applying Mintzberg's structured observation method.
Arman, Rebecka; Dellve, Lotta; Wikström, Ewa; Törnström, Linda
2009-09-01
Aim The aim of the present study was to explore and describe what characterizes first- and second-line health care managers' use of time. Background Many Swedish health care managers experience difficulties managing their time. Methods Structured and unstructured observations were used. Ten first- and second-line managers in different health care settings were each studied in detail for 3.5 to 4 days. The duration and frequency of different types of work activities were analysed. Results The individual variation was considerable. The managers' days consisted to a large degree of short activities (<9 minutes). On average, nearly half of the managers' time was spent in meetings. Most of the managers' time was spent with subordinates and <1% was spent alone with their superiors. Sixteen per cent of their time was spent on administration and only a small fraction on explicit strategic work. Conclusions The individual variations in time-use patterns suggest the possibility of interventions to support changes in time use. Implications for nursing management A reliable description of what managers do paves the way for analyses of what they should do to be effective.
Nondestructive methods of analysis applied to oriental swords
Edge, David
2015-12-01
Various neutron techniques were employed at the Budapest Nuclear Centre in an attempt to find the most useful method for analysing the high-carbon steels found in Oriental arms and armour, such as those in the Wallace Collection, London. Neutron diffraction was found to be the most useful in terms of identifying such steels and also indicating the presence of hidden patterns.
Variational methods applied to problems of diffusion and reaction
Strieder, William
1973-01-01
This monograph is an account of some problems involving diffusion or diffusion with simultaneous reaction that can be illuminated by the use of variational principles. It was written during a period that included sabbatical leaves of one of us (W. S.) at the University of Minnesota and the other (R. A.) at the University of Cambridge, and we are grateful to the Petroleum Research Fund for helping to support the former and the Guggenheim Foundation for making possible the latter. We would also like to thank Stephen Prager for getting us together in the first place and for showing how interesting and useful these methods can be. We have also benefitted from correspondence with Dr. A. M. Arthurs of the University of York and from the counsel of Dr. B. D. Coleman, the general editor of this series. Table of Contents: Chapter 1. Introduction and Preliminaries; 1.1 General Survey; 1.2 Phenomenological Descriptions of Diffusion and Reaction; 1.3 Correlation Functions for Random Suspensions; 1.4 Mean Free ...
SIRIUS - A one-dimensional multigroup analytic nodal diffusion theory code
Forslund, P. [Westinghouse Atom AB, Vaesteraas (Sweden)
2000-09-01
In order to evaluate the relative merits of some proposed intranodal cross-section models, a computer code called Sirius has been developed. Sirius is a one-dimensional, multigroup analytic nodal diffusion theory code with microscopic depletion capability. Sirius provides the possibility of performing a spatial homogenization and energy collapsing of cross sections. In addition, a so-called pin power reconstruction method is available for the purpose of reconstructing 'heterogeneous' pin qualities. Consequently, Sirius has the capability of performing all the calculations (incl. depletion calculations) which are an integral part of the nodal calculation procedure. In this way, an unambiguous numerical analysis of intranodal cross-section models is made possible. In this report, the theory of the nodal models implemented in Sirius as well as the verification of the most important features of these models are addressed.
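Sirius itself implements an analytic nodal discretization with depletion, which is well beyond a short sketch. As a minimal conceptual stand-in for the eigenvalue part of such a calculation, here is a one-group, one-dimensional finite-difference k-eigenvalue solve by power iteration; all cross-section values are hypothetical:

```python
import numpy as np

# One-group, 1-D slab, finite-difference k-eigenvalue problem solved by
# power iteration. Zero-flux boundary conditions. This is a conceptual
# stand-in only; Sirius uses an analytic nodal discretization.
N, h = 50, 1.0                         # mesh cells, cell width (cm)
D, sig_a, nu_sig_f = 1.2, 0.03, 0.04   # hypothetical diffusion coeff.,
                                       # absorption, nu*fission (1/cm)

# Leakage-plus-absorption operator (tridiagonal).
A = np.zeros((N, N))
for i in range(N):
    A[i, i] = 2 * D / h**2 + sig_a
    if i > 0:
        A[i, i - 1] = -D / h**2
    if i < N - 1:
        A[i, i + 1] = -D / h**2

phi = np.ones(N)
k = 1.0
for _ in range(200):                      # power iteration
    source = nu_sig_f * phi / k
    phi_new = np.linalg.solve(A, source)
    k = k * phi_new.sum() / phi.sum()     # eigenvalue update
    phi = phi_new

print(round(k, 5))  # effective multiplication factor k_eff
```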
Complexity methods applied to turbulence in plasma astrophysics
Vlahos, L.; Isliker, H.
2016-09-01
In this review many of the well-known tools for the analysis of complex systems are used in order to study the global coupling of the turbulent convection zone with the solar atmosphere, where the magnetic energy is dissipated explosively. Several well-documented observations are not easy to interpret with the use of magnetohydrodynamic (MHD) and/or kinetic numerical codes. Such observations are: (1) the size distribution of the Active Regions (AR) on the solar surface, (2) the fractal and multifractal characteristics of the observed magnetograms, (3) the self-organised characteristics of the explosive magnetic energy release, and (4) the very efficient acceleration of particles during the flaring periods in the solar corona. We briefly review the work published in the last twenty-five years on the above issues and propose solutions by using methods borrowed from the analysis of complex systems. The scenario which emerged is as follows: (a) The fully developed turbulence in the convection zone generates and transports magnetic flux tubes to the solar surface. Using probabilistic percolation models we were able to reproduce the size distribution and the fractal properties of the emerged and randomly moving magnetic flux tubes. (b) Using a Non-Linear Force-Free (NLFF) magnetic extrapolation numerical code we can explore how the emerged magnetic flux tubes interact nonlinearly and form thin and Unstable Current Sheets (UCS) inside the coronal part of the AR. (c) The fragmentation of the UCS and the local redistribution of the magnetic field, when the local current exceeds a critical threshold, is a key process which drives avalanches and forms coherent structures. This local reorganization of the magnetic field enhances the energy dissipation and influences the global evolution of the complex magnetic topology. Using a Cellular Automaton and following the simple rules of Self-Organized Criticality (SOC), we were able to reproduce the statistical characteristics of the
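The SOC cellular-automaton ingredients described in (c), namely slow driving, a critical threshold on a local quantity, and local redistribution that can cascade, can be illustrated with the classic Bak-Tang-Wiesenfeld sandpile. This is a generic sketch, not the specific automaton of the review; the grid size and threshold are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal Bak-Tang-Wiesenfeld sandpile: slow driving, a critical
# threshold, and local redistribution with open boundaries.
L, z_crit = 20, 4
grid = np.zeros((L, L), dtype=int)
avalanche_sizes = []

for _ in range(5000):
    i, j = rng.integers(0, L, size=2)
    grid[i, j] += 1                        # slow external driving
    size = 0
    while (grid >= z_crit).any():          # relax until below threshold
        for x, y in zip(*np.where(grid >= z_crit)):
            grid[x, y] -= 4                # local redistribution (topple)
            size += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= x + dx < L and 0 <= y + dy < L:
                    grid[x + dx, y + dy] += 1   # grains fall off open edges
    if size:
        avalanche_sizes.append(size)

# Avalanche sizes span many scales, the statistical hallmark of SOC.
print(len(avalanche_sizes), max(avalanche_sizes))
```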
Exact boundary controllability of nodal profile for quasilinear hyperbolic systems
Li, Tatsien; Gu, Qilong
2016-01-01
This book provides a comprehensive overview of the exact boundary controllability of nodal profile, a new kind of exact boundary controllability stimulated by some practical applications. This kind of controllability is useful in practice as it does not require any precisely given final state to be attained at a suitable time t=T by means of boundary controls; instead, it requires the state to exactly fit any given demand (profile) on one or more nodes after a suitable time t=T by means of boundary controls. In this book we present a general discussion of this kind of controllability for general 1-D first-order quasilinear hyperbolic systems and for general 1-D quasilinear wave equations on an interval as well as on a tree-like network, using a modular-structure constructive method suggested in LI Tatsien's monograph "Controllability and Observability for Quasilinear Hyperbolic Systems" (2010), and we establish a complete theory on the local exact boundary controllability of nodal profile for 1-D quasilinear hyp...
A nodal collocation approximation for the multi-dimensional P_L equations - 2D applications
Capilla, M. [Departamento de Matematica Aplicada, Universidad Politecnica de Valencia, Camino de Vera 14, E-46022 Valencia (Spain)], E-mail: tcapilla@mat.upv.es; Talavera, C.F. [Departamento de Matematica Aplicada, Universidad Politecnica de Valencia, Camino de Vera 14, E-46022 Valencia (Spain)], E-mail: talavera@mat.upv.es; Ginestar, D. [Departamento de Matematica Aplicada, Universidad Politecnica de Valencia, Camino de Vera 14, E-46022 Valencia (Spain)], E-mail: dginesta@mat.upv.es; Verdu, G. [Departamento de Ingenieria Quimica y Nuclear, Universidad Politecnica de Valencia, Camino de Vera 14, E-46022 Valencia (Spain)], E-mail: gverdu@iqn.upv.es
2008-10-15
A classical approach to solve the neutron transport equation is to apply the spherical harmonics method, obtaining a finite approximation known as the P_L equations. In this work, the derivation of the P_L equations for multi-dimensional geometries is reviewed and a nodal collocation method is developed to discretize these equations on a rectangular mesh, based on the expansion of the neutronic fluxes in terms of orthogonal Legendre polynomials. The performance of the method and the dominant transport Lambda Modes are obtained for a homogeneous 2D problem, a heterogeneous 2D anisotropic scattering problem, a heterogeneous 2D problem and a benchmark problem corresponding to a MOX fuel reactor core.
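The core expansion step of such a nodal collocation scheme, representing a flux profile on one node as a truncated series of orthogonal Legendre polynomials, can be sketched with NumPy. The smooth test profile below is an illustrative stand-in for a neutronic flux, not data from the paper:

```python
import numpy as np
from numpy.polynomial import legendre

# Represent a flux-like profile on one node as a truncated Legendre
# series on [-1, 1]. The profile is an illustrative stand-in.
x = np.linspace(-1.0, 1.0, 200)
flux = np.cosh(2.0 * x)          # smooth, symmetric test profile

coeffs = legendre.legfit(x, flux, deg=4)   # truncated Legendre expansion
reconstruction = legendre.legval(x, coeffs)

# For a smooth profile the truncation error decays rapidly with degree.
max_err = np.abs(flux - reconstruction).max()
print(max_err)
```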
Small renal tumor with lymph nodal enlargement: A histopathological surprise
Mujeeburahiman Thottathil; Ashish Verma; Nischith D'souza; Altaf Khan
2016-01-01
Renal cancer with a lymph nodal mass on investigation is clinically suggestive of an advanced tumor. Small renal cancers are not commonly associated with lymph nodal metastasis. Association of renal cell carcinoma with renal tuberculosis (TB) in the same kidney is also rare. We report here a case of small renal cancer with multiple hilar and paraaortic lymph nodes in a patient who underwent radical nephrectomy; the histopathology report showed renal and lymph nodal TB as well.
Mugica R, C.A.; Valle G, E. del [IPN, ESFM, Departamento de Ingenieria Nuclear, 07738 Mexico D.F. (Mexico)]. e-mail: cmugica@ipn.mx
2005-07-01
In 2002, E. del Valle and Ernest H. Mund developed a technique to solve the neutron transport equations numerically in discrete ordinates and hexagonal geometry using two weakly discontinuous finite-element nodal schemes, denoted WD_{5,3} and WD_{12,8} (for Weakly Discontinuous). The technique consists in representing each hexagon as the union of three rhombuses, each of which is mapped onto a square on which the WD_{5,3} and WD_{12,8} methods are applied. In this work the same transport equations are solved with the same per-hexagon discretization technique, but using two strongly discontinuous finite-element nodal schemes, denoted SD_3 and SD_8 (for Strongly Discontinuous). The application in each case is described, as well as a reference problem for which results for the effective multiplication factor are provided. A comparison with the results obtained by del Valle and Mund is carried out for different angular and spatial discretization meshes. (Author)
Topological surface states in nodal superconductors.
Schnyder, Andreas P; Brydon, Philip M R
2015-06-24
Topological superconductors have become a subject of intense research due to their potential use for technical applications in device fabrication and quantum information. Besides fully gapped superconductors, unconventional superconductors with point or line nodes in their order parameter can also exhibit nontrivial topological characteristics. This article reviews recent progress in the theoretical understanding of nodal topological superconductors, with a focus on Weyl and noncentrosymmetric superconductors and their protected surface states. Using selected examples, we review the bulk topological properties of these systems, study different types of topological surface states, and examine their unusual properties. Furthermore, we survey some candidate materials for topological superconductivity and discuss different experimental signatures of topological surface states.
Tunable Weyl Semimetals in Periodically Driven Nodal Line Semimetals
Yan, Zhongbo
2016-01-01
Weyl semimetals and nodal line semimetals are characterized by linear band-touching at nodal points and lines, respectively. We predict that circularly polarized light drives nodal line semimetals into Weyl semimetals. The Weyl points of the Floquet Weyl semimetal thus obtained are tunable by the incident light, which enables investigation of them in a highly controllable manner. The transition from nodal line semimetals to Weyl semimetals is accompanied by the emergence of a large and tunable anomalous Hall conductivity. Our predictions are experimentally testable in thin films of topological semimetals by either pump-probe ARPES or transport measurements.
Luo, Yijun; Liu, Yuhui; Wang, Xiaoli; Zhang, Bin; Yu, Jinming; Wang, Chengang; Huang, Yong
2016-01-01
Background To map the detailed distribution of metastatic supraclavicular (SCV) lymph nodes (LN) in esophageal cancer (EC) patients and determine the precise radiation therapy clinical target volume (CTV). Methods A total of 101 thoracic esophageal carcinoma patients who experienced SCV LN metastasis after surgery were retrospectively examined. The SCV region was further divided into four subgroups. Using hand-drawn registration, nodes were mapped to a template computed tomogram to provide a visual impression of nodal frequencies and anatomic distribution. Results In all, 158 nodes were considered to be clinically metastatic in the SCV region in the 101 patients, 74 on the left and 84 on the right. Seven of 158 (4.4%) positive LN were located in group I, 78 of 158 (49.37%) in group II, 72 of 158 (45.6%) in group III, and 1 of 158 (0.63%) in group IV. Conclusions According to our results, SCV groups II and III are considered to be the high-risk regions for esophageal squamous cell carcinoma (ESCC) LN metastasis, and these were defined as elective nodal irradiation (ENI) areas. PMID:28066592
周斌全; 胡申江; 鲁端; 王建安
2002-01-01
Objectives: This study was aimed at assessing the value of the adenosine test for noninvasive diagnosis of dual AV nodal physiology (DAVNP) in patients with AV nodal reentrant tachycardia (AVNRT). Methods: 53 patients with paroxysmal supraventricular tachycardia (PSVT) were given incremental doses of adenosine intravenously during sinus rhythm before electrophysiological study (EPS). The adenosine test was repeated on a subset of 18 patients with AVNRT after radiofrequency catheter ablation. Results: Sudden increments of the PR interval of more than 60 msec between two consecutive beats were observed in 26 (83.9%) of 31 patients with typical AVNRT and in 2 (9.1%) of 22 patients with AVRT and AT (P<0.01). The maximal PR increment between two consecutive beats in the AVNRT group (105±45 ms) was significantly greater than that in the AVRT and AT group (20±13 ms) (P<0.01). In the postablation adenosine test, DAVNP was eliminated in all 8 patients who underwent slow-pathway abolition, in whom EPS showed that the slow pathway had disappeared, and in 4 of 10 patients who underwent slow-pathway modification, in whom EPS showed that the slow pathway persisted. Six of the 10 patients who exhibited persistent duality showed a marked reduction in the number of beats conducted over the slow pathway after adenosine injection (P<0.01). Conclusions: Administration of adenosine during sinus rhythm may be a useful bedside test for the diagnosis of DAVNP in a high percentage of patients with typical AVNRT, and additionally for evaluating the effects of radiofrequency ablation.
Automatic symbolic analysis of SC networks using a modified nodal approach
Zivkovic, V.A.; Petkovic, P.M.; Milanovic, D.P.
1998-01-01
This paper presents a symbolic analysis of Switched-Capacitor (SC) circuits in the z-domain using Modified Nodal Approach (MNA). We have selected the MNA method as one of the widely established approaches in circuit analysis. The analyses are performed using SymsimC symbolic simulator which also ena
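The MNA formulation that such symbolic simulators automate can be illustrated with SymPy on a toy resistive circuit (a two-resistor divider driven by a voltage source; the z-domain switched-capacitor machinery of the paper is omitted here, and the circuit is purely illustrative):

```python
import sympy as sp

# Symbolic modified nodal analysis (MNA) of a toy circuit: source Vs
# from ground to node 1, R1 from node 1 to node 2, R2 from node 2 to
# ground. The MNA unknown vector stacks the node voltages and the
# source branch current: [v1, v2, i_s].
Vs, R1, R2 = sp.symbols('Vs R1 R2', positive=True)

G1, G2 = 1 / R1, 1 / R2
# MNA system A * [v1, v2, i_s]^T = b: KCL at nodes 1 and 2, plus the
# source constraint v1 = Vs.
A = sp.Matrix([[G1, -G1, 1],
               [-G1, G1 + G2, 0],
               [1, 0, 0]])
b = sp.Matrix([0, 0, Vs])
sol = A.solve(b)

print(sp.simplify(sol[1]))   # v2 = R2*Vs/(R1 + R2), the divider formula
```

Stamping each component into the MNA matrix this way is what makes the approach easy to automate symbolically, which is the point of the paper's z-domain extension.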
V. Martinez-Quiroga
2014-01-01
System codes along with the necessary nodalizations are valuable tools for thermal-hydraulic safety analysis. Qualifying both codes and nodalizations is an essential step prior to their use in any significant study involving code calculations. Since most existing experimental data come from tests performed on a small scale, any qualification process must therefore address scale considerations. This paper describes the methodology developed at the Technical University of Catalonia in order to contribute to the qualification of Nuclear Power Plant nodalizations by means of scale considerations. The techniques that are presented include the so-called Kv-scaled calculation approach as well as the use of "hybrid nodalizations" and "scaled-up nodalizations." These methods have revealed themselves to be very helpful in producing the required qualification and in promoting further improvements in nodalization. The paper explains both the concepts and the general guidelines of the method, while an accompanying paper will complete the presentation of the methodology as well as show the results of the analysis of scaling discrepancies that appeared during the posttest simulations of PKL-LSTF counterpart tests performed in the PKL-III and ROSA-2 OECD/NEA Projects. Both articles together produce the complete description of the methodology that has been developed in the framework of the use of NPP nodalizations in support of plant operation and control.
Optimal Alternative to the Akima's Method of Smooth Interpolation Applied in Diabetology
Emanuel Paul
2006-12-01
A new method of cubic piecewise smooth interpolation is presented, applied to experimental data obtained from the glycemic profiles of diabetic patients. The method is used to create software for clinical diabetology. It provides an alternative to Akima's procedure for computing the derivatives at the knots [Akima, J. Assoc. Comput. Mach., 1970] and has an optimality property.
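For context, Akima's baseline method (the procedure the paper proposes an alternative to) is available in SciPy. The sketch below applies it to a hypothetical glycemic profile, not data from the paper:

```python
import numpy as np
from scipy.interpolate import Akima1DInterpolator

# Akima interpolation: piecewise-cubic, with knot derivatives chosen
# to suppress the overshoot of ordinary cubic splines. The glycemic
# profile below is hypothetical illustrative data.
hours = np.array([0.0, 2.0, 4.0, 7.0, 10.0, 13.0, 16.0, 20.0, 24.0])
glucose = np.array([95., 140., 110., 100., 150., 115., 105., 130., 98.])  # mg/dL

profile = Akima1DInterpolator(hours, glucose)

t = np.linspace(0.0, 24.0, 97)      # evaluate every 15 minutes
curve = profile(t)
print(float(profile(5.0)))          # interpolated glucose at hour 5
```

An alternative method like the paper's would plug different knot derivatives into the same piecewise-cubic framework.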
New Anti-Nodal Monoclonal Antibodies Targeting the Nodal Pre-Helix Loop Involved in Cripto-1 Binding
Annalia Focà
2015-09-01
Nodal is a potent embryonic morphogen belonging to the TGF-β superfamily. It binds to the ALK4/ActRIIB receptor complex in the presence of the co-receptor Cripto-1. Nodal expression is physiologically restricted to embryonic tissues and human embryonic stem cells, is absent in normal cells, but re-emerges in several human cancers, including melanoma, breast, and colon cancer. Our aim was to obtain mAbs able to recognize Nodal at a major CBR (Cripto-Binding Region) site and to block Cripto-1-mediated signalling. To achieve this, antibodies were raised against hNodal(44–67) and mAbs were generated by hybridoma technology. We have selected one mAb, named 3D1, which strongly associates with full-length rhNodal (KD 1.4 nM) and recognizes the endogenous protein in a panel of human melanoma cell lines by western blot and FACS analyses. 3D1 inhibits Nodal-Cripto-1 binding and blocks Smad2/3 phosphorylation. Data suggest that inhibition of the Nodal-Cripto-1 axis is a valid therapeutic approach against melanoma and that 3D1 is a promising agent for blocking Nodal-Cripto-mediated tumor development. These findings increase the interest in Nodal as both a diagnostic and prognostic marker and as a potential new target for therapeutic intervention.
Analysis of nodal aberration properties in off-axis freeform system design.
Shi, Haodong; Jiang, Huilin; Zhang, Xin; Wang, Chao; Liu, Tao
2016-08-20
Freeform surfaces have the advantage of balancing off-axis aberration. In this paper, based on the framework of nodal aberration theory (NAT) applied to the coaxial system, the third-order astigmatism and coma wave aberration expressions of an off-axis system with Zernike polynomial surfaces are derived. The relationship between the off-axis and surface shape acting on the nodal distributions is revealed. The nodal aberration properties of the off-axis freeform system are analyzed and validated by using full-field displays (FFDs). It has been demonstrated that adding Zernike terms, up to nine, to the off-axis system modifies the nodal locations, but the field dependence of the third-order aberration does not change. On this basis, an off-axis two-mirror freeform system with 500 mm effective focal length (EFL) and 300 mm entrance pupil diameter (EPD) working in long-wave infrared is designed. The field constant aberrations induced by surface tilting are corrected by selecting specific Zernike terms. The design results show that the nodes of third-order astigmatism and coma move back into the field of view (FOV). The modulation transfer function (MTF) curves are above 0.4 at 20 line pairs per millimeter (lp/mm) which meets the infrared reconnaissance requirement. This work provides essential insight and guidance for aberration correction in off-axis freeform system design.
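For orientation, the kind of Zernike departure terms added to the mirror surfaces can be evaluated directly. The two unnormalized polynomial forms below are standard low-order astigmatism and coma terms, and the coefficients are illustrative choices, not the design values from the paper:

```python
import numpy as np

# Low-order Zernike terms (polar form, unit pupil, unnormalized).
def zernike_astig(rho, theta):
    """Primary astigmatism term: rho^2 * cos(2*theta)."""
    return rho**2 * np.cos(2 * theta)

def zernike_coma(rho, theta):
    """Primary coma term: (3*rho^3 - 2*rho) * cos(theta)."""
    return (3 * rho**3 - 2 * rho) * np.cos(theta)

# Freeform departure added to a base mirror: a weighted sum of terms.
# Coefficients (in arbitrary sag units) are illustrative.
rho, theta = np.meshgrid(np.linspace(0, 1, 101),
                         np.linspace(0, 2 * np.pi, 181))
sag = 0.2 * zernike_astig(rho, theta) + 0.05 * zernike_coma(rho, theta)
print(sag.min(), sag.max())  # extent of the surface departure
```

In an NAT-based design flow, choosing which terms to include (and their signs) is what shifts the aberration nodes back into the field of view, as described above.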
Grootendorst, Diederik J.; Fratila, Raluca M.; Visscher, Martijn; Ten Haken, Bennie; van Wezel, Richard; Steenbergen, Wiendelt; Manohar, Srirang; Ruers, Theo J. M.
2013-02-01
Detection of tumor metastases in the lymphatic system is essential for accurate staging of various malignancies; however, fast, accurate and cost-effective intra-operative evaluation of the nodal status remains difficult to perform with commonly available medical imaging techniques. In recent years, numerous studies have confirmed the additional value of superparamagnetic iron oxide dispersions (SPIOs) for nodal staging purposes, prompting the clearance of different SPIO dispersions for clinical practice. We evaluate whether a combination of photoacoustic (PA) imaging and a clinically approved SPIO dispersion could be applied for intra-operative nodal staging. Metastatic adenocarcinoma was inoculated in Copenhagen rats for 5 or 8 days. After SPIO injection, the lymph nodes were photoacoustically imaged both in vivo and ex vivo, after which the imaging results were correlated with MR and histology. Results were compared to a control group without tumor inoculation. In the tumor groups, clear irregularities, as small as 1 mm, were observed in the PA contrast pattern of the nodes, together with a decrease of the PA response. These irregularities could be correlated to the absence of contrast in the MR images and could be linked to metastatic deposits seen in the histological slides. The PA and MR images of the control animals did not show these features. We conclude that the combination of photoacoustic imaging with a clinically approved iron oxide nanoparticle dispersion is able to detect lymph node metastases in an animal model. This approach opens up new possibilities for fast intra-operative nodal staging in a clinical setting.
Yingbing Wang
2013-01-01
Objective. Talc pleurodesis is a common procedure performed to treat complications related to lung cancer. The purpose of our study was to characterize any thoracic nodal findings on FDG PET/CT associated with prior talc pleurodesis. Materials and Methods. The electronic medical record identified 44 patients who underwent PET/CT between January 2006 and December 2010 and had a history of talc pleurodesis. For each exam, we evaluated the distribution pattern, size, and attenuation of intrathoracic lymph nodes and the associated standardized uptake value. Results. High-attenuation intrathoracic lymph nodes were noted in 11 patients (25%), and all had corresponding increased FDG uptake (range 2–9). Involved nodal groups were the anterior peridiaphragmatic (100%), paracardiac (45%), internal mammary (25%), and peri-IVC (18%) nodal stations. Seven of the 11 patients (63%) had involvement of multiple nodal groups. Mean longitudinal PET/CT and standalone CT follow-up of 15±11 months showed persistence of both the high attenuation and the increased uptake at these sites, without an increase in nodal size suggesting metastatic disease involvement. Conclusions. FDG-avid, high-attenuation lymph nodes along the lymphatic drainage pathway of the parietal pleura are a relatively common finding following talc pleurodesis and should not be mistaken for nodal metastases during the evaluation of patients with a history of lung cancer.
Nodal wear model: corrosion in carbon blast furnace hearths
Verdeja, L. F.
2003-06-01
Criteria developed for the Nodal Wear Model (NWM) were applied to estimate the shape of the corrosion profiles that a blast furnace hearth may acquire during its campaign. Taking into account the design of the hearth, the boundary conditions, the characteristics of the refractory materials used and the operating conditions of the blast furnace, simulations of wear profiles with central-well, mushroom and elephant-foot shapes were accomplished. The foundations of the NWM rest on the idea that the corrosion of the refractory is a function of the temperature present at each point (node) of the liquid metal-refractory interface and of the corresponding physical and chemical characteristics of the corrosive fluid.
Metrics for phylogenetic networks II: nodal and triplets metrics.
Cardona, Gabriel; Llabrés, Mercè; Rosselló, Francesc; Valiente, Gabriel
2009-01-01
The assessment of phylogenetic network reconstruction methods requires the ability to compare phylogenetic networks. This is the second in a series of papers devoted to the analysis and comparison of metrics for tree-child time consistent phylogenetic networks on the same set of taxa. In this paper, we generalize to phylogenetic networks two metrics that have already been introduced in the literature for phylogenetic trees: the nodal distance and the triplets distance. We prove that they are metrics on any class of tree-child time consistent phylogenetic networks on the same set of taxa, and we establish some basic properties of them. To prove these results, we introduce a reduction/expansion procedure that can be used not only to establish properties of tree-child time consistent phylogenetic networks by induction, but also to generate all tree-child time consistent phylogenetic networks with a given number of leaves.
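The nodal distance that the paper generalizes to networks can be illustrated on plain rooted trees: for every unordered pair of leaves, compare the path lengths in the two trees and sum the absolute differences. A minimal sketch (the child-to-parent dict encoding and the example trees are illustrative choices of mine, not the paper's):

```python
from itertools import combinations

def path_length(parent, a, b):
    """Number of edges on the path between leaves a and b.
    `parent` maps each node to its parent; the root has no entry."""
    dist = {}          # distance from a to each of its ancestors
    node, d = a, 0
    while node is not None:
        dist[node] = d
        node = parent.get(node)
        d += 1
    node, d = b, 0     # climb from b until the two ancestor chains meet
    while node not in dist:
        node = parent[node]
        d += 1
    return d + dist[node]

def nodal_distance(parent1, parent2, leaves):
    """Sum of |d_T1(a,b) - d_T2(a,b)| over all unordered leaf pairs."""
    return sum(abs(path_length(parent1, a, b) - path_length(parent2, a, b))
               for a, b in combinations(sorted(leaves), 2))

# ((A,B),C) versus ((A,C),B): only the A-B and A-C path lengths differ.
t1 = {"A": "X", "B": "X", "X": "R", "C": "R"}
t2 = {"A": "Y", "C": "Y", "Y": "R", "B": "R"}
print(nodal_distance(t1, t2, ["A", "B", "C"]))  # → 2
```

Identical trees give distance 0, which is the identity property the paper proves in the more general tree-child network setting.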
Embryonic morphogen nodal promotes breast cancer growth and progression.
Daniela F Quail
Breast cancers expressing human embryonic stem cell (hESC)-associated genes are more likely to progress than well-differentiated cancers and are thus associated with poor patient prognosis. Elevated proliferation and evasion of growth control are similarly associated with disease progression, and are classical hallmarks of cancer. In the current study we demonstrate that the hESC-associated factor Nodal promotes breast cancer growth. Specifically, we show that Nodal is elevated in the aggressive MDA-MB-231, MDA-MB-468 and Hs578t human breast cancer cell lines, compared to the poorly aggressive MCF-7 and T47D breast cancer cell lines. Nodal knockdown in aggressive breast cancer cells via shRNA reduces tumour incidence and significantly blunts tumour growth at primary sites. In vitro, using Trypan Blue exclusion assays, Western blot analysis of phosphorylated histone H3 and cleaved caspase-9, and real-time RT-PCR analysis of BAX and BCL2 gene expression, we demonstrate that Nodal promotes expansion of breast cancer cells, likely via a combinatorial mechanism involving increased proliferation and decreased apoptosis. In an experimental model of metastasis using beta-glucuronidase (GUSB)-deficient NOD/SCID/mucopolysaccharidosis type VII (MPS VII) mice, we show that although Nodal is not required for the formation of small (<100 cells) micrometastases at secondary sites, it supports an elevated proliferation:apoptosis ratio (Ki67:TUNEL) in micrometastatic lesions. Indeed, at longer time points (8 weeks), we determined that Nodal is necessary for the subsequent development of macrometastatic lesions. Our findings demonstrate that Nodal supports tumour growth at primary and secondary sites by increasing the ratio of proliferation:apoptosis in breast cancer cells. As Nodal expression is relatively limited to embryonic systems and cancer, this study establishes Nodal as a potential tumour-specific target for the treatment of breast cancer.
周斌全; 胡申江; et al.
2002-01-01
Objectives: This study aimed to assess the value of the adenosine test for noninvasive diagnosis of dual AV nodal physiology (DAVNP) in patients with AV nodal reentrant tachycardia (AVNRT). Methods: 53 patients with paroxysmal supraventricular tachycardia (PSVT) were given incremental doses of adenosine intravenously during sinus rhythm before electrophysiological study (EPS). The adenosine test was repeated in a subset of 18 patients with AVNRT after radiofrequency catheter ablation. Results: Sudden increments of the PR interval of more than 60 msec between two consecutive beats were observed in 26 (83.9%) of 31 patients with typical AVNRT and in 2 (9.1%) of 22 patients with AVRT and AT (P<0.01). The maximal PR increment between two consecutive beats in the AVNRT group (105±45 ms) was significantly greater than that in the AVRT and AT group (20±13 ms) (P<0.01). In the postablation adenosine test, DAVNP was eliminated in all 8 patients who underwent slow pathway abolition, in whom EPS showed that the slow pathway had disappeared, and in 4 of 10 patients who underwent slow pathway modification, in whom EPS showed that the slow pathway persisted. Six of the 10 patients who exhibited persistent duality showed a marked reduction in the number of beats conducted over the slow pathway after adenosine injection (P<0.01). Conclusions: Administration of adenosine during sinus rhythm may be a useful bedside test for the diagnosis of DAVNP in a high percentage of patients with typical AVNRT, and additionally for evaluating the effects of radiofrequency ablation.
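The diagnostic criterion above, a sudden PR-interval increment of more than 60 ms between consecutive beats, is simple to express in code. A minimal sketch with a hypothetical PR series (the function name and beat values are illustrative, not data from the study):

```python
def pr_jumps(pr_ms, threshold=60):
    """Indices of beats whose PR interval exceeds the previous beat's
    by more than `threshold` ms (the sudden-jump criterion for DAVNP)."""
    return [i for i in range(1, len(pr_ms))
            if pr_ms[i] - pr_ms[i - 1] > threshold]

# Hypothetical PR series (ms): an abrupt fast-to-slow pathway switch at beat 4.
beats = [158, 162, 160, 165, 248, 252, 250]
print(pr_jumps(beats))  # → [4]
```

A non-empty result flags the kind of fast-pathway block and slow-pathway takeover the adenosine test is designed to provoke.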
Surian Pinem
2014-01-01
A coupled neutronics/thermal-hydraulics code, NODAL3, has been developed based on the few-group neutron diffusion equation in 3-dimensional geometry for typical PWR static and transient analyses. The spatial variables are treated by using a polynomial nodal method, while for the neutron dynamics solver the adiabatic and improved quasistatic methods are adopted. In this paper we report the benchmark calculation results of the code against the OECD NEACRP PWR rod ejection cases. The objective of this work is to determine the accuracy of the NODAL3 code in analysing the reactivity-initiated accident due to control rod ejection. The NEACRP PWR rod ejection cases were chosen since many organizations participated in the NEA project using various methods and approximations, so that, in addition to the reference solutions, the calculation results of the NODAL3 code can also be compared to other codes' results. The transient parameters to be verified are the time of power peak, power peak, final power, final average Doppler temperature, maximum fuel temperature, and final coolant temperature. The results of the NODAL3 code agree well with the PANTHER reference solutions of 1993 and 1997 (revised). Comparison with other validated codes, DYN3D/R and ANCK, also shows a satisfactory agreement.
Tunable Weyl Points in Periodically Driven Nodal Line Semimetals
Yan, Zhongbo; Wang, Zhong
2016-08-01
Weyl semimetals and nodal line semimetals are characterized by linear band touching at zero-dimensional points and one-dimensional lines, respectively. We predict that a circularly polarized light drives nodal line semimetals into Weyl semimetals. The Floquet Weyl points thus obtained are tunable by the incident light, which enables investigations of them in a highly controllable manner. The transition from nodal line semimetals to Weyl semimetals is accompanied by the emergence of a large and tunable anomalous Hall conductivity. Our predictions are experimentally testable by transport measurement in film samples or by pump-probe angle-resolved photoemission spectroscopy.
An Aural Learning Project: Assimilating Jazz Education Methods for Traditional Applied Pedagogy
Gamso, Nancy M.
2011-01-01
The Aural Learning Project (ALP) was developed to incorporate jazz method components into the author's classical practice and her applied woodwind lesson curriculum. The primary objective was to place a more focused pedagogical emphasis on listening and hearing than is traditionally used in the classical applied curriculum. The components of the…
Esquivies, Luis; Blackler, Alissa; Peran, Macarena; Rodriguez-Esteban, Concepcion; Izpisua Belmonte, Juan Carlos; Booker, Evan; Gray, Peter C.; Ahn, Chihoon; Kwiatkowski, Witek; Choe, Senyon
2014-01-01
Nodal, a member of the TGF-β superfamily, plays an important role in vertebrate and invertebrate early development. The biochemical study of Nodal and its signaling pathway has been a challenge, mainly because of difficulties in producing the protein in sufficient quantities. We have developed a library of stable, chemically refoldable Nodal/BMP2 chimeric ligands (NB2 library). Three chimeras, named NB250, NB260, and NB264, show Nodal-like signaling properties including dependence on the co-receptor Cripto and activation of the Smad2 pathway. NB250, like Nodal, alters heart looping during the establishment of embryonic left-right asymmetry, and both NB250 and NB260, as well as Nodal, induce chondrogenic differentiation of human adipose-derived stem cells. This Nodal-induced differentiation is shown to be more efficient than BMP2-induced differentiation. Interestingly, the crystal structure of NB250 shows a backbone scaffold similar to that of BMP2. Our results show that these chimeric ligands may have therapeutic implications in cartilage injuries. PMID:24311780
Rotational total skin and total nodal radiotherapy in mycosis fungoides
Bamberg, M.; Molls, M.; Langrock, J.; Muskalla, K.; Quast, U.
1987-04-01
The following report describes our technique of rotational total skin radiotherapy with electrons (TSER). We present stage-related treatment results. Furthermore, we communicate our first experience with the combination of TSER and total nodal irradiation (TNI).
Nodal aberration theory for wide-field asymmetric optical systems
Chen, Yang; Cheng, Xuemin; Hao, Qun
2016-10-01
Nodal Aberration Theory (NAT) was used to calculate the zero-field position in the Full Field Display (FFD) for a given aberration term. Aiming at wide-field, non-rotationally symmetric, decentered optical systems, we present the nodal geography behavior of the family of third-order and fifth-order aberrations. We also calculate the wavefront aberration expressions when one optical element in the system, not located at the entrance pupil, is tilted. Using a three-piece cellphone-lens example in the optical design software CodeV, the nodal geography is verified under several situations, and the wavefront aberrations are calculated when the optical element is tilted. The properties of the nodal aberrations are analyzed by using Fringe Zernike coefficients, which are directly related to the wavefront aberration terms and are usually obtained by real ray tracing and wavefront-surface fitting.
Torsionfree Sheaves over a Nodal Curve of Arithmetic Genus One
Usha N Bhosle; Indranil Biswas
2008-02-01
We classify all isomorphism classes of stable torsionfree sheaves on an irreducible nodal curve of arithmetic genus one defined over $\mathbb{C}$. Let $X$ be a nodal curve of arithmetic genus one defined over $\mathbb{R}$, with exactly one node, such that $X$ does not have any real points apart from the node. We classify all isomorphism classes of stable real algebraic torsionfree sheaves over $X$ of even rank. We also classify all isomorphism classes of real algebraic torsionfree sheaves over $X$ of rank one.
Nodal Solutions for a Nonlinear Fourth-Order Eigenvalue Problem
Ru Yun MA; Bevan THOMPSON
2008-01-01
We are concerned with determining the values of λ for which there exist nodal solutions of the fourth-order boundary value problem y'''' = λa(x)f(y), 0 < x < 1, y(0) = y(1) = y''(0) = y''(1) = 0, where a ∈ C([0,1], [0,∞)) and f ∈ C(ℝ) satisfies uf(u) > 0 for all u ≠ 0. We give conditions on the ratio f(s)/s, at infinity and zero, that guarantee the existence of nodal solutions. The proof of our main results is based upon bifurcation techniques.
Power Series Method applied to Inverse Analysis in Chemical Kinetics Problem
Lopez-Sandoval, E.; Mello, A.; Godina-Nava, J. J.; Samana, A. R.
2012-01-01
The power series method has traditionally been used to solve ordinary and partial linear differential equations. However, despite its usefulness, the application of this method has been limited to that particular kind of equation. In this work we use the power series method to solve nonlinear partial differential equations. The method is applied to three versions of nonlinear time-dependent Burgers-type differential equations in order to demonstrate its scope and applicability.
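For the linear case that the abstract contrasts with, the method reduces a differential equation to a recurrence on the series coefficients. A minimal sketch for y' = y, y(0) = 1 (a toy example of my own, not one of the paper's Burgers-type equations): substituting y = Σ aₖxᵏ gives (k+1)aₖ₊₁ = aₖ, hence aₖ = 1/k!.

```python
import math

def series_coeffs_exp(n_terms):
    """Coefficients of the power-series solution of y' = y, y(0) = 1.
    The recurrence (k+1) a_{k+1} = a_k gives a_k = 1/k!."""
    a = [1.0]
    for k in range(n_terms - 1):
        a.append(a[-1] / (k + 1))
    return a

def eval_series(coeffs, x):
    """Evaluate the truncated series sum_k a_k x^k."""
    return sum(c * x**k for k, c in enumerate(coeffs))

a = series_coeffs_exp(15)
print(abs(eval_series(a, 1.0) - math.e) < 1e-9)  # → True
```

The nonlinear case treated in the paper works the same way in spirit, except that the recurrence couples coefficients through products of the unknown series.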
Filho, J. F. P. [Institute de Matematica, Estatistica e Fisica, Universidade Federal do Rio Grande, Av. Italia, s/n, 96203-900 Rio Grande, RS (Brazil); Barichello, L. B. [Institute de Matematica, Universidade Federal do Rio Grande do Sul, Av. Bento Goncalves, 9500, 91509-900 Porto Alegre, RS (Brazil)
2013-07-01
In this work, an analytical discrete ordinates method is used to solve a nodal formulation of a neutron transport problem in x, y-geometry. The proposed approach leads to an important reduction in the order of the associated eigenvalue systems, when combined with the classical level symmetric quadrature scheme. Auxiliary equations are proposed, as usually required for nodal methods, to express the unknown fluxes at the boundary introduced as additional unknowns in the integrated equations. Numerical results, for the problem defined by a two-dimensional region with a spatially constant and isotropically emitting source, are presented and compared with those available in the literature. (authors)
Nodal signalling and asymmetry of the nervous system.
Signore, Iskra A; Palma, Karina; Concha, Miguel L
2016-12-19
The role of Nodal signalling in nervous system asymmetry is still poorly understood. Here, we review and discuss how asymmetric Nodal signalling controls the ontogeny of nervous system asymmetry using a comparative developmental perspective. A detailed analysis of asymmetry in ascidians and fishes reveals a critical context-dependency of Nodal function and emphasizes that bilaterally paired and midline-unpaired structures/organs behave as different entities. We propose a conceptual framework to dissect the developmental function of Nodal as asymmetry inducer and laterality modulator in the nervous system, which can be used to study other types of body and visceral organ asymmetries. Using insights from developmental biology, we also present novel evolutionary hypotheses on how Nodal led the evolution of directional asymmetry in the brain, with a particular focus on the epithalamus. We intend this paper to provide a synthesis on how Nodal signalling controls left-right asymmetry of the nervous system. This article is part of the themed issue 'Provocative questions in left-right asymmetry'.
Formal methods applied to industrial complex systems implementation of the B method
Boulanger, Jean-Louis
2014-01-01
This book presents real-world examples of formal techniques in an industrial context. It covers formal methods such as SCADE and/or the B Method, in various fields such as railways, aeronautics, and the automotive industry. The purpose of this book is to present a summary of experience on the use of "formal methods" (based on formal techniques such as proof, abstract interpretation and model-checking) in industrial examples of complex systems, based on the experience of people currently involved in the creation and assessment of safety critical system software. The involvement of people from
The Renormalization-Group Method Applied to Asymptotic Analysis of Vector Fields
Kunihiro, T
1996-01-01
The renormalization group method of Goldenfeld, Oono and their collaborators is applied to asymptotic analysis of vector fields. The method is formulated on the basis of the theory of envelopes, as was done for scalar fields. This formulation actually completes the discussion of the previous work for scalar equations. It is shown in a generic way that the method applied to equations with a bifurcation leads to the Landau-Stuart and the (time-dependent) Ginzburg-Landau equations. It is confirmed that this method is actually a powerful theory for the reduction of the dynamics, as the reductive perturbation method is. Some examples for ordinary differential equations, such as the forced Duffing, the Lotka-Volterra and the Lorenz equations, are worked out in this method: the time evolution of the solution of the Lotka-Volterra equation is explicitly given, while the center manifolds of the Lorenz equation are constructed in a simple way in the RG method.
Balch, Charles M.; Gershenwald, Jeffrey E.; Soong, Seng-jaw; Thompson, John F.; Ding, Shouluan; Byrd, David R.; Cascinelli, Natale; Cochran, Alistair J.; Coit, Daniel G.; Eggermont, Alexander M.; Johnson, Timothy; Kirkwood, John M.; Leong, Stanley P.; McMasters, Kelly M.; Mihm, Martin C.; Morton, Donald L.; Ross, Merrick I.; Sondak, Vernon K.
2010-01-01
Purpose To determine the survival rates and independent predictors of survival using a contemporary international cohort of patients with stage III melanoma. Patients and Methods Complete clinicopathologic and follow-up data were available for 2,313 patients with stage III disease in an updated and expanded American Joint Committee on Cancer (AJCC) melanoma staging database. Kaplan-Meier and Cox multivariate survival analyses were performed. Results Among all 2,313 patients with stage III disease, 81% had micrometastases, and 19% had clinically detectable macrometastases. The 5-year overall survival was 63%; it was 67% for patients with nodal micrometastases, and it was 43% for those with nodal macrometastases (P < .001). Multivariate analysis demonstrated that in patients with nodal micrometastases, number of tumor-containing lymph nodes, primary tumor thickness, patient age, ulceration, and anatomic site of the primary independently predicted survival (all P < .01). When added to the model, primary tumor mitotic rate was the second-most powerful predictor of survival after the number of tumor-containing nodes. In contrast, for patients with nodal macrometastases, the number of tumor-containing nodes, primary ulceration, and patient age independently predicted survival (P < .01). Conclusion In this multi-institutional analysis, we demonstrated remarkable heterogeneity of prognosis among patients with stage III melanoma, especially among those with nodal micrometastases. These results should be incorporated into the design and interpretation of future clinical trials involving patients with stage III melanoma. PMID:20368546
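The Kaplan-Meier analysis used here is the product-limit estimate of the survival curve: at each observed event time, survival is multiplied by the fraction of at-risk patients who survive that time. A minimal sketch with hypothetical follow-up data (not the study's cohort):

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.
    times: follow-up durations; events: 1 = death observed, 0 = censored."""
    curve = []
    s = 1.0
    for t in sorted({ti for ti, ei in zip(times, events) if ei}):
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei)
        n = sum(1 for ti in times if ti >= t)   # at risk just before t
        s *= 1.0 - d / n
        curve.append((t, s))
    return curve

# Hypothetical follow-up (months); 0 marks a patient censored at that time.
times  = [12, 24, 24, 36, 48]
events = [ 1,  1,  0,  1,  0]
print(kaplan_meier(times, events))  # survival ≈ 0.8, 0.6, 0.3 at 12, 24, 36 mo
```

Censored patients leave the risk set without forcing a drop in the curve, which is what distinguishes this estimator from a naive survival fraction.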
An Empirical Study of Applying Associative Method in College English Vocabulary Learning
Zhang, Min
2014-01-01
Vocabulary is the basis of any language learning. For many Chinese non-English majors it is difficult to memorize English words. This paper applied the associative method in presenting new words to them. It was found that the associative method yielded better results in both short-term and long-term retention of English words. Compared with the…
Extreme Wind Calculation Applying Spectral Correction Method – Test and Validation
Rathmann, Ole Steen; Hansen, Brian Ohrbeck; Larsén, Xiaoli Guo
2016-01-01
We present a test and validation of extreme wind calculation applying the Spectral Correction (SC) method as implemented in a DTU Wind Condition Software. This method can make do with a short-term (~1 year) local measured wind data series in combination with a long-term (10-20 years) reference modelled...
The Wigner method applied to the photodissociation of CH3I
Henriksen, Niels Engholm
1985-01-01
The Wigner method is applied to the Shapiro-Bersohn model of the photodissociation of CH3I. The partial cross sections obtained by this semiclassical method are in very good agreement with results of exact quantum calculations. It is also shown that a harmonic approximation to the vibrational...
Radeva, Veselka S.
Several interactive methods, applied in astronomy education during the creation of a project about a colony in space, are presented. The methods Pyramid, Brainstorm, Snow-slip (Snowball) and Aquarium give schoolchildren the opportunity to understand and absorb a large body of astronomical knowledge.
New large amplitude oscillatory elongation method applied on elastomeric PDMS networks
Bejenariu, Anca Gabriela; Rasmussen, Henrik K.; Skov, Anne Ladegaard;
The reversed deformation measurements give important information about the entropic state of the sample and about the behaviour of the polymer inside it. Even though important studies of stretching methods through rheometry exist [5], to our knowledge this is the first elongational method applied to elastomers for measuring the elastic recovery through oscillations at a constant strain.
Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.
Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel
2015-01-01
A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions.
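Datasheet-based panel models of this kind are typically built on the single-diode equation, which is implicit in the current. A minimal sketch of solving it and locating the maximum power point by a P-V scan (all parameter values are illustrative of a generic 36-cell panel, not taken from the paper, and the scan stands in for the MPPT algorithms being compared):

```python
import math

K_B, Q = 1.380649e-23, 1.602176634e-19  # Boltzmann constant, electron charge

def panel_current(v, iph=8.21, i0=9.8e-8, n=1.3, rs=0.22, rsh=415.0,
                  cells=36, t_kelvin=298.15):
    """Current of a single-diode panel model at terminal voltage v.

    Solves the implicit equation
        I = Iph - I0*(exp((V + I*Rs)/(N*n*Vt)) - 1) - (V + I*Rs)/Rsh
    for I by bisection; the right-hand side is strictly decreasing in I."""
    a = cells * n * K_B * t_kelvin / Q  # modified ideality factor N*n*Vt

    def f(i):
        return (iph - i0 * (math.exp((v + i * rs) / a) - 1.0)
                - (v + i * rs) / rsh - i)

    lo, hi = -2.0, iph + 1.0  # f(lo) > 0 > f(hi) over the 0-22 V range
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

# Scan the P-V curve to locate the maximum power point (MPP).
vs = [0.05 * k for k in range(441)]  # 0 .. 22 V
v_mpp, p_mpp = max(((v, v * panel_current(v)) for v in vs),
                   key=lambda vp: vp[1])
print(v_mpp, p_mpp)  # v_mpp lands near 17 V for these illustrative parameters
```

An MPPT method such as perturb-and-observe would climb this same P-V curve incrementally instead of scanning it, which is what the simulation framework in the paper compares.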
The bi-potential method applied to the modeling of dynamic problems with friction
Feng, Z.-Q.; Joli, P.; Cros, J.-M.; Magnain, B.
2005-10-01
The bi-potential method has been successfully applied to the modeling of frictional contact problems in static cases. This paper presents an extension of this method for dynamic analysis of impact problems with deformable bodies. A first order algorithm is applied to the numerical integration of the time-discretized equation of motion. Using the Object-Oriented Programming (OOP) techniques in C++ and OpenGL graphical support, a finite element code including pre/postprocessor FER/Impact is developed. The numerical results show that, at the present stage of development, this approach is robust and efficient in terms of numerical stability and precision compared with the penalty method.
Shaitelman, Simona F., E-mail: sfshaitelman@mdanderson.org [Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Tereffe, Welela [Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Dogan, Basak E. [Department of Diagnostic Radiology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Hess, Kenneth R. [Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Caudle, Abigail S. [Department of Surgical Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Valero, Vicente [Department of Breast Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Stauder, Michael C. [Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Krishnamurthy, Savitri [Department of Pathology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Candelaria, Rosalind P. [Department of Diagnostic Radiology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Strom, Eric A.; Woodward, Wendy A. [Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Hunt, Kelly K. [Department of Surgical Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Buchholz, Thomas A. [Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Whitman, Gary J. [Department of Diagnostic Radiology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States)
2015-09-01
Purpose: We sought to determine the rate at which regional nodal ultrasonography would increase the nodal disease stage in patients with triple-negative breast cancer (TNBC) beyond the clinical stage determined by physical examination and mammography alone, and significantly affect the treatments delivered to these patients. Methods and Materials: We retrospectively reviewed the charts of women with stages I to III TNBC who underwent physical examination, mammography, breast and regional nodal ultrasonography with needle biopsy of abnormal nodes, and definitive local-regional treatment at our institution between 2004 and 2011. The stages of these patients' disease with and without ultrasonography of the regional nodal basins were compared using the Pearson χ² test. Definitive treatments of patients whose nodal disease was upstaged on the basis of ultrasonographic findings were compared to those of patients whose disease stage remained the same. Results: A total of 572 women met the study requirements. In 111 (19.4%) of these patients, regional nodal ultrasonography with needle biopsy resulted in an increase in disease stage from the original stage by physical examination and mammography alone. Significantly higher percentages of patients whose nodal disease was upstaged by ultrasonographic findings compared to that in patients whose disease was not upstaged underwent neoadjuvant systemic therapy (91.9% and 51.2%, respectively; P<.0001), axillary lymph node dissection (99.1% and 34.5%, respectively; P<.0001), and radiation to the regional nodal basins (88.2% and 29.1%, respectively; P<.0001). Conclusions: Regional nodal ultrasonography in TNBC frequently changes the initial clinical stage and plays an important role in treatment planning.
The genetics of nodal marginal zone lymphoma.
Spina, Valeria; Khiabanian, Hossein; Messina, Monica; Monti, Sara; Cascione, Luciano; Bruscaggin, Alessio; Spaccarotella, Elisa; Holmes, Antony B; Arcaini, Luca; Lucioni, Marco; Tabbò, Fabrizio; Zairis, Sakellarios; Diop, Fary; Cerri, Michaela; Chiaretti, Sabina; Marasca, Roberto; Ponzoni, Maurilio; Deaglio, Silvia; Ramponi, Antonio; Tiacci, Enrico; Pasqualucci, Laura; Paulli, Marco; Falini, Brunangelo; Inghirami, Giorgio; Bertoni, Francesco; Foà, Robin; Rabadan, Raul; Gaidano, Gianluca; Rossi, Davide
2016-09-08
Nodal marginal zone lymphoma (NMZL) is a rare, indolent B-cell tumor that is distinguished from splenic marginal zone lymphoma (SMZL) by the different pattern of dissemination. NMZL still lacks distinct markers and specific cancer gene lesions. By combining whole-exome sequencing, targeted sequencing of tumor-related genes, whole-transcriptome sequencing, and high-resolution single nucleotide polymorphism array analysis, we aimed at disclosing the pathways that are molecularly deregulated in NMZL, and we compare the molecular profile of NMZL with that of SMZL. These analyses identified a distinctive pattern of nonsilent somatic lesions in NMZL. In 35 NMZL patients, 41 genes were found recurrently affected in ≥3 (9%) cases, including highly prevalent molecular lesions of MLL2 (also known as KMT2D; 34%), PTPRD (20%), NOTCH2 (20%), and KLF2 (17%). Mutations of PTPRD, a receptor-type protein tyrosine phosphatase regulating cell growth, were enriched in NMZL across mature B-cell tumors, functionally caused the loss of the phosphatase activity of PTPRD, and were associated with cell-cycle transcriptional program deregulation and increased proliferation index in NMZL. Although NMZL shared with SMZL a common mutation profile, NMZL harbored PTPRD lesions that were otherwise absent in SMZL. Collectively, these findings provide new insights into the genetics of NMZL, identify PTPRD lesions as a novel marker for this lymphoma across mature B-cell tumors, and support the distinction of NMZL as an independent clinicopathologic entity within the current lymphoma classification. © 2016 by The American Society of Hematology.
MICROPROPAGATION OF ADULT TREE OF PTEROCARPUS MARSUPIUM ROXB. USING NODAL EXPLANTS
Shipra JAISWAL; Meena CHOUDHARY; Sarita ARYA; Tarun KANT
2015-01-01
Attempts were made for in vitro propagation of Pterocarpus marsupium Roxb., belonging to family Fabaceae, an economically important multipurpose tree. The tree is reputed for novel antidiabetic properties. The tree shows poor seed germination capacity (30%) due to its hard seed coat, and conventional vegetative regeneration methods are a complete failure. Therefore, the propagation of this tree by tissue culture techniques is an urgent need and well justified. Nodal segments containing axillary bu...
Kevin Casey
2014-03-01
Full Text Available Purpose: To quantify the dosimetric impact of interfractional shoulder motion on targets in the low neck for head and neck patients treated with volumetric modulated arc therapy (VMAT). Methods: Three patients with head and neck cancer were selected. All three required treatment to nodal regions in the low neck in addition to the primary tumor site. The patients were immobilized during simulation and treatment with a custom thermoplastic mask covering the head and shoulders. One VMAT plan was created for each patient utilizing two full 360° arcs, and a second plan was created consisting of two superior VMAT arcs matched to an inferior static AP supraclavicular field. A CT-on-rails alignment verification was performed weekly during each patient's treatment course. The weekly CT images were registered to the simulation CT, and the target contours were deformed and applied to the weekly CT. The two VMAT plans were copied to the weekly CT datasets and recalculated to obtain the dose to the deformed low neck contours. Results: The average observed shoulder position shift in any single dimension relative to simulation was 2.5 mm. The maximum shoulder shift observed in a single dimension was 25.7 mm. Low neck target mean doses, normalized to simulation and averaged across all weekly recalculations, were 0.996, 0.991, and 1.033 (Full VMAT plan) and 0.986, 0.995, and 0.990 (Half-Beam VMAT plan) for the three patients, respectively. The maximum observed deviation in target mean dose for any individual weekly recalculation was 6.5%, occurring with the Full VMAT plan for Patient 3. Conclusion: Interfractional variation in dose to low neck nodal regions was quantified for three head and neck patients treated with VMAT. Mean dose was 3.3% higher than planned for one patient using a Full VMAT plan. A Half-Beam technique is likely a safer choice when treating the supraclavicular region with VMAT.
Active Problem Solving and Applied Research Methods in a Graduate Course on Numerical Methods
Maase, Eric L.; High, Karen A.
2008-01-01
"Chemical Engineering Modeling" is a first-semester graduate course traditionally taught in a lecture format at Oklahoma State University. The course as taught by the author for the past seven years focuses on numerical and mathematical methods as necessary skills for incoming graduate students. Recent changes to the course have included Visual…
Convergence analysis for general linear methods applied to stiff delay differential equations
Anonymous
2002-01-01
For Runge-Kutta methods applied to stiff delay differential equations (DDEs), the concept of D-convergence was proposed, an extension of the concept of B-convergence for ordinary differential equations (ODEs). In this paper, D-convergence of general linear methods is discussed and the previous related results are improved. Some order criteria for determining D-convergence of the methods are obtained.
Ando, Yoshinobu; Eguchi, Yuya; Mizukawa, Makoto
In this research, we proposed and evaluated a management method for college mechatronics education, applying project management to college mechatronics education. We practiced our management method in the seminar "Microcomputer Seminar" for 3rd-grade students who belong to the Department of Electrical Engineering, Shibaura Institute of Technology. We succeeded in the management of the Microcomputer Seminar in 2006 and obtained a good evaluation of our management method by means of a questionnaire.
Herbig, Alexander
2016-02-12
An ab-initio electronic structure method for substitutionally disordered real materials is developed within a pseudopotential density functional theory approach. The method is validated against exact diagonalization and for simple disordered CuZn alloys. The developed method is applied to iron-based superconductors. In particular, band renormalization effects due to various chemical substitutions in BaFe{sub 2}As{sub 2} are investigated and their Cooper pair breaking effects are compared.
Das, T.; Figueira de Morisson Faria, C.
2016-08-01
We analyze the imprint of nodal planes in high-order-harmonic spectra from aligned diatomic molecules in intense laser fields whose components exhibit orthogonal polarizations. We show that the typical suppression in the spectra associated with nodal planes is distorted, and that this distortion can be employed to map the electron's angle of return to its parent ion. This investigation is performed semianalytically at the single-molecule response and single-active orbital level, using the strong-field approximation and the steepest descent method. We show that the velocity form of the dipole operator is superior to the length form in providing information about this distortion. However, both forms introduce artifacts that are absent in the actual momentum-space wave function. Furthermore, elliptically polarized fields lead to larger distortions in comparison to two-color orthogonally polarized fields. These features are investigated in detail for O2, whose highest occupied molecular orbital provides two orthogonal nodal planes.
Cluster detection methods applied to the Upper Cape Cod cancer data
Ozonoff David
2005-09-01
Full Text Available Abstract Background A variety of statistical methods have been suggested to assess the degree and/or the location of spatial clustering of disease cases. However, there is relatively little in the literature devoted to comparison and critique of different methods. Most of the available comparative studies rely on simulated data rather than real data sets. Methods We have chosen three methods currently used for examining spatial disease patterns: the M-statistic of Bonetti and Pagano; the Generalized Additive Model (GAM) method as applied by Webster; and Kulldorff's spatial scan statistic. We apply these statistics to analyze breast cancer data from the Upper Cape Cancer Incidence Study using three different latency assumptions. Results The three different latency assumptions produced three different spatial patterns of cases and controls. For 20-year latency, all three methods generally concur. However, for the 15-year latency and no-latency assumptions, the methods produce different results when testing for global clustering. Conclusion The comparative analysis of real data sets by different statistical methods provides insight into directions for further research. We suggest a research program designed around examining real data sets to guide focused investigation of relevant features using simulated data, for the purpose of understanding how to interpret statistical methods applied to epidemiological data with a spatial component.
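As an illustrative sketch (not the authors' code), the core quantity in Kulldorff's spatial scan statistic under the Bernoulli model is a per-window log-likelihood ratio; the function below is a minimal, self-contained version for a single candidate window:

```python
from math import log

def bernoulli_llr(c: int, n: int, C: int, N: int) -> float:
    """Kulldorff log-likelihood ratio for one scan window (Bernoulli model).

    c: cases inside the window, n: subjects inside the window,
    C: total cases,             N: total subjects.
    Returns 0 unless the case rate inside exceeds the rate outside.
    """
    p = c / n              # rate inside the window
    q = (C - c) / (N - n)  # rate outside the window
    if p <= q:
        return 0.0
    ll_alt = (c * log(p) + (n - c) * log(1 - p)
              + (C - c) * log(q) + ((N - n) - (C - c)) * log(1 - q))
    ll_null = C * log(C / N) + (N - C) * log(1 - C / N)
    return ll_alt - ll_null
```

In practice the statistic is the maximum of this ratio over all candidate circular windows, with significance assessed by Monte Carlo replications under the null.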
Atkins, H. L.; Shu, Chi-Wang
2001-01-01
The explicit stability constraint of the discontinuous Galerkin method applied to the diffusion operator decreases dramatically as the order of the method is increased. Block Jacobi and block Gauss-Seidel preconditioner operators are examined for their effectiveness at accelerating convergence. A Fourier analysis for methods of order 2 through 6 reveals that both preconditioner operators bound the eigenvalues of the discrete spatial operator. Additionally, in one dimension, the eigenvalues are grouped into two or three regions that are invariant with order of the method. Local relaxation methods are constructed that rapidly damp high frequencies for arbitrarily large time step.
Boland, M. R.
2017-07-31
Optimal evaluation and management of the axilla following neoadjuvant chemotherapy (NAC) in patients with node-positive breast cancer remains controversial. The aim of this study was to examine the impact of receptor phenotype in patients with nodal metastases who undergo NAC to see whether this approach can identify those who may be suitable for conservative axillary management. Methods: Between 2009 and 2014, all patients with breast cancer and biopsy-proven nodal disease who received NAC were identified from prospectively developed databases. Details of patients who had axillary lymph node dissection (ALND) following NAC were recorded and rates of pathological complete response (pCR) were evaluated for receptor phenotype.
An uncertainty analysis of the PVT gauging method applied to sub-critical cryogenic propellant tanks
Van Dresar, Neil T. [NASA Glenn Research Center, Cleveland, OH (United States)
2004-08-01
The PVT (pressure, volume, temperature) method of liquid quantity gauging in low-gravity is based on gas law calculations assuming conservation of pressurant gas within the propellant tank and the pressurant supply bottle. There is interest in applying this method to cryogenic propellant tanks since the method requires minimal additional hardware or instrumentation. To use PVT with cryogenic fluids, a non-condensable pressurant gas (helium) is required. With cryogens, there will be a significant amount of propellant vapor mixed with the pressurant gas in the tank ullage. This condition, along with the high sensitivity of propellant vapor pressure to temperature, makes the PVT method susceptible to substantially greater measurement uncertainty than is the case with less volatile propellants. A conventional uncertainty analysis is applied to example cases of liquid hydrogen and liquid oxygen tanks. It appears that the PVT method may be feasible for liquid oxygen. Acceptable accuracy will be more difficult to obtain with liquid hydrogen. (Author)
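The gas-law bookkeeping behind the PVT method can be sketched as follows; the function and the numbers used to exercise it are illustrative assumptions, not values from the study:

```python
R = 8.314  # universal gas constant, J/(mol*K)

def pvt_liquid_volume(v_tank, p_tank, t_ullage, p_vapor, n_helium):
    """Estimate liquid volume from conservation of non-condensable helium.

    v_tank   : total tank volume, m^3
    p_tank   : measured ullage pressure, Pa
    t_ullage : measured ullage temperature, K
    p_vapor  : propellant vapor pressure at t_ullage, Pa (property tables)
    n_helium : moles of helium in the tank ullage (tracked via the bottle)
    """
    p_helium = p_tank - p_vapor                     # helium partial pressure
    v_ullage = n_helium * R * t_ullage / p_helium   # ideal-gas ullage volume
    return v_tank - v_ullage                        # remainder is liquid
```

The sensitivity the abstract warns about is visible here: for hydrogen, a small temperature error shifts `p_vapor` strongly, which corrupts the inferred helium partial pressure and hence the ullage and liquid volumes.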
Tsutsumi, Yasumasa; Nomoto, Takuya; Ikeda, Hiroaki; Machida, Kazushige
2016-12-01
We propose a spectroscopic method to identify the nodal gap structure in unconventional superconductors. This method is best suited for locating the horizontal line node and for pinpointing the isolated point nodes by measuring the polar-angle (θ) resolved zero-energy density of states N(θ), which is measured by specific heat or thermal conductivity at low temperatures under a magnetic field. We examine a variety of uniaxially symmetric nodal structures, including point and/or line nodes with linear and quadratic dispersions, by solving the Eilenberger equation in vortex states. It is found that (a) the maxima of N(θ) continuously shift from the antinodal to the nodal direction (θn) as the field increases, accompanying the oscillation pattern reversal at low and high fields. Furthermore, (b) local minima emerge next to θn on both sides, except for the case of the linear point node. These features are robust and detectable experimentally. Experimental results of N(θ) performed on several superconductors, UPd2Al3, URu2Si2, CuxBi2Se3, and UPt3, are examined and commented on in light of the present theory.
Applying the Activity Based Costing (ABC) Method to Calculate Cost Price in Hospital and Remedy Services
A Dabiri
2012-04-01
Full Text Available Background: Activity Based Costing (ABC) is one of the new methods that began appearing as a costing methodology in the 1990s. It calculates cost price by determining the usage of resources. In this study, the ABC method was used for calculating the cost price of remedial services in hospitals. Methods: To apply the ABC method, Shahid Faghihi Hospital was selected. First, hospital units were divided into three main departments: administrative, diagnostic, and hospitalized. Second, activity centers were defined by the activity analysis method. Third, costs of administrative activity centers were allocated into diagnostic and operational departments based on the cost driver. Finally, with regard to the usage of cost objectives from services of activity centers, the cost price of medical services was calculated. Results: The cost price from the ABC method significantly differs from the tariff method. In addition, the high amount of indirect costs in the hospital indicates that capacities of resources are not used properly. Conclusion: The cost price of remedial services with the tariff method is not properly calculated when compared with the ABC method. ABC calculates cost price by applying suitable mechanisms, but the tariff method is based on fixed prices. In addition, ABC provides useful information about the amount and composition of the cost price of services.
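The allocation arithmetic of ABC can be sketched in a few lines; the activity pools, cost drivers, and figures below are hypothetical, not the hospital's data:

```python
# Hypothetical activity cost pools: name -> (total pool cost, cost driver)
pools = {"admission": (50_000.0, "patients"),
         "laboratory": (120_000.0, "tests")}
# Hypothetical annual driver volumes
drivers = {"patients": 2_000, "tests": 8_000}

# Step 1: activity rate = pool cost / driver volume
rates = {act: cost / drivers[drv] for act, (cost, drv) in pools.items()}

# Step 2: cost price of one service = sum of driver units consumed * rate
consumption = {"admission": 1, "laboratory": 3}  # one illustrative service
cost_price = sum(rates[act] * units for act, units in consumption.items())
```

With these made-up figures the rates come out to 25.0 per patient and 15.0 per test, so the illustrative service costs 70.0; a tariff method would instead charge a fixed price regardless of the resources actually consumed.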
Mark, W. D.
1982-01-01
A transfer function method for predicting the dynamic responses of gear systems with more than one gear mesh is developed and applied to the NASA Lewis four-square gear fatigue test apparatus. Methods for computing bearing-support force spectra and temporal histories of the total force transmitted by a gear mesh, the force transmitted by a single pair of teeth, and the maximum root stress in a single tooth are developed. Dynamic effects arising from other gear meshes in the system are included. A profile modification design method to minimize the vibration excitation arising from a pair of meshing gears is reviewed and extended. Families of tooth loading functions required for such designs are developed and examined for potential excitation of individual tooth vibrations. The profile modification design method is applied to a pair of test gears.
ORBAN Magdalena
2015-06-01
Full Text Available The paper presents a comparative graphical and analytical study concerning the possibility of applying projection-transformation methods (rotation and change of projection planes) for determining the spatial image of some machine parts whose edges or plane faces form imposed angles with the projection planes. An analysis of the relation between the two methods, and between them and the axonometric representation realized by the coordinates method, is also performed, highlighting the advantages presented by each of the considered methods. In both cases the double rotation, respectively the double change of projection planes, will be applied, equivalent to an intuitive axonometric representation which will meet, at the same time, some concrete requirements of a project.
Cunnane, Marybeth; Kyriazidis, Natalia; Kamani, Dipti; Juliano, Amy F; Kelly, Hillary R; Curtin, Hugh D; Barber, Samuel R; Randolph, Gregory W
2016-11-30
To evaluate the effectiveness, reproducibility, and usability of our proposed nodal nomenclature and classification system employed for several years in our high-volume thyroid cancer unit, for the adequate localization and mapping of lymph nodes in thyroid cancer patients with extensive nodal disease. Retrospective review. Thirty-three thyroid cancer patients with extensive nodal disease treated from January 2004 to May 2013 were included in our study. Preoperative ultrasound and computed tomography scans of these patients were reanalyzed by blinded radiologists to investigate the feasibility for the assignment of abnormal lymph nodes to compartments defined in our proposed nodal classification system and to identify areas of difficulty in the assignment. Analysis of nodal localization revealed a discrepancy in compartment agreement between the two radiologists in the assignment of abnormal nodes in nine patients (9/33, 27%). In six patients (6/33, 18%), discrepancy existed in labeling paratracheal and pretracheal nodes. In three patients (3/33, 9%), disagreement arose in the classification of retrocarotid nodes into lateral versus central compartment. A further refinement of the definition of key borderline regions of the pretracheal versus paratracheal and retrocarotid regions of our classification improved the agreement and demonstrated a complete concordance (100%) amongst the reviewing radiologists. The proposed nodal classification system, derived specifically for differentiated thyroid carcinoma, with readily identifiable anatomic boundaries on imaging and at surgery, facilitates communication among multidisciplinary physicians and aids in creating a uniform and reproducible radiographic nodal map to guide surgical therapy. Level of evidence: 4. Laryngoscope, 2016. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.
Dyka, Zoya
2011-01-01
Securing communication channels is especially needed in wireless environments. But applying cipher mechanisms in software is limited by the calculation and energy resources of the mobile devices. If hardware is applied to realize cryptographic operations, cost becomes an issue. In this paper we describe an approach which tackles all three of these points. We implemented a hardware accelerator for polynomial multiplication in extended Galois fields (GF) applying Karatsuba's method iteratively. With this approach the area consumption is reduced to 2.1 mm^2, in comparison to 6.2 mm^2 for the standard application of Karatsuba's method, i.e. for recursive application. Our approach also reduces the energy consumption to 60 per cent of the original approach. The price we have to pay for these achievements is increased execution time. In our implementation a polynomial multiplication takes 3 clock cycles, whereas the recursive Karatsuba approach needs only one clock cycle. But considering area, energy and calculation sp...
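A minimal software sketch of the underlying trick, assuming bitmask-encoded polynomials over GF(2): one Karatsuba step replaces four half-width multiplications with three, and all additions/subtractions become XORs. This illustrates the three-multiplication identity, not the paper's iterative hardware design:

```python
def clmul(a: int, b: int) -> int:
    """Schoolbook carry-less multiplication of GF(2)[x] polynomials
    encoded as bitmasks (bit i = coefficient of x^i)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def karatsuba_gf2(a: int, b: int, w: int) -> int:
    """One Karatsuba step for w-bit GF(2) polynomials: three half-width
    multiplications instead of four; additions are XORs."""
    h = w // 2
    mask = (1 << h) - 1
    a0, a1 = a & mask, a >> h
    b0, b1 = b & mask, b >> h
    p0 = clmul(a0, b0)            # low-half product
    p2 = clmul(a1, b1)            # high-half product
    p1 = clmul(a0 ^ a1, b0 ^ b1)  # combined term: the third multiply
    return p0 ^ ((p1 ^ p0 ^ p2) << h) ^ (p2 << 2 * h)
```

Applied recursively this yields the classical sub-quadratic multiplier; the paper's point is that applying the step iteratively instead trades some speed for a much smaller circuit.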
Functional mathematical model of dual pathway AV nodal conduction.
Climent, A M; Guillem, M S; Zhang, Y; Millet, J; Mazgalev, T N
2011-04-01
Dual atrioventricular (AV) nodal pathway physiology is described as two different wave fronts that propagate from the atria to the His bundle: one with a longer effective refractory period [fast pathway (FP)] and a second with a shorter effective refractory period [slow pathway (SP)]. By using His electrogram alternance, we have developed a mathematical model of AV conduction that incorporates dual AV nodal pathway physiology. Experiments were performed on five rabbit atrial-AV nodal preparations to develop and test the presented model. His electrogram alternances from the inferior margin of the His bundle were used to identify fast and slow wave front propagations. The ability to predict AV conduction time and the interaction between FP and SP wave fronts have been analyzed during regular and irregular atrial rhythms (e.g., atrial fibrillation). In addition, the role of dual AV nodal pathway wave fronts in the generation of Wenckebach periodicities has been illustrated. Finally, AV node ablative modifications have been evaluated. The model accurately reproduced interactions between FP and SP during regular and irregular atrial pacing protocols. In all experiments, specificity and sensitivity higher than 85% were obtained in the prediction of the pathway responsible for conduction. It has been shown that, during atrial fibrillation, SP ablation significantly increased the mean HH interval (204 ± 39 vs. 274 ± 50 ms). The presented model captures essential AV node mechanisms and should be considered a step forward in the studies of AV nodal conduction.
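A toy version of the dual-pathway selection rule can be sketched as below; the refractory periods and conduction times are illustrative placeholders, not the fitted rabbit-AV-node parameters from the study:

```python
def av_conduction(aa_interval, fp_erp=160.0, sp_erp=120.0,
                  fp_time=80.0, sp_time=180.0):
    """Choose the conducting pathway for one atrial beat (times in ms).

    The fast pathway (FP) conducts quickly but has the longer effective
    refractory period (ERP); the slow pathway (SP) takes over at shorter
    atrial intervals; below both ERPs the beat blocks (Wenckebach pause).
    """
    if aa_interval >= fp_erp:
        return "FP", fp_time
    if aa_interval >= sp_erp:
        return "SP", sp_time
    return "block", None
```

Sweeping `aa_interval` downward reproduces the qualitative FP-to-SP-to-block sequence of a Wenckebach cycle, though the real model also tracks refractoriness beat to beat.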
AV nodal dual pathway electrophysiology and Wenckebach periodicity.
Zhang, Youhua; Mazgalev, Todor N
2011-11-01
The precise mechanism(s) governing the phenomenon of AV nodal Wenckebach periodicity is not fully elucidated. Currently 2 hypotheses, the decremental conduction and the Rosenbluethian step-delay, are most frequently used. We have provided new evidence that, in addition, dual pathway (DPW) electrophysiology is directly involved in the manifestation of AV nodal Wenckebach phenomenon. AV nodal cellular action potentials (APs) were recorded from 6 rabbit AV node preparations during standard A1A2 and incremental pacing protocols. His electrogram alternans, a validated index of DPW electrophysiology, was used to monitor fast (FP) and slow (SP) pathway conduction. The data were collected in intact AV nodes, as well as after SP ablation. In all studied hearts the Wenckebach cycle started with FP propagation, followed by transition to SP until its ultimate block. During this process complex cellular APs were observed, with decremental foot formations reflecting the fading FP and second depolarizations produced by the SP. In addition, the AV node cells exhibited a progressive loss in maximal diastolic membrane potential (MDP) due to incomplete repolarization. The pause created with the blocked Wenckebach beat was associated with restoration of MDP and reinitiation of the conduction cycle via the FP wavefront. DPW electrophysiology is dynamically involved in the development of AV nodal Wenckebach periodicity. In the intact AV node, the cycle starts with FP that is progressively weakened and then replaced by SP propagation, until block occurs. AV nodal SP modification did not eliminate Wenckebach periodicity but strongly affected its paradigm. © 2011 Wiley Periodicals, Inc.
Several methods applied to measuring residual stress in a known specimen
Prime, M.B.; Rangaswamy, P.; Daymond, M.R.; Abelin, T.G.
1998-09-01
In this study, a beam with a precisely known residual stress distribution provided a unique experimental opportunity. A plastically bent beam was carefully prepared in order to provide a specimen with a known residual stress profile. 21Cr-6Ni-9Mn austenitic stainless steel was obtained as 43 mm square forged stock. Several methods were used to determine the residual stresses, and the results were compared to the known values. Some subtleties of applying the various methods were exposed.
Applying systems-centered theory (SCT) and methods in organizational contexts: putting SCT to work.
Gantt, Susan P
2013-04-01
Though initially applied in psychotherapy, a theory of living human systems (TLHS) and its systems-centered practice (SCT) offer a comprehensive conceptual framework replete with operational definitions and methods that is applicable in a wide range of contexts. This article elaborates the application of SCT in organizations by first summarizing systems-centered theory, its constructs and methods, and then using case examples to illustrate how SCT has been used in organizational and coaching contexts.
Cox James D
2009-01-01
Full Text Available Abstract Background Controversy still exists regarding the long-term outcome of patients whose uninvolved lymph node stations are not prophylactically irradiated for non-small cell lung cancer (NSCLC) treated with definitive radiotherapy. To determine the frequency of elective nodal failure (ENF) and in-field failure (IFF), we examined a large cohort of patients with NSCLC staged with positron emission tomography (PET)/computed tomography (CT) and treated with 3-dimensional conformal radiotherapy (3D-CRT) that excluded uninvolved lymph node stations. Methods We retrospectively reviewed the records of 115 patients with non-small cell lung cancer treated at our institution with definitive radiation therapy with or without concurrent chemotherapy (CHT). All patients were treated with 3D-CRT, including nodal regions determined by CT or PET to be disease involved. Concurrent platinum-based CHT was administered for locally advanced disease. Patients were analyzed in follow-up for survival, local regional recurrence, and distant metastases (DM). Results The median follow-up time was 18 months (3 to 44 months) among all patients and 27 months (6 to 44 months) among survivors. The median overall survival, 2-year actuarial overall survival, and disease-free survival were 19 months, 38%, and 28%, respectively. The majority of patients died from DM, the overall rate of which was 36%. Of the 31 patients with local regional failure, 26 (22.6%) had IFF, 5 (4.3%) had ENF, and 2 (1.7%) had isolated ENF. For 88 patients with stage IIIA/B, the frequencies of IFF, any ENF, isolated ENF, and DM were 23 (26%), 3 (9%), 1 (1.1%), and 36 (40.9%), respectively. The comparable rates for the 22 patients with early stage node-negative disease (stage IA/IB) were 3 (13.6%), 1 (4.5%), 0 (0%), and 5 (22.7%), respectively. Conclusion We observed only a 4.3% recurrence of any ENF and a 1.7% recurrence of isolated ENF in patients with NSCLC treated with definitive 3D-CRT without prophylactic irradiation of
The Effect Of The Applied Performance Methods On The Objective Of The Managers
Derya Kara
2009-09-01
Full Text Available Within the changing management concept, employees and employers constantly feel the need to keep up with the changing environment. In this regard, performance evaluation activities are regarded as an indispensable element. Data obtained from the results of performance evaluation activities shed light on the development of employees and enable enterprises to stand their ground in a fiercely competitive environment. This study sets out to find the effect of the applied performance methods on the objectives of managers. The population of the study comprises 182 five-star hotel enterprises operating in Antalya, İzmir and Muğla, with 2184 managers. The sample comprised 578 managers. The results of the study suggest that the applied performance methods have a significant effect on the objectives of managers. The objective of managers applying the 360-degree performance evaluation method was found to be "finding out the training and development needs", while the objective of managers applying conventional performance evaluation methods was found to be "enhancing the existing performance".
The POWHEG method applied to top pair production and decays at the ILC
Latunde-Dada, Oluseyi
2008-01-01
We study the effects of gluon radiation in top pair production and decay for e+e- annihilation at the ILC. To achieve this we apply the POWHEG method and interface our results to the Monte Carlo event generator Herwig++. We consider a center-of-mass energy of 500 GeV and compare decay correlations and bottom quark distributions before hadronization.
FU Jie; ZHOU Jia-yin; Vincent FH CHONG; James BK Khoo
2013-01-01
Background Elective radiation of the lower neck is controversial for nasopharyngeal carcinoma (NPC) without lymph node metastasis (N0 disease). Tumor volume is an important prognostic indicator. The objective of this study is to explore the potential impact of tumor volume on the indication of lower neck irradiation for N0 NPC, by a qualitative evaluation of the relationship between tumor volume and nodal metastasis. Methods Magnetic resonance (MR) images of 99 consecutive patients with NPC who underwent treatment were retrospectively reviewed. Primary tumor volumes of NPC were semi-automatically measured, nodal metastases were N-classified, and neck level involvements were examined. Distributions of tumor volumes among N-category-based groups and distributions of N-categories among tumor volume-based groups were analyzed, respectively. Results The numbers of patients with N0 to N3 disease were 12, 39, 32, and 16, respectively. The volumes of primary tumor ranged from 3.3 to 89.6 ml, with a median of 17.1 ml. For patients with nodal metastasis, tumor volume did not increase significantly with the advancing of N-category (P > 0.05). No significant difference was found for the distribution of N1, N2, and N3 categories among tumor volume-based groups (P > 0.05). Nevertheless, patients with nodal metastasis had significantly larger tumor volumes than those without metastasis (P < 0.05). Patients with larger tumor volumes were associated with an increased incidence of nodal metastasis. Conclusions Certain positive correlations existed between tumor volume and the presence of nodal metastasis. Tumor volume (>10 ml) is a potential indicator for lower neck irradiation for N0 NPC.
H Narendra
2010-01-01
Full Text Available Context: The pattern of nodal spread in oral cancers is largely predictable, and treatment of the neck can be tailored with this knowledge. Most studies available on the pattern are from the western world and for early cancers of the tongue and floor of the mouth. Aims: The present study was aimed to evaluate the prevalence and pattern of nodal metastasis in patients with pathologic T4 (pT4) buccal/alveolar cancers. Settings and Design: Medical records of the patients with pT4 primary buccal and alveolar squamous cell carcinomas treated by single-stage resection of the primary tumor and neck dissection at Gujarat Cancer and Research Institute (GCRI), Ahmedabad, a regional cancer center in India, during September 2004 to August 2006, were analyzed for nodal involvement. Materials and Methods: The study included 127 patients with pT4 buccal/alveolar cancer. Data pertaining to clinical nodal status, histologic grade, pT and pN status (TNM classification of malignant tumors, UICC, 6th edition, 2002), total number of nodes removed and those involved by tumor, and levels of nodal involvement were recorded. Statistical analysis was performed using the Chi-square test. Results: Fifty percent of the patients did not have nodal metastasis on final histopathology. The occult metastasis rate was 23%. All of these occurred in levels I to III. Among those with clinically palpable nodes, level V involvement was seen only in 4% of the patients with pT4 buccal cancer and 3% of the patients with alveolar cancer. Conclusions: Elective treatment of the neck in the form of selective neck dissection of levels I to III is needed for T4 cancers of the gingivobuccal complex due to a high rate of occult metastasis. Selected patients with clinically involved nodes could be well served by a selective neck dissection incorporating levels I to III or IV.
Vergeer, M. R.; Doornaert, P. A. H.; de Bree, R.; Leemans, C. R.; Slotman, B. J.; Langendijk, J. A.
2011-01-01
Background: This study describes the results of elective irradiation in the N0 neck and tries to identify prognostic factors for regional recurrence. Materials and methods: Between 1985 and 2000, 785 cN0 or pN0 necks were treated with elective nodal irradiation in 619 head and neck squamous cell
A nodal collocation approximation for the multidimensional P{sub L} equations
Capilla, M.; Talavera, C.F.; Ginestar, D. [Departamento de Matematica Aplicada. Universidad Politecnica de Valencia. Camino de Vera 14. E-46022 Valencia (Spain); Verdu, G. [Departamento de Ingenieria Quimica y Nuclear. Universidad Politecnica de Valencia. Camino de Vera 14, E-46022 Valencia (Spain)
2008-07-01
We develop a nodal collocation method for the P{sub L} equations, focusing on the eigenvalue problem known as the Lambda Modes transport problem. This method approximates the initial differential eigenvalue problem by a generalized algebraic eigenvalue problem, from which the k-effective and the stationary neutron flux distribution of the system can be computed; the subcritical eigenvalues and their corresponding eigenmodes can also be obtained. The method presented here generalizes the method for 1D geometries presented in a previous work so as to treat multidimensional problems. (authors)
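The final algebraic step described here, extracting k-effective and the stationary flux from a generalized eigenvalue problem, can be sketched with a two-group toy model; the operator entries below are illustrative cross-section numbers, not values from the paper:

```python
import numpy as np

# Toy two-group operators: L collects removal and down-scattering,
# F the fission source. The discrete problem is L*phi = (1/k) * F*phi.
L = np.array([[ 0.03, 0.00],    # fast-group removal
              [-0.02, 0.08]])   # down-scattering feeds the thermal group
F = np.array([[0.007, 0.14],    # fissions produce fast neutrons only
              [0.000, 0.00]])

# F*phi = k*L*phi  =>  ordinary eigenproblem for inv(L)*F
vals, vecs = np.linalg.eig(np.linalg.solve(L, F))
k_eff = vals.real.max()                 # dominant eigenvalue: k-effective
phi = vecs[:, vals.real.argmax()].real
phi = phi / phi.sum()                   # normalized stationary flux
```

The subcritical eigenvalues mentioned in the abstract are simply the remaining, smaller eigenvalues of the same algebraic problem, with their eigenvectors as the corresponding modes.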
A computational study of nodal-based tetrahedral element behavior.
Gullerud, Arne S.
2010-09-01
This report explores the behavior of nodal-based tetrahedral elements on six sample problems, and compares their solution to that of a corresponding hexahedral mesh. The problems demonstrate that while certain aspects of the solution field for the nodal-based tetrahedrons provide good quality results, the pressure field tends to be of poor quality. Results appear to be strongly affected by the connectivity of the tetrahedral elements. Simulations that rely on the pressure field, such as those which use material models that are dependent on the pressure (e.g. equation-of-state models), can generate erroneous results. Remeshing can also be strongly affected by these issues. The nodal-based test elements as they currently stand need to be used with caution to ensure that their numerical deficiencies do not adversely affect critical values of interest.
Distant nodal metastasis: is it always an unresectable disease?
Celotti, Andrea; Molfino, Sarah; Baggi, Paolo; Tarasconi, Antonio; Baronio, Gianluca; Arru, Luca; Gheza, Federico; Tiberio, Guido; Portolani, Nazario
2017-01-01
This article analyzes the published literature concerning the treatment of patients with gastric cancer and distant nodal metastases, currently considered metastatic disease. A systematic search was undertaken using the Medline, Embase, Cochrane and Web-of-Science libraries. No restriction on year of publication was used; preference was given to English papers. Both clinical series and literature reviews were selected. Only 11 papers address the issue of surgery for nodal basins outside the D2 dissection area. These papers suggest that, in selected cases, extended surgery may prove useful in prolonging survival when a comprehensive therapeutic pathway including chemotherapy is scheduled. In conclusion, in the presence of nodal metastases outside the loco-regional nodes, surgery may be considered for metastatic nodes in stations 13 and 16 in selected cases. PMID:28217751
Final Trial Report of Sentinel-Node Biopsy versus Nodal Observation in Melanoma
Morton, D.L.; Thompson, J.F.; Cochran, A.J.; Mozzillo, N.; Nieweg, O.E.; Roses, D.F.; Hoekstra, H.J.; Karakousis, C.P.; Puleo, C.A.; Coventry, B.J.; Kashani-Sabet, M.; Smithers, B.M.; Paul, E.; Kraybill, W.G.; McKinnon, J.G.; Wang, H.-J.; Elashoff, R.; Faries, M.B.
2014-01-01
Background Sentinel-node biopsy, a minimally invasive procedure for regional melanoma staging, was evaluated in a phase 3 trial. Methods We evaluated outcomes in 2001 patients with primary cutaneous melanomas randomly assigned to undergo wide excision and nodal observation, with lymphadenectomy for nodal relapse (observation group), or wide excision and sentinel-node biopsy, with immediate lymphadenectomy for nodal metastases detected on biopsy (biopsy group). Results No significant treatment-related difference in the 10-year melanoma-specific survival rate was seen in the overall study population (20.8% with and 79.2% without nodal metastases). Mean (±SE) 10-year disease-free survival rates were significantly improved in the biopsy group, as compared with the observation group, among patients with intermediate-thickness melanomas, defined as 1.20 to 3.50 mm (71.3±1.8% vs. 64.7±2.3%; hazard ratio for recurrence or metastasis, 0.76; P = 0.01), and those with thick melanomas, defined as >3.50 mm (50.7±4.0% vs. 40.5±4.7%; hazard ratio, 0.70; P = 0.03). Among patients with intermediate-thickness melanomas, the 10-year melanoma-specific survival rate was 62.1±4.8% among those with metastasis versus 85.1±1.5% for those without metastasis (hazard ratio for death from melanoma, 3.09; P<0.001); among patients with thick melanomas, the respective rates were 48.0±7.0% and 64.6±4.9% (hazard ratio, 1.75; P = 0.03). Biopsy-based management improved the 10-year rate of distant disease–free survival (hazard ratio for distant metastasis, 0.62; P = 0.02) and the 10-year rate of melanoma-specific survival (hazard ratio for death from melanoma, 0.56; P = 0.006) for patients with intermediate-thickness melanomas and nodal metastases. Accelerated-failure-time latent-subgroup analysis was performed to account for the fact that nodal status was initially known only in the biopsy group, and a significant treatment benefit persisted. Conclusions Biopsy-based staging of
March, M C; Feroz, F; Hobson, M P
2012-01-01
We present a comparison of two methods for cosmological parameter inference from supernovae Ia lightcurves fitted with the SALT2 technique. The standard chi-square methodology and the recently proposed Bayesian hierarchical method (BHM) are each applied to identical sets of simulations based on the 3-year data release from the Supernova Legacy Survey (SNLS3), and also data from the Sloan Digital Sky Survey (SDSS), the Low Redshift sample and the Hubble Space Telescope (HST), assuming a concordance LCDM cosmology. For both methods, we find that the recovered values of the cosmological parameters, and the global nuisance parameters controlling the stretch and colour corrections to the supernovae lightcurves, suffer from small biases. The magnitude of the biases is similar in both cases, with the BHM yielding slightly more accurate results, in particular for cosmological parameters when applied to just the SNLS3 single survey data sets. Most notably, in this case, the biases in the recovered matter density $\\...
Roca, Vidal; Kretz, Tobias; Lehmann, Karsten; Hofsäß, Ingmar
2014-01-01
Applying assignment methods to compute user-equilibrium route choice is very common in traffic planning. It is common sense that vehicular traffic arranges itself in a user-equilibrium based on generalized costs in which travel time is a major factor. Surprisingly, travel time has not received much attention in the route choice of pedestrians. In microscopic simulations of pedestrians, the vastly dominating paradigm for the computation of the preferred walking direction is to set it into the direction of the (spatially) shortest path. For situations where pedestrians have travel time as the primary determinant of their walking behavior, it would be desirable to also have an assignment method in pedestrian simulations. To apply existing (road traffic) assignment methods to simulations of pedestrians, one has to reduce the nondenumerably many possible pedestrian trajectories to a small subset of routes which represent the main, relevant, and significantly distinguished routing alternatives. All except one of these routes will m...
Applying the Support Vector Machine Method to Matching IRAS and SDSS Catalogues
Chen Cao
2007-10-01
This paper presents results of applying a machine learning technique, the Support Vector Machine (SVM), to the astronomical problem of matching the Infra-Red Astronomical Satellite (IRAS) and Sloan Digital Sky Survey (SDSS) object catalogues. In this study, the IRAS catalogue has much larger positional uncertainties than those of the SDSS. A model was constructed by applying the supervised learning algorithm (SVM) to a set of training data. Validation of the model shows a good identification performance (∼90% correct), better than that derived from classical cross-matching algorithms, such as the likelihood-ratio method used in previous studies.
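As a rough illustration of the idea (not the authors' actual pipeline, which would use a full SVM implementation and real IRAS/SDSS features), the sketch below trains a minimal linear SVM by sub-gradient descent on the hinge loss, with invented match/non-match feature pairs such as scaled positional separation and magnitude difference:

```python
# Toy cross-match features: (scaled positional separation, scaled magnitude
# difference). Values and scaling are illustrative, not IRAS/SDSS data.
MATCHES  = [(0.02, 0.10), (0.05, 0.20), (0.08, 0.05), (0.03, 0.30)]   # label +1
SPURIOUS = [(0.70, 0.90), (0.80, 0.50), (0.60, 1.20), (0.90, 0.80)]   # label -1
X = MATCHES + SPURIOUS
y = [1] * len(MATCHES) + [-1] * len(SPURIOUS)

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Linear SVM trained by sub-gradient descent on the L2-regularized hinge loss."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:   # hinge active: push the separating plane away from xi
                w = [wj + lr * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:            # only the L2 regularizer contributes
                w = [wj * (1 - lr * lam) for wj in w]
    return w, b

def predict(w, b, x):
    """+1 = true match, -1 = spurious pairing."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

w, b = train_linear_svm(X, y)
```

In practice a kernelized SVM from a standard library would be used, with validation against a likelihood-ratio baseline as in the study.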
Low, Jennifer E.; Whitfield Aslund, Melissa L. [Department of Chemistry and Chemical Engineering, Royal Military College of Canada, PO Box 17000 Station Forces, Kingston, ON, K7K 7B4 (Canada); Rutter, Allison [School of Environmental Studies, Rm 0626 Biosciences Complex, Queen's University, 116 Barrie St., Kingston, ON, K7L 3N6 (Canada); Zeeb, Barbara A., E-mail: zeeb-b@rmc.ca [Department of Chemistry and Chemical Engineering, Royal Military College of Canada, PO Box 17000 Station Forces, Kingston, ON, K7K 7B4 (Canada)
2011-03-15
Two cultivation techniques ((i) pruning and (ii) nodal adventitious root encouragement) were investigated for their ability to increase PCB phytoextraction by Cucurbita pepo ssp pepo cv. Howden (pumpkin) plants in situ at a contaminated industrial site in Ontario (Aroclor 1248, mean soil [PCB] = 5.6 µg/g). Pruning was implemented to increase plant biomass close to the root, where PCB concentration is known to be highest. This treatment was found to have no effect on final shoot biomass or PCB concentration; however, material pruned from the plant is not included in the final shoot biomass. The encouragement of nodal adventitious roots at stem nodes did significantly increase the PCB concentration in the primary stem, while not affecting shoot biomass. Both techniques are easily applied cultivation practices that may be implemented to decrease phytoextraction treatment time. - Research highlights: > The presence of nodal adventitious roots does increase phytoextraction efficiency. > Pruning may increase the biomass of pumpkin plants during phytoextraction. > [Aroclor 1248] decreases in plant tissue with increasing distance from the root. - The application of cultivation practices (pruning and nodal adventitious root encouragement) increases phytoextraction of PCBs in C. pepo.
J. Freixa
2012-01-01
Experimental results obtained at integral test facilities (ITFs) are used in the validation process of system codes for the transient analyses of light water reactors (LWRs). The expertise and guidelines derived from this work are later applied to transient analyses of nuclear power plants (NPPs). However, the boundary conditions at the NPPs will always differ from those at the ITF, and hence the soundness of the ITF model needs to be maximized. An unaltered ITF nodalization should prove able to simulate as many tests as possible before any conclusion is carried over to NPP analyses. The STARS group at the Paul Scherrer Institut (PSI) actively participates in several international programs where ITFs are being used (e.g., ROSA, PKL). Several tests carried out at the ROSA large-scale test facility operated by the Japan Atomic Energy Agency (JAEA) have been simulated in recent years by using the United States Nuclear Regulatory Commission (US-NRC) system code TRACE. In this paper, 5 different posttest analyses are presented, along with the evolution of the employed TRACE nodalization and the process followed to track the consistency of the nodalization modifications. The ROSA TRACE nodalization provided results in reasonable agreement with all 5 experiments.
A.L.C. Darini
1999-09-01
In order to evaluate the resolving power of several typing methods to identify relatedness among Brazilian strains of Enterobacter cloacae, we selected twenty isolates from different patients on three wards of a University Hospital (Orthopedics, Nephrology, and Hematology). Traditional phenotyping methods applied to isolates included biotyping, antibiotic sensitivity, phage-typing, and O-serotyping. Plasmid profile analysis, ribotyping, and macrorestriction analysis by pulsed-field gel electrophoresis (PFGE) were used as genotyping methods. Sero- and phage-typing were not useful, since the majority of isolates could not be subtyped by these methods. Biotyping, antibiogram and plasmid profile classified the samples into different groups depending on the method used, and consequently were not reliable. Ribotyping and PFGE were significantly correlated with the clinical epidemiological analysis. PFGE did not type strains containing nonspecific DNase. Ribotyping was the most discriminative method for typing Brazilian isolates of E. cloacae.
Applying Possibility Degree Method for Ranking Interval Numbers to Partnership Selection
LI Tong; ZHANG Qiang; ZHENG Tao
2008-01-01
A new interval number ranking approach is applied for the assessment of priorities of alternative partners, where the attribute values are given as interval numbers while the weight of each criterion is still an exact numerical value. After aggregating with the weighted arithmetic averaging operator, the result is still in the form of an interval number. To obtain the priorities of alternative partners, we use the possibility degree method for ranking interval numbers, which can derive priorities from inconsistent attribute values, thus eliminating the need to adjust the inconsistent attribute values. Moreover, this method is very simple and needs little calculation. An illustrative example demonstrates the method.
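A minimal sketch of this workflow, assuming the commonly used possibility-degree formula P(a ≥ b) = clamp((a⁺ − b⁻) / (len(a) + len(b)), 0, 1) and invented interval data (the paper's actual example values are not reproduced here):

```python
def possibility_degree(a, b):
    """P(a >= b) for closed intervals a = (aL, aU), b = (bL, bU)."""
    aL, aU = a
    bL, bU = b
    la, lb = aU - aL, bU - bL
    if la + lb == 0:                  # both degenerate: compare as exact numbers
        return 1.0 if aL > bL else (0.5 if aL == bL else 0.0)
    return min(max((aU - bL) / (la + lb), 0.0), 1.0)

def weighted_interval_score(intervals, weights):
    """Weighted arithmetic average of intervals yields another interval."""
    lo = sum(w * i[0] for w, i in zip(weights, intervals))
    hi = sum(w * i[1] for w, i in zip(weights, intervals))
    return (lo, hi)

def rank(alternatives):
    """Rank alternatives (name -> aggregated interval) by row sums of the
    pairwise possibility matrix, highest first."""
    names = list(alternatives)
    score = {n: sum(possibility_degree(alternatives[n], alternatives[m])
                    for m in names if m != n) for n in names}
    return sorted(names, key=lambda n: -score[n])
```

For example, with aggregated intervals A = (0.6, 0.8), B = (0.3, 0.5), C = (0.4, 0.7), the pairwise possibility degrees yield the ordering A, C, B.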
Adkison, Jarrod B.; McHaffie, Derek R.; Bentzen, Soren M.; Patel, Rakesh R.; Khuntia, Deepak [Department of Human Oncology, University of Wisconsin Carbone Cancer Center, School of Medicine and Public Health, Madison, WI (United States); Petereit, Daniel G. [Department of Radiation Oncology, John T. Vucurevich Regional Cancer Care Institute, Rapid City Regional Hospital, Rapid City, SD (United States); Hong, Theodore S.; Tome, Wolfgang [Department of Human Oncology, University of Wisconsin Carbone Cancer Center, School of Medicine and Public Health, Madison, WI (United States); Ritter, Mark A., E-mail: ritter@humonc.wisc.edu [Department of Human Oncology, University of Wisconsin Carbone Cancer Center, School of Medicine and Public Health, Madison, WI (United States)
2012-01-01
Purpose: Toxicity concerns have limited pelvic nodal prescriptions to doses that may be suboptimal for controlling microscopic disease. In a prospective trial, we tested whether image-guided intensity-modulated radiation therapy (IMRT) can safely deliver escalated nodal doses while treating the prostate with hypofractionated radiotherapy in 5½ weeks. Methods and Materials: Pelvic nodal and prostatic image-guided IMRT was delivered to 53 National Comprehensive Cancer Network (NCCN) high-risk patients to a nodal dose of 56 Gy in 2-Gy fractions with concomitant treatment of the prostate to 70 Gy in 28 fractions of 2.5 Gy, and 50 of 53 patients received androgen deprivation for a median duration of 12 months. Results: The median follow-up time was 25.4 months (range, 4.2-57.2). No early Grade 3 Radiation Therapy Oncology Group or Common Terminology Criteria for Adverse Events v.3.0 genitourinary (GU) or gastrointestinal (GI) toxicities were seen. The cumulative actuarial incidence of Grade 2 early GU toxicity (primarily alpha blocker initiation) was 38%. The rate was 32% for Grade 2 early GI toxicity. None of the dose-volume descriptors correlated with GU toxicity, and only the volume of bowel receiving ≥30 Gy correlated with early GI toxicity (p = 0.029). Maximum late Grades 1, 2, and 3 GU toxicities were seen in 30%, 25%, and 2% of patients, respectively. Maximum late Grades 1 and 2 GI toxicities were seen in 30% and 8% (rectal bleeding requiring cautery) of patients, respectively. The estimated 3-year biochemical control (nadir + 2) was 81.2 ± 6.6%. No patient manifested pelvic nodal failure, whereas 2 experienced paraaortic nodal failure outside the field. The six other clinical failures were distant only. Conclusions: Pelvic IMRT nodal dose escalation to 56 Gy was delivered concurrently with 70 Gy of hypofractionated prostate radiotherapy in a convenient, resource-efficient, and well-tolerated 28-fraction schedule. Pelvic nodal dose
Anomalous Contagion and Renormalization in Dynamical Networks with Nodal Mobility
Manrique, Pedro D; Zheng, Minzhang; Xu, Chen; Hui, Pak Ming; Johnson, Neil F
2015-01-01
The common real-world feature of individuals migrating through a network -- either in real space or online -- significantly complicates understanding of network processes. Here we show that even though a network may appear static on average, underlying nodal mobility can dramatically distort outbreak profiles. Highly nonlinear dynamical regimes emerge in which increasing mobility either amplifies or suppresses outbreak severity. Predicted profiles mimic recent outbreaks of real-space contagion (social unrest) and online contagion (pro-ISIS support). We show that this nodal mobility can be renormalized in a precise way for a particular class of dynamical networks.
Oddness of least energy nodal solutions on radial domains
Christopher Grumiau
2010-07-01
In this article, we consider the Lane-Emden problem $$\displaylines{ \Delta u(x) + |u(x)|^{p-2}u(x) = 0, \quad \hbox{for } x \in \Omega, \cr u(x) = 0, \quad \hbox{for } x \in \partial\Omega, }$$ where $2 < p < 2^{*}$ and $\Omega$ is a ball or an annulus in $\mathbb{R}^{N}$, $N \geq 2$. We show that, for $p$ close to 2, least energy nodal solutions are odd with respect to a hyperplane, which is their nodal surface. The proof ingredients are a constrained implicit function theorem and the fact that the second eigenvalue is simple up to rotations.
Dmitriy Y. Anistratov; Marvin L. Adams; Todd S. Palmer; Kord S. Smith; Kevin Clarno; Hikaru Hiruta; Razvan Nes
2003-08-04
OAK (B204) Final Report, NERI Project: "An Innovative Reactor Analysis Methodology Based on a Quasidiffusion Nodal Core Model". The present generation of reactor analysis methods uses few-group nodal diffusion approximations to calculate full-core eigenvalues and power distributions. The cross sections, diffusion coefficients, and discontinuity factors (collectively called "group constants") in the nodal diffusion equations are parameterized as functions of many variables, ranging from the obvious (temperature, boron concentration, etc.) to the more obscure (spectral index, moderator temperature history, etc.). These group constants, and their variations as functions of the many variables, are calculated by assembly-level transport codes. The current methodology has two main weaknesses that this project addressed. The first weakness is the diffusion approximation in the full-core calculation; this can be significantly inaccurate at interfaces between different assemblies. This project used the nodal diffusion framework to implement nodal quasidiffusion equations, which can capture transport effects to an arbitrary degree of accuracy. The second weakness is in the parameterization of the group constants; current models do not always perform well, especially at interfaces between unlike assemblies. The project developed a theoretical foundation for parameterization and homogenization models and used that theory to devise improved models. The new models were extended to tabulate information that the nodal quasidiffusion equations can use to capture transport effects in full-core calculations.
Gang Chen
2015-07-01
A new control method for an electromagnetic unmanned robot applied to automotive testing (URAT), based on an improved Smith predictor compensator and considering a time delay, is proposed. The mechanical system structure and the control system structure are presented. The electromagnetic URAT adopts pulse width modulation (PWM) control, with displacement and current forming a double closed-loop control strategy. A coordinated control method for the multiple manipulators of the electromagnetic URAT, emulating a skilled human driver with intelligent decision-making ability, is provided, and an improved Smith predictor compensator controller for the electromagnetic URAT considering a time delay is designed. Experiments are conducted using a Ford FOCUS automobile. Comparisons between the PID control method and the proposed method are conducted. Experimental results show that the proposed method can achieve accurate tracking of the target vehicle's speed and reduce the mileage deviation of autonomous driving, which meets the requirements of national test standards.
Shatilla, Y.A.M.; Henry, A.F.
1993-12-31
This document constitutes Volume 1 of the Final Report of a three-year study supported by the special Research Grant Program for Nuclear Energy Research set up by the US Department of Energy. The original motivation for the work was to provide a fast and accurate computer program for the analysis of transients in heavy water or graphite-moderated reactors being considered as candidates for the New Production Reactor. Thus, part of the funding was by way of pass-through money from the Savannah River Laboratory. With this intent in mind, a three-dimensional (Hex-Z), general-energy-group transient, nodal code was created, programmed, and tested. In order to improve accuracy, correction terms, called "discontinuity factors," were incorporated into the nodal equations. Ideal values of these factors force the nodal equations to provide node-integrated reaction rates and leakage rates across nodal surfaces that match exactly those edited from a more exact reference calculation. Since the exact reference solution is needed to compute the ideal discontinuity factors, the fact that they result in exact nodal equations would be of little practical interest were it not that approximate discontinuity factors, found at a greatly reduced cost, often yield very accurate results. For example, for light-water reactors, discontinuity factors found from two-dimensional, fine-mesh, multigroup transport solutions for two-dimensional cuts of a fuel assembly provide very accurate predictions of three-dimensional, full-core power distributions. The present document (volume 1) deals primarily with the specification, programming and testing of the three-dimensional, Hex-Z computer program. The program solves both the static (eigenvalue) and transient, general-energy-group, nodal equations corrected by user-supplied discontinuity factors.
Langer, Stefan
2014-11-01
For unstructured finite volume methods an agglomeration multigrid with an implicit multistage Runge-Kutta method as a smoother is developed for solving the compressible Reynolds averaged Navier-Stokes (RANS) equations. The implicit Runge-Kutta method is interpreted as a preconditioned explicit Runge-Kutta method. The construction of the preconditioner is based on an approximate derivative. The linear systems are solved approximately with a symmetric Gauss-Seidel method. To significantly improve this solution method grid anisotropy is treated within the Gauss-Seidel iteration in such a way that the strong couplings in the linear system are resolved by tridiagonal systems constructed along these directions of strong coupling. The agglomeration strategy is adapted to this procedure by taking into account exactly these anisotropies in such a way that a directional coarsening is applied along these directions of strong coupling. Turbulence effects are included by a Spalart-Allmaras model, and the additional transport-type equation is approximately solved in a loosely coupled manner with the same method. For two-dimensional and three-dimensional numerical examples and a variety of differently generated meshes we show the wide range of applicability of the solution method. Finally, we exploit the GMRES method to determine approximate spectral information of the linearized RANS equations. This approximate spectral information is used to discuss and compare characteristics of multistage Runge-Kutta methods.
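The symmetric Gauss-Seidel kernel used here as the inner linear solver can be sketched in its simplest dense form. This is only the basic forward-then-backward sweep; the solver described in the abstract works on sparse unstructured-grid systems with tridiagonal line solves along directions of strong coupling, which are not reproduced here.

```python
def sym_gauss_seidel(A, b, sweeps=50):
    """Symmetric Gauss-Seidel for A x = b: each iteration is a forward sweep
    followed by a backward sweep, always using the freshest values of x."""
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):                 # forward sweep
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        for i in range(n - 1, -1, -1):     # backward sweep
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x
```

On a diagonally dominant system such as the small tridiagonal example in the test, the double sweep converges rapidly; as a multigrid smoother it would be applied for only a few sweeps per level.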
Extreme Wind Calculation Applying Spectral Correction Method – Test and Validation
Hansen, Brian Ohrbeck; Larsén, Xiaoli Guo; Kelly, Mark C.
This report presents a test and validation of extreme wind calculation applying the Spectral Correction method as implemented in the WAsP Engineering 4 software package. The test and validation is based on four sites located in Denmark, one site located in the Netherlands and one site located in the USA. Calculations have been carried out using wind data from on-site meteorological masts as well as long-term reference wind data.
The role of applied epidemiology methods in the disaster management cycle.
Malilay, Josephine; Heumann, Michael; Perrotta, Dennis; Wolkin, Amy F; Schnall, Amy H; Podgornik, Michelle N; Cruz, Miguel A; Horney, Jennifer A; Zane, David; Roisman, Rachel; Greenspan, Joel R; Thoroughman, Doug; Anderson, Henry A; Wells, Eden V; Simms, Erin F
2014-11-01
Disaster epidemiology (i.e., applied epidemiology in disaster settings) presents a source of reliable and actionable information for decision-makers and stakeholders in the disaster management cycle. However, epidemiological methods have yet to be routinely integrated into disaster response and fully communicated to response leaders. We present a framework consisting of rapid needs assessments, health surveillance, tracking and registries, and epidemiological investigations, including risk factor and health outcome studies and evaluation of interventions, which can be practiced throughout the cycle. Applying each method can result in actionable information for planners and decision-makers responsible for preparedness, response, and recovery. Disaster epidemiology, once integrated into the disaster management cycle, can provide the evidence base to inform and enhance response capability within the public health infrastructure.
Hanene Rouabeh
2016-02-01
This paper presents a new hybrid technique for digit recognition applied to the speed limit sign recognition task. The complete recognition system consists of the detection and recognition of speed signs in RGB images. A pretreatment is applied to extract the pictogram from a detected circular road sign, and then the task discussed in this work is to recognize digit candidates. To achieve a compromise between performance, reduced execution time and optimized memory resources, the developed method is based on the joint use of a Neural Network and a Decision Tree. A simple network is employed first to classify the extracted candidates into three classes, and then a small Decision Tree is used to determine the exact information. This combination reduces the size of the network as well as the memory resource utilization. The evaluation of the technique and the comparison with existing methods show its effectiveness.
Statistical equations and methods applied to the precision muon (g-2) experiment at BNL
Bennett, G.W. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Bousquet, B. [Department of Physics, University of Minnesota, Minneapolis, MN 55455 (United States); Brown, H.N.; Bunce, G. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Carey, R.M. [Department of Physics, Boston University, Boston, MA 02215 (United States); Cushman, P. [Department of Physics, University of Minnesota, Minneapolis, MN 55455 (United States); Danby, G.T. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Debevec, P.T. [Department of Physics, University of Illinois at Urbana-Champaign, Urbana, IL 55455 (United States); Deile, M.; Deng, H. [Physics Department, Yale University, New Haven, CT 06511 (United States); Deninger, W. [Department of Physics, University of Illinois at Urbana-Champaign, Urbana, IL 55455 (United States); Dhawan, S.K. [Physics Department, Yale University, New Haven, CT 06511 (United States); Druzhinin, V.P. [Budker Institute of Nuclear Physics, Novosibirsk (Russian Federation); Duong, L. [Department of Physics, University of Minnesota, Minneapolis, MN 55455 (United States); Efstathiadis, E. [Department of Physics, Boston University, Boston, MA 02215 (United States); Farley, F.J.M. [Physics Department, Yale University, New Haven, CT 06511 (United States); Fedotovich, G.V. [Budker Institute of Nuclear Physics, Novosibirsk (Russian Federation); Giron, S. [Department of Physics, University of Minnesota, Minneapolis, MN 55455 (United States); Gray, F. [Department of Physics, University of Illinois at Urbana-Champaign, Urbana, IL 55455 (United States); Grigoriev, D. [Budker Institute of Nuclear Physics, Novosibirsk (Russian Federation)] (and others)
2007-09-11
In the muon (g-2) experiment at Brookhaven National Laboratory, the spin precession frequency ω_a is obtained from a standard χ² minimization fit applied to the time distribution of decay electrons. The unusually high accuracy (≈0.5 ppm) of the experiment puts stringent requirements on the quality of the fit and the level of understanding of the statistical properties of the fitted parameters. We discuss the properties of the fits and their implications for the derived value of ω_a, including estimates of the effect of an imperfect fit function, methods of including additional external information to reduce the error, the effects of splitting the data into many smaller subsets of data, applying different weighting methods to the data using energy information, and various tests of data suitability.
Zambach, Sine; Madsen, Bodil Nistrup
2009-01-01
By applying formal terminological methods to model an ontology within the domain of enzyme inhibition, we aim to clarify concepts and to obtain consistency. Additionally, we propose a procedure for implementing this ontology in OWL with the aim of obtaining a strict structure which can form the basis for reasoning and further processing, and we compare a semi-formal terminological concept modeling approach with a formal Description Logic approach in OWL-DL.
Applied Ecosystem Analysis - a Primer: EDT, the Ecosystem Diagnosis and Treatment Method.
Lestelle, Lawrence C.; Mobrand, Lars E.
1996-05-01
The aim of this document is to inform and instruct the reader about an approach to ecosystem management that is based upon salmon as an indicator species. It is intended to provide natural resource management professionals with the background information needed to answer questions about why and how to apply the approach. The methods and tools the authors describe are continually updated and refined, so this primer should be treated as a first iteration of a sequentially revised manual.
Research methods applied to studies with active elderly: A literature review
Martins, L; Baptista, J.; Arezes, P.
2016-01-01
In almost every developed country and in developing countries, the elderly population is increasing. Environments, products and services must be appropriate for and accessible to as many people as possible, regarding their characteristics, abilities and limitations. The purpose of this paper is to establish an outlook on the methods that are usually applied in research involving active elderly people at the development stages of products designed for that specific segment of society.
Applying the POWHEG method to top pair production and decays at the ILC
Latunde-Dada, Oluseyi
2008-01-01
We study the effects of gluon radiation in top pair production and decay for e+e- annihilation at the ILC. To achieve this we apply the POWHEG method and interface our results to the Monte Carlo event generator Herwig++. We consider a center-of-mass energy of $\sqrt{s} = 500$ GeV and compare decay correlations and bottom quark and anti-quark distributions before hadronization.
The Density Matrix Renormalization Group Method applied to Interaction Round a Face Hamiltonians
1996-01-01
Given a Hamiltonian with a continuous symmetry, one can generally factorize that symmetry and consider the dynamics on invariant Hilbert spaces. In statistical mechanics this procedure is known as the vertex-IRF map, and in certain cases, such as rotationally invariant Hamiltonians, it can be implemented via group theoretical techniques. Using this map we translate the DMRG method, which applies to 1D vertex Hamiltonians, into a formulation adequate to study IRF Hamiltonians. The advantage of the I...
Extreme Wind Calculation Applying Spectral Correction Method – Test and Validation
Hansen, Brian Ohrbeck; Larsén, Xiaoli Guo; Kelly, Mark C; Rathmann, Ole Steen; Berg, Jacob; Bechmann, Andreas; Sempreviva, Anna Maria; Ejsing Jørgensen, Hans
2016-01-01
This report presents a test and validation of extreme wind calculation applying the Spectral Correction method as implemented in the WAsP Engineering 4 software package. The test and validation is based on four sites located in Denmark, one site located in the Netherlands and one site located in the USA. Calculations have been carried out using wind data from on-site meteorological masts as well as long-term reference wind data.
Method for operating an automobile with a combustion engine with applied ignition
Anderton, R.A.; Smith, R.R.; Tippler, R.
1982-01-28
A method is proposed for operating automobiles with applied-ignition combustion engines on a petrol-mineral oil mixture directly after assembly; this prevents spark plug fouling when cars that have just been completed are driven only short distances. The petrol-mineral oil mixture should preferably consist of 95-98 ROZ petrol with a mineral oil share of less than 5 vol.%, preferably 0.5 vol.%.
How to Apply Communicative and Interactive Teaching Method to English Major Classroom
丁燕
2015-01-01
This paper focuses on introducing the communicative and interactive teaching method (CITM). It points out that, when CITM is applied, the English major classroom will achieve more. The key to pushing CITM forward is the transformation of the teacher's role, while objective-oriented course preparation and suitable organization of the classroom can effectively support the implementation of a communicative and interactive classroom.
Integrated rate-dependent and dual pathway AV nodal functions: principles and assessment framework.
Billette, Jacques; Tadros, Rafik
2014-01-15
The atrioventricular (AV) node conducts slowly and has a long refractory period. These features sustain the filtering of atrial impulses and hence are often modulated to optimize ventricular rate during supraventricular tachyarrhythmias. The AV node is also the site of a clinically common reentrant arrhythmia. Its function is assessed for a variety of purposes from its responses to a premature protocol (S1S2, test beats introduced at different cycle lengths) repeatedly performed at different basic rates and/or to an incremental pacing protocol (increasingly faster rates). Puzzlingly, resulting data and interpretation differ with protocols as well as with chosen recovery and refractory indexes, and are further complicated by the presence of built-in fast and slow pathways. This problem applies to endocavitary investigations of arrhythmias as well as to many experimental functional studies. This review supports an integrated framework of rate-dependent and dual pathway AV nodal function that can account for these puzzling characteristics. The framework was established from AV nodal responses to S1S2S3 protocols that, compared with standard S1S2 protocols, allow for an orderly quantitative dissociation of the different factors involved in changes in AV nodal conduction and refractory indexes under rate-dependent and dual pathway function. Although largely based on data from experimental studies, the proposed framework may well apply to the human AV node. In conclusion, the rate-dependent and dual pathway properties of the AV node can be integrated within a common functional framework the contribution of which to individual responses can be quantitatively determined with properly designed protocols and analytic tools.
Hanan, Lu; Qiushi, Li; Shaobin, Li
2016-12-01
This paper presents an integrated optimization design method in which uniform design, response surface methodology and a genetic algorithm are used in combination. In detail, uniform design is used to select the experimental sampling points in the experimental domain, and the system performance is evaluated by means of computational fluid dynamics to construct a database. After that, response surface methodology is employed to generate a surrogate mathematical model relating the optimization objective to the design variables. Subsequently, a genetic algorithm is adopted and applied to the surrogate model to acquire the optimal solution subject to some constraints. The method has been applied to the optimization design of an axisymmetric diverging duct, dealing with three design variables including one qualitative variable and two quantitative variables. The method performs well in improving the duct's aerodynamic performance and can also be applied to wider fields of mechanical design, serving as a useful tool for engineering designers by reducing design time and computational cost.
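The three-stage pipeline described above (sampling plan, surrogate fit, optimization on the surrogate) can be sketched as follows. Everything here is a stand-in assumption: a toy analytic objective replaces the CFD evaluation, a plain quadratic least-squares fit replaces the response-surface step, and a tiny elitist GA replaces the paper's genetic algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def performance(x, y):                 # stand-in for the CFD evaluation
    return (x - 0.3)**2 + (y - 0.7)**2

# 1) sampling plan (a regular grid as a simple stand-in for uniform design)
g = np.linspace(0.0, 1.0, 6)
X, Y = np.meshgrid(g, g)
xs, ys = X.ravel(), Y.ravel()
f = performance(xs, ys)

# 2) quadratic response surface fitted by least squares
A = np.column_stack([np.ones_like(xs), xs, ys, xs**2, ys**2, xs * ys])
coef, *_ = np.linalg.lstsq(A, f, rcond=None)

def surrogate(p):
    x, y = p
    return coef @ np.array([1.0, x, y, x**2, y**2, x * y])

# 3) elitist GA on the surrogate: keep the 10 best, mutate them into 20 children
pop = rng.uniform(0.0, 1.0, size=(30, 2))
for _ in range(60):
    pop = pop[np.argsort([surrogate(p) for p in pop])]
    elite = pop[:10]
    children = elite[rng.integers(0, 10, 20)] + rng.normal(0.0, 0.05, (20, 2))
    pop = np.clip(np.vstack([elite, children]), 0.0, 1.0)
best = pop[np.argmin([surrogate(p) for p in pop])]
```

Because the GA only ever queries the cheap surrogate, the expensive simulator is evaluated just once per sampling point, which is the design rationale of the combined method.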
Machine Learning Method Applied in Readout System of Superheated Droplet Detector
Liu, Yi; Sullivan, Clair Julia; d'Errico, Francesco
2017-07-01
Direct readability is one advantage of superheated droplet detectors in neutron dosimetry. Utilizing this distinct characteristic, an imaging readout system analyzes images of the detector for neutron dose readout. To improve the accuracy and precision of the algorithms in the imaging readout system, machine learning algorithms were developed. Deep learning neural network and support vector machine algorithms are applied and compared with the generally used Hough transform and curvature analysis methods. The machine learning methods showed a much higher accuracy and better precision in recognizing circular gas bubbles.
A New Method to Determine the Thickness of Spiral Galaxies: Apply to M31
LI Meng; LUO Xin-Lian; PENG Qiu-He; ZOU Zhi-Gang
2000-01-01
A new method is presented to determine the thickness of spiral galaxies. Based on the rigorous solution of the Poisson equation for a logarithmic density disturbance in three-dimensional spiral galaxies, we have derived an accurate dispersion relation for a stellar and gaseous disk with finite thickness. From this relation, a new method is put forward for determining the thickness of galaxies. We apply this method to M31 and obtain a thickness of about 0.7 kpc, which is in good agreement with previous results.
[New methods of treatment applied in the hospital of Sochi during the Great Patriotic War].
Artiukhov, S A
2013-05-01
During the Great Patriotic War of 1941-1945, Sochi was turned into the largest hospital base in the south of the USSR. All told, 335 thousand wounded and seriously ill soldiers were treated in the hospitals of Sochi. During the war, physicians applied many new, including previously unknown, medical methods of treatment. Poor provision with medical equipment, instruments, bandages and medicines was compensated for by the use of local resources. The adoption of new treatment methods based on the use of local medicines allowed Sochi's physicians to save many lives during the war.
Juan C Díaz Martínez
2010-04-01
Atrioventricular nodal reentry tachycardia is the most common type of paroxysmal supraventricular tachycardia. In those patients in whom drug therapy is not effective or not desired, radiofrequency ablation is an excellent therapeutic method given its high cure rate. Although these procedures are generally fast and safe, several complications have been reported, among which ischemic stroke stands out. We present the case of a 41-year-old female patient with repetitive episodes of tachycardia due to nodal reentry who was treated with radiofrequency ablation. Immediately after the procedure she presented a focal neurologic deficit consistent with ischemic stroke in the territory of the right middle cerebral artery. Angiography with attempted angioplasty and abciximab was performed, followed by local infusion of tissue plasminogen activator (rtPA), with an appropriate clinical and angiographic outcome.
A review of studies applying environmental impact assessment methods on fruit production systems.
Cerutti, Alessandro K; Bruun, Sander; Beccaro, Gabriele L; Bounous, Giancarlo
2011-10-01
Although many aspects of environmental accounting methodologies in food production have already been investigated, the application of environmental indicators in the fruit sector is still rare and no consensus can be found on the preferred method. On the contrary, widely diverging approaches have been taken to several aspects of the analyses, such as data collection, handling of scaling issues, and goal and scope definition. This paper reviews studies assessing the sustainability or environmental impacts of fruit production under different conditions and identifies aspects of fruit production that are of environmental importance. Four environmental assessment methods which may be applied to assess fruit production systems are evaluated, namely Life Cycle Assessment, Ecological Footprint Analysis, Emergy Analysis and Energy Balance. In the 22 peer-reviewed journal articles and two conference articles applying one of these methods in the fruit sector that were included in this review, a total of 26 applications of environmental impact assessment methods are described. These applications differ concerning e.g. overall objective, set of environmental issues considered, definition of system boundaries and calculation algorithms. Due to the relatively high variability in study cases and approaches, it was not possible to identify any one method as being better than the others. However, remarks on methodologies and suggestions for standardisation are given and the environmental burdens of fruit systems are highlighted.
Development of a tracking method for augmented reality applied to nuclear plant maintenance work
Shimoda, Hiroshi; Maeshima, Masayuki; Nakai, Toshinori; Bian, Zhiqiang; Ishii, Hirotake; Yoshikawa, Hidekazu [Kyoto University, Kyoto (Japan)
2005-11-15
In this paper, a plant maintenance support method is described which employs a state-of-the-art information technology, Augmented Reality (AR), in order to improve the efficiency of NPP maintenance work and to prevent human error. Although AR has great potential to support various tasks in the real world, it is difficult to apply it to actual work support because the tracking method is the bottleneck for practical use. In this study, a bar code marker tracking method is proposed to apply an AR system to maintenance work support in the NPP field. The proposed method calculates the user's position and orientation in real time from two long markers captured by the user-mounted camera. The markers can be easily pasted on the pipes in the plant field and can be recognized at long distances, which reduces the number of markers that must be pasted in the work field. Experiments were conducted in a laboratory and in the plant field to evaluate the proposed method. The results show that (1) fast and stable tracking can be realized, (2) the position error in the camera view is less than 1%, which is almost perfect given the limitation of camera resolution, and (3) it is relatively difficult to capture two markers in one camera view, especially at short distances.
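The core geometric idea of localizing a camera from two known markers can be illustrated with a minimal 2D analogue: given the known positions of two markers and the measured bearings from the camera to each (camera heading assumed known), the camera position follows from intersecting the two sight lines. This is an illustrative toy, not the paper's bar-code-marker pose algorithm, and the function name and setup are assumptions.

```python
import numpy as np

def locate(m1, m2, theta1, theta2):
    """Intersect the two camera-to-marker sight lines in 2D.

    m1, m2: known marker positions; theta1, theta2: measured bearings
    (absolute angles) from the camera to each marker.
    """
    d1 = np.array([np.cos(theta1), np.sin(theta1)])   # unit sight line to marker 1
    d2 = np.array([np.cos(theta2), np.sin(theta2)])   # unit sight line to marker 2
    # camera + r_i * d_i = m_i  =>  r1*d1 - r2*d2 = m1 - m2, a 2x2 linear system
    r = np.linalg.solve(np.column_stack([d1, -d2]),
                        np.asarray(m1, float) - np.asarray(m2, float))
    return np.asarray(m1, float) - r[0] * d1
```

The real system solves the analogous 3D problem for both position and orientation from the marker endpoints seen in the camera image, but the intersection-of-constraints structure is the same.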
Garcia, Diego; Moro, Claudia Maria Cabral; Cicogna, Paulo Eduardo; Carvalho, Deborah Ribeiro
2013-01-01
Clinical guidelines are documents that assist healthcare professionals, facilitating and standardizing diagnosis, management, and treatment in specific areas. Computerized guidelines as decision support systems (DSS) attempt to increase the performance of tasks and facilitate the use of guidelines. Most DSS are not integrated into the electronic health record (EHR), requiring some degree of rework, especially related to data collection. This study's objective was to present a method for integrating clinical guidelines into the EHR. The study first developed a way to identify the data and rules contained in the guidelines, and then incorporated the rules into an archetype-based EHR. The proposed method was tested on anemia treatment in the Chronic Kidney Disease guideline. The phases of the method are: data and rules identification; archetype elaboration; rule definition and inclusion in an inference engine; and DSS-EHR integration and validation. The main feature of the proposed method is that it is generic and can be applied to any type of guideline.
Modeling methods for mixture-of-mixtures experiments applied to a tablet formulation problem.
Piepel, G F
1999-01-01
During the past few years, statistical methods for the experimental design, modeling, and optimization of mixture experiments have been widely applied to drug formulation problems. Different methods are required for mixture-of-mixtures (MoM) experiments in which a formulation is a mixture of two or more "major" components, each of which is a mixture of one or more "minor" components. Two types of MoM experiments are briefly described. A tablet formulation optimization example from a 1997 article in this journal is used to illustrate one type of MoM experiment and corresponding empirical modeling methods. Literature references that discuss other methods for MoM experiments are also provided.
Brown-Dymkoski, Eric; Kasimov, Nurlybek; Vasilyev, Oleg V.
2014-04-01
In order to introduce solid obstacles into flows, several different methods are used, including volume penalization methods, which prescribe appropriate boundary conditions by applying local forcing to the constitutive equations. One well known method is Brinkman penalization, which models solid obstacles as porous media. While it has been adapted for compressible, incompressible, viscous and inviscid flows, it is limited in the types of boundary conditions that it imposes, as are most volume penalization methods. Typically, approaches are limited to Dirichlet boundary conditions. In this paper, Brinkman penalization is extended to generalized Neumann and Robin boundary conditions by introducing hyperbolic penalization terms with characteristics pointing inward on solid obstacles. This Characteristic-Based Volume Penalization (CBVP) method is a comprehensive approach to conditions on immersed boundaries, providing for homogeneous and inhomogeneous Dirichlet, Neumann, and Robin boundary conditions on hyperbolic and parabolic equations. The CBVP method can be used to impose boundary conditions for both integrated and non-integrated variables in a systematic manner that parallels the prescription of exact boundary conditions. Furthermore, the method does not depend upon a physical model, as the porous media approach for Brinkman penalization does, and is therefore flexible for various physical regimes and general evolutionary equations. Here, the method is applied to scalar diffusion and to direct numerical simulation of compressible, viscous flows. With the Navier-Stokes equations, both homogeneous and inhomogeneous Neumann boundary conditions are demonstrated through external flow around an adiabatic and a heated cylinder. Theoretical and numerical examination shows that the error from penalized Neumann and Robin boundary conditions can be rigorously controlled through an a priori penalization parameter η. The error on a transient boundary is found to converge as O
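The basic Brinkman-penalization idea that the CBVP method builds on can be sketched in one dimension: a masked forcing term relaxes the solution toward the obstacle value, enforcing a Dirichlet condition with an error controlled by the penalization parameter η. This is a minimal sketch of plain Brinkman penalization only; the paper's extension to Neumann and Robin conditions via hyperbolic terms is not shown, and the grid, time step, and parameter values are illustrative assumptions.

```python
import numpy as np

# 1D heat equation u_t = u_xx on [0,1] with an obstacle on [0.4, 0.6]
# where u = 1 is imposed through the penalization term -(chi/eta)*(u - 1).
N = 101
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
dt = 2e-5          # explicit stability: dt <= dx^2/2, and dt << eta here
eta = 1e-3         # penalization parameter; boundary error scales with eta
chi = ((x >= 0.4) & (x <= 0.6)).astype(float)   # obstacle mask
u_obs = 1.0        # value imposed inside the obstacle

u = np.zeros(N)
for _ in range(2500):
    lap = np.zeros(N)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    u = u + dt * (lap - (chi / eta) * (u - u_obs))
    u[0] = u[-1] = 0.0   # exact outer Dirichlet boundaries
```

Inside the obstacle the solution relaxes to u_obs on a time scale eta, so after the run the obstacle interior sits close to 1 while the exterior diffuses between the obstacle and the outer walls.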
Note on the nodal line of the p-Laplacian
Abdel R. El Amrouss
2006-09-01
In this paper, we prove that the length of the nodal line of the eigenfunctions associated to the second eigenvalue of the problem $$ -\Delta_p u = \lambda \rho(x) |u|^{p-2}u \quad \text{in } \Omega $$ with Dirichlet conditions is not bounded uniformly with respect to the weight.
Composition law and nodal genus-2 curves in P$^{2}$
Katz, S; Ruan, Y; Katz, Sheldon; Qin, Zhenbo; Ruan, Yongbin
1996-01-01
Recently, there has been great interest in the application of composition laws to problems in enumerative geometry. Using the moduli space of stable maps, we compute the number of irreducible, reduced, nodal, degree-d genus-2 plane curves whose normalization has a fixed complex structure and which pass through 3d - 2 general points in $\mathbb{P}^2$.
Extra nodal growth as a prognostic factor in malignant melanoma
Koopal, SA; Tiebosch, ATMG; Daryanani, D; Plukker, JTM; Hoekstra, HJ
Aim. Extra nodal growth (ENG) in lymph-node metastases may be an additional indicator of poor prognosis and increased loco-regional recurrence in patients with a cutaneous malignant melanoma (CMM). Most studies analyzing prognostic factors lack a proper definition or description of the
Beliaev, J.; Trunov, N.; Tschekin, I. [OKB Gidropress (Russian Federation); Luther, W. [GRS Garching (Germany); Spolitak, S. [RNC-KI (Russian Federation)
1995-12-31
Currently the ATHLET code is widely applied for modelling several power plants of the WWER type with horizontal steam generators. A main drawback of all these applications is the insufficient verification of the models for the steam generator. This paper presents the nodalization schemes for the secondary side of the steam generator, the results of stationary calculations, and preliminary comparisons with experimental data. The consideration of circulation in the water inventory of the secondary side proves to be necessary. (orig.). 3 refs.
Tumelero, Fernanda, E-mail: fernanda.tumelero@yahoo.com.br [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Petersen, Claudio Z.; Goncalves, Glenio A.; Lazzari, Luana, E-mail: claudiopeteren@yahoo.com.br, E-mail: gleniogoncalves@yahoo.com.br, E-mail: luana-lazzari@hotmail.com [Universidade Federal de Pelotas (DME/UFPEL), Capao do Leao, RS (Brazil). Instituto de Fisica e Matematica
2015-07-01
In this work, we present a solution of the neutron point kinetics equations with temperature feedback effects applying the Polynomial Approach Method. For the solution, we consider one and six groups of delayed neutron precursors with temperature feedback effects and constant reactivity. The main idea is to expand the neutron density, the delayed neutron precursors and the temperature as a power series, considering the reactivity as an arbitrary function of time in a relatively short time interval around an ordinary point. In the first interval one applies the initial conditions of the problem, and analytical continuation is used to determine the solutions of the subsequent intervals. With the application of the Polynomial Approach Method it is possible to overcome the stiffness of the equations. In this way, one varies the time step size of the Polynomial Approach Method and performs an analysis of the precision and computational time. Moreover, we compare different orders of approximation (linear, quadratic and cubic) of the power series. The neutron density and temperature obtained by numerical simulations with the linear approximation are compared with results in the literature. (author)
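The power-series stepping idea can be sketched for the simplest case: one delayed-neutron group, constant reactivity, and no temperature feedback (the paper additionally treats feedback and six groups). Taylor coefficients of n(t) and C(t) are generated recursively from the kinetics equations and summed over a short step, then the step is restarted from the new point (the analytical continuation). Parameter values below are illustrative, not taken from the paper.

```python
# dn/dt = ((rho - beta)/Lam) n + lam C,   dC/dt = (beta/Lam) n - lam C
beta, lam, Lam, rho = 0.0065, 0.08, 1.0e-4, 0.003
a, b = (rho - beta) / Lam, beta / Lam

def taylor_step(n, c, h, order=8):
    """Advance (n, C) by h using recursively generated Taylor coefficients:
    n_{k+1} = (a n_k + lam c_k)/(k+1),  c_{k+1} = (b n_k - lam c_k)/(k+1)."""
    nk, ck = n, c                 # k-th coefficients, starting at k = 0
    n_new, c_new, hk = n, c, 1.0
    for k in range(order):
        nk, ck = (a * nk + lam * ck) / (k + 1), (b * nk - lam * ck) / (k + 1)
        hk *= h
        n_new += nk * hk
        c_new += ck * hk
    return n_new, c_new

n, c = 1.0, beta / (lam * Lam)    # start from delayed-precursor equilibrium
for _ in range(1000):             # integrate to t = 1 s with h = 1 ms
    n, c = taylor_step(n, c, 1.0e-3)
```

Because the series is regenerated at every step, the step size can be far larger than an explicit solver's stability limit for this stiff system, which is the point of the polynomial approach.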
Should methods of correction for multiple comparisons be applied in pharmacovigilance?
Lorenza Scotti
2015-12-01
Purpose. In pharmacovigilance, spontaneous reporting databases are devoted to the early detection of adverse event 'signals' of marketed drugs. A common limitation of these systems is the wide number of concurrently investigated associations, implying a high probability of generating positive signals simply by chance. However, it is not clear whether methods that adjust for multiple testing are needed when at least some of the drug-outcome relationships under study are known. To this aim we applied a robust estimation method for the FDR (rFDR), particularly suitable in the pharmacovigilance context. Methods. We exploited the data available for the SAFEGUARD project to apply the rFDR estimation method to detect potential false positive signals of adverse reactions attributable to the use of non-insulin blood glucose lowering drugs. Specifically, the number of signals generated from the conventional disproportionality measures was compared before and after the application of the rFDR adjustment method. Results. Among the 311 evaluable pairs (i.e., drug-event pairs with at least one adverse event report), 106 (34%) signals were considered significant in the conventional analysis. Among them, 1 was a false positive signal according to the rFDR method. Conclusions. The results of this study suggest that when a restricted number of drug-outcome pairs is considered and warnings about some of them are known, multiple comparison methods for recognizing false positive signals are not as useful as theoretical considerations suggest.
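The multiple-testing problem the study addresses is the one the false discovery rate was designed for. As an illustration only, the standard Benjamini-Hochberg step-up procedure is sketched below; the paper itself uses a robust FDR (rFDR) estimator rather than plain BH.

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean 'discovery' flag per p-value (BH step-up).

    Sort the p-values, find the largest rank k with p_(k) <= k*alpha/m,
    and reject the k smallest. Illustrative stand-in for FDR control.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * alpha / m:
            k = rank
    reject = [False] * m
    for i in order[:k]:
        reject[i] = True
    return reject
```

Applied to a list of disproportionality p-values, such a procedure trims the signals that would be expected by chance alone among many simultaneously tested drug-event pairs.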
Coexistent Types of Atrioventricular Nodal Re-Entrant Tachycardia
Marine, Joseph E.; Latchamsetty, Rakesh; Zografos, Theodoros; Tanawuttiwat, Tanyanan; Sheldon, Seth H.; Buxton, Alfred E.; Calkins, Hugh; Morady, Fred; Josephson, Mark E.
2015-01-01
Background— There is evidence that atypical fast–slow and typical atrioventricular nodal re-entrant tachycardia (AVNRT) do not use the same limb for fast conduction, but no data exist on patients who have presented with both typical and atypical forms of this tachycardia. We compared conduction intervals during typical and atypical AVNRT that occurred in the same patient. Methods and Results— In 20 of 1299 patients with AVNRT, both typical and atypical AVNRT were induced at electrophysiology study by pacing maneuvers and autonomic stimulation or occurred spontaneously. The mean age of the patients was 47.6±10.9 years (range, 32–75 years), and 11 patients (55%) were women. Tachycardia cycle lengths were 368.0±43.1 and 365.8±41.1 ms, and earliest retrograde activation was recorded at the coronary sinus ostium in 60% and 65% of patients with typical and atypical AVNRT, respectively. Thirteen patients (65%) displayed atypical AVNRT with fast–slow characteristics. By comparing conduction intervals during slow–fast and fast–slow AVNRT in the same patient, fast pathway conduction times during the 2 types of AVNRT were calculated. The mean difference between retrograde fast pathway conduction during slow–fast AVNRT and anterograde fast pathway conduction during fast–slow AVNRT was 41.8±39.7 ms and was significantly different when compared with the estimated between-measurement error (P=0.0055). Conclusions— Our data provide further evidence that typical slow–fast and atypical fast–slow AVNRT use different anatomic pathways for fast conduction. PMID:26155802
Orbital nodal surfaces: Topological challenges for density functionals
Aschebrock, Thilo; Armiento, Rickard; Kümmel, Stephan
2017-06-01
Nodal surfaces of orbitals, in particular of the highest occupied one, play a special role in Kohn-Sham density-functional theory. The exact Kohn-Sham exchange potential, for example, shows a protruding ridge along such nodal surfaces, leading to the counterintuitive feature of a potential that goes to different asymptotic limits in different directions. We show here that nodal surfaces can heavily affect the potential of semilocal density-functional approximations. For the functional derivatives of the Armiento-Kümmel (AK13) [Phys. Rev. Lett. 111, 036402 (2013), 10.1103/PhysRevLett.111.036402] and Becke88 [Phys. Rev. A 38, 3098 (1988), 10.1103/PhysRevA.38.3098] energy functionals, i.e., the corresponding semilocal exchange potentials, as well as the Becke-Johnson [J. Chem. Phys. 124, 221101 (2006), 10.1063/1.2213970] and van Leeuwen-Baerends (LB94) [Phys. Rev. A 49, 2421 (1994), 10.1103/PhysRevA.49.2421] model potentials, we explicitly demonstrate exponential divergences in the vicinity of nodal surfaces. We further point out that many other semilocal potentials have similar features. Such divergences pose a challenge for the convergence of numerical solutions of the Kohn-Sham equations. We prove that for exchange functionals of the generalized gradient approximation (GGA) form, enforcing correct asymptotic behavior of the potential or energy density necessarily leads to irregular behavior on or near orbital nodal surfaces. We formulate constraints on the GGA exchange enhancement factor for avoiding such divergences.
Sennhenn, B; Giese, K; Plamann, K; Harendt, N; Kölmel, K
1993-01-01
Spectroscopic techniques are reported which allow the in vivo study of the penetration behaviour of topically applied light-absorbing drugs into human skin. Remittance spectroscopy, a purely optical method, provides a good tool in terms of both adaptation to the skin, by use of a remote viewing head coupled to the spectrometer via optical fibres, and adequate sensitivity for the detection of small amounts of the applied drugs. The measuring depth in the skin is determined by the wavelength-dependent optical penetration depth, which itself depends on light absorption and light scattering. In the UV spectral region the optical penetration depth is of the order of the thickness of the stratum corneum (UV-A) or of only a superficial part of it (UV-B, UV-C). Fluorescence spectroscopy, another optical method, offers two kinds of drug detection: a direct one in the case of self-fluorescent drugs, or an indirect one based on the light absorption of the drug, which may give rise to a screening of the self-fluorescence of the skin itself or of an applied marker. The measuring depth is comparable to that achieved with remittance spectroscopy. A third method is photothermal spectroscopy, which is determined by thermal properties of the skin in addition to optical properties. Photothermal spectroscopy is unique in that it allows depth profiles of drug concentration to be measured non-invasively, as the photothermal measuring depth can be changed by varying the modulation frequency of the intensity-modulated incident light. Results of measurements demonstrating the potential of these spectroscopic methods are presented.
Novel change detection methods for multi-date digital imagery applied to South Florida vegetation
Byron, Jonathan Roy
Remote sensing using multidate imagery allows for change detection and the analysis of important landscape processes over time. Multidate image analysis has been used to map, measure, monitor and model important changes related to topics including deforestation, loss of wetlands, drought and flooding, and urban change. Existing change detection methods have proven themselves valuable, but are limited in terms of the patterns they can detect, the need for analyst intervention, and ease of interpretation. As the volume of remotely sensed data increases and the price of data and computing facilities decreases, new techniques are needed for the rapid automated or semi-automated identification of change patterns. This research presents a number of novel methods for analyzing and visualizing change in remotely sensed data sets. One approach includes the application of parametric measures (standard deviation, range, slope) to a time series. A second approach involves the visualization of data transformed into the temporal shape domain. The third approach involves the classification of temporal patterns by neural networks. The novel techniques were proven using synthetic data, then applied to anniversary AVHRR NDVI composite images of South Florida from 1989 through 1993. For the Florida data, the results from the novel methods were compared with the results of standard methods including an unsupervised classification, principal components analysis, and write function memory insertion. A comparison of results indicates that the novel methods do uncover information that is different from, but consistent with, the standard methods. The novel methods are able to detect specific change patterns that the standard methods cannot. The novel methods are easier to interpret than the standard methods, and can contribute to the interpretation of the standard methods.
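The first of the novel approaches, applying parametric measures (standard deviation, range, slope) to a per-pixel time series, reduces a multidate stack to a handful of change images. The sketch below uses a tiny synthetic stack as a stand-in; actual AVHRR NDVI composites would simply be loaded as the same (dates, rows, cols) array.

```python
import numpy as np

dates = np.arange(5, dtype=float)            # acquisition times t = 0..4
stack = np.zeros((5, 3, 3))                  # (dates, rows, cols) NDVI stack
stack[:, 0, 0] = 0.2 + 0.1 * dates           # pixel with a greening trend
stack[:, 1, 1] = 0.5                         # temporally stable pixel

# per-pixel parametric measures over the time axis
std_img = stack.std(axis=0)
range_img = stack.max(axis=0) - stack.min(axis=0)

# per-pixel OLS slope: cov(t, y) / var(t), vectorised over the whole image
t_c = dates - dates.mean()
slope_img = np.tensordot(t_c, stack - stack.mean(axis=0), axes=(0, 0)) / (t_c**2).sum()
```

A stable pixel then shows near-zero std, range, and slope, while trending or disturbed pixels stand out in one or more of the three derived images, which is what makes the measures useful for semi-automated change screening.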
Non-destructive research methods applied on materials for the new generation of nuclear reactors
Bartošová, I.; Slugeň, V.; Veterníková, J.; Sojak, S.; Petriska, M.; Bouhaddane, A.
2014-06-01
The paper is aimed at non-destructive experimental techniques applied to materials for the new generation of nuclear reactors (GEN IV). With the development of these reactors, materials also have to be developed in order to guarantee the high-standard properties needed for construction. These properties are high temperature resistance, radiation resistance and resistance to other negative effects; nevertheless, the changes in their mechanical properties should be only minimal. Materials that fulfil these requirements are analysed in this work. The ferritic-martensitic (FM) steels and ODS steels are studied in detail. Microstructural defects, which can occur in structural materials and can also accumulate during irradiation due to neutron flux or alpha, beta and gamma radiation, were analysed using different spectroscopic methods, namely positron annihilation spectroscopy and Barkhausen noise, which were applied to measurements of three different FM steels (T91, P91 and E97) as well as one ODS steel (ODS Eurofer).
Contextual filtering method applied to sub-bands of interferometric image decomposition
Belhadj-Aissa, S.; Hocine, F.; Boughacha, M. S.; Belhadj-Aissa, M.
2016-10-01
The precision and accuracy of digital elevation models and deformation measurements from SAR interferometry (InSAR/DInSAR) depend mainly on the quality of the interferogram. However, the phase noise, mainly due to decorrelation between the images and to speckle, makes the phase unwrapping step most delicate. In this paper, we propose a filtering method that combines sub-band decomposition with nonlinear local weights. The spectral/contextual filter that we propose, inspired by the Goldstein filter, is applied to the sub-bands of the wavelet decomposition. To validate the results, we applied it to interferometric products of an ERS1/ERS2 tandem pair acquired over the region of Algiers, Algeria.
Studying the properties of Variational Data Assimilation Methods by Applying a Set of Test-Examples
Thomsen, Per Grove; Zlatev, Zahari
2007-01-01
The variational data assimilation methods can successfully be used in different fields of science and engineering. An attempt to utilize available sets of observations in the efforts to improve (i) the models used to study different phenomena and (ii) the model results is systematically carried out when variational data assimilation is applied, which can be demanding with respect to both the computational work and the storage needed. This is why it might be appropriate to apply some splitting procedure in the efforts to reduce the computational work. Five test-examples have been created. Different numerical aspects of the data assimilation methods and the interplay between the major computational parts of any data assimilation method (numerical algorithms for solving differential equations, splitting procedures and optimization algorithms) have been studied by using these tests. The presentation will include results from the testing carried out in the study.
Zhu, Zheng; Katzgraber, Helmut G.
2014-03-01
We study the thermodynamic properties of the two-dimensional Edwards-Anderson Ising spin-glass model on a square lattice using the tensor renormalization group method based on a higher-order singular-value decomposition. Our estimates of the internal energy per spin agree very well with high-precision parallel tempering Monte Carlo studies, thus illustrating that the method can, in principle, be applied to frustrated magnetic systems. In particular, we discuss the necessary tuning of parameters for convergence, memory requirements, efficiency for different types of disorder, as well as advantages and limitations in comparison to conventional multicanonical and Monte Carlo methods. Extensions to higher space dimensions, as well as applications to spin glasses in a field are explored.
Shimizu Kentaro
2011-06-01
Background: Statistical methods for ranking differentially expressed genes (DEGs) from gene expression data should be evaluated with regard to high sensitivity, specificity, and reproducibility. In our previous studies, we evaluated eight gene ranking methods applied only to Affymetrix GeneChip data. A more general evaluation that also includes other microarray platforms, such as the Agilent or Illumina systems, is desirable for determining which methods are suitable for each platform and which method has better inter-platform reproducibility.

Results: We compared the eight gene ranking methods using the MicroArray Quality Control (MAQC) datasets produced by five manufacturers: Affymetrix, Applied Biosystems, Agilent, GE Healthcare, and Illumina. The area under the curve (AUC) was used as a measure of both sensitivity and specificity. Although the highest AUC values can vary with the definition of "true" DEGs, the best methods were, in most cases, either the weighted average difference (WAD), rank products (RP), or intensity-based moderated t statistic (ibmT). The percentages of overlapping genes (POGs) across different test sites were mainly evaluated as a measure of both intra- and inter-platform reproducibility. The POG values for WAD were the highest overall, irrespective of the choice of microarray platform. The high intra- and inter-platform reproducibility of WAD was also observed at a higher biological function level.

Conclusion: These results for the five microarray platforms were consistent with our previous ones based on 36 real experimental datasets measured using the Affymetrix platform. Thus, recommendations made using the MAQC benchmark data might be universally applicable.
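The AUC used to score the ranking methods can be computed without tracing the ROC curve, via its Mann-Whitney interpretation: the probability that a randomly chosen true DEG is ranked above a randomly chosen non-DEG. A minimal sketch, with function and argument names as illustrative assumptions:

```python
def auc(scores_pos, scores_neg):
    """Mann-Whitney estimate of the area under the ROC curve.

    Counts, over all (positive, negative) pairs, how often the positive
    scores higher; ties count one half. 1.0 = perfect separation,
    0.5 = no better than random ranking.
    """
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

Feeding in a method's ranking statistics for the "true" DEGs versus the rest yields exactly the sensitivity/specificity summary the comparison relies on.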
ADVANTAGES AND DISADVANTAGES OF APPLYING EVOLVED METHODS IN MANAGEMENT ACCOUNTING PRACTICE
SABOU FELICIA
2014-05-01
Full Text Available The evolved methods of management accounting have been developed with the purpose of removing the disadvantages of the classical methods; they are methods adapted to the new market conditions, which provide much more useful cost-related information so that the management of the company is able to take certain strategic decisions. Of the evolved methods, the most widely used is the standard-cost method, owing to the advantages it presents, being employed extensively for calculating production costs in several developed countries. The main advantages of the standard-cost method are: in-advance knowledge of the production costs and of the measures that ensure compliance with them; and a systematic control over costs, managed with the help of the deviations calculated from the standard costs, which allows decisions to be made in due time regarding the elimination of the deviations and the improvement of the activity, making it a method of analysis, control and cost forecasting. Although the advantages of using standards are significant, there are a few disadvantages to the employment of the standard-cost method: difficulties can sometimes appear in establishing the deviations from the standard costs, and the method does not allow an accurate calculation of the fixed costs. As a result of the study, we can observe that the evolved methods of management accounting, as compared to the classical ones, present a series of advantages linked to better analysis, control, and forecasting of costs, whereas the main disadvantage is related to the large amount of work necessary for these methods to be applied.
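The deviation (variance) calculation at the heart of the standard-cost method can be sketched numerically. The decomposition below into quantity and price deviations is the common textbook one; the function name and figures are illustrative, not taken from the article.

```python
def standard_cost_deviations(std_qty, std_price, actual_qty, actual_price):
    """Classic standard-cost deviation split for one cost element.

    Total deviation = actual cost - standard cost, decomposed into a
    quantity (usage) deviation priced at standard, plus a price deviation
    applied to the actual quantity.
    """
    quantity_dev = (actual_qty - std_qty) * std_price
    price_dev = (actual_price - std_price) * actual_qty
    total_dev = actual_qty * actual_price - std_qty * std_price
    return quantity_dev, price_dev, total_dev
```

For example, with a standard of 100 units at 5.0 and actuals of 110 units at 5.5, the quantity deviation is 50, the price deviation 55, and the two sum exactly to the total deviation of 105, which is what makes the split useful for timely control decisions.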
River basin soil-vegetation condition assessment applying mathematic simulation methods
Mishchenko, Natalia; Trifonova, Tatiana; Shirkin, Leonid
2013-04-01
Meticulous attention is paid nowadays to the problem of vegetation cover productivity changes, which is connected, among other factors, to global climate transformation. At the same time, the anthropogenic transformation of ecosystems, basically connected to changes in land use structure and human impact on soil fertility, is developing to a great extent independently of climatic processes and can seriously influence vegetation cover productivity not only at the local and regional levels but also globally. Analysis results of the influence of land use structure and soil cover condition on the productive potential of river basin ecosystems are presented in the research. The analysis is carried out applying integrated characteristics of ecosystem functioning, space image processing results and mathematical simulation methods. The possibility of constructing a permanent functional simulator defining the connection between macroparameters of the "phytocenosis-soil" system condition on the basis of the basin approach is shown. Ecosystems of river catchment basins of various orders located in the European part of Russia were chosen as research objects. For the integrated assessment of ecosystem soil and vegetation conditions the following characteristics have been applied: 1. Soil-productional potential, characterizing the ability of a natural or natural-anthropogenic ecosystem, in certain soil-bioclimatic conditions, for long-term reproduction. This indicator allows for specific phytomass characteristics and ecosystem produce, humus content in soil and bioclimatic parameters. 2. The normalized difference vegetation index (NDVI), applied as an efficient, remotely defined monitoring indicator characterizing the spatio-temporal unsteadiness of the soil-productional potential. To design the mathematical simulator, functional simulation methods and principles on the basis of regression, correlation and factor analysis have been applied in the research. The definition of coefficient values in the designed static model of phytoproductivity distribution has been
Characteristics of an Extrusion Panel Made by Applying a Modified Curing Method
Haseog Kim
2016-05-01
Full Text Available CO2 emitted from building materials and the construction materials industry has reached about 67 million tons. Controls on the consumption of fossil fuels and the reduction of emission gases are essential for reducing CO2 in the construction sector, for instance by reducing the second and third curing processes that emit CO2 in the construction materials industry. In this study, a new curing method using a low energy curing admixture (LA) was introduced in order to exclude autoclave curing. The new curing method was applied to make panels. Then its physical properties, depending on the mixed amount of fiber, type of fiber and mixing ratio of fiber, were observed. The type of fiber did not appear to be a main factor affecting strength, while the LA mixing ratio and the mixed amount of fiber appeared to be major factors affecting the strength. Applying the proposed new curing method can reduce carbon and restrain the use of fossil fuels through a reduction of the second and third curing processes, which emit CO2 in the construction materials industry. Therefore, it will be helpful in reducing global warming.
Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia
2016-02-18
Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving the equations of point clouds. Despite the high accuracy, this might result in no solution due to the use of ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on the equation solution but on the geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the space geometry relation, the characteristic lines can be made to coincide by a series of rotations and translations. The transformation matrix can be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot with the calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration.
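The idea of obtaining the transformation geometrically rather than by solving a point-cloud equation system can be sketched as follows: build an orthonormal frame from three matched non-collinear points in each coordinate system and read the rotation off directly, with no least-squares solve. This is a simplified stand-in for the paper's characteristic-line construction, not its actual algorithm.

```python
def frame_from_points(p0, p1, p2):
    # Right-handed orthonormal frame from three non-collinear points:
    # x along p0->p1, z normal to the plane, y completes the triad.
    def sub(a, b): return [a[i] - b[i] for i in range(3)]
    def cross(a, b): return [a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0]]
    def norm(a):
        m = sum(c * c for c in a) ** 0.5
        return [c / m for c in a]
    x = norm(sub(p1, p0))
    z = norm(cross(x, sub(p2, p0)))
    y = cross(z, x)
    return [x, y, z]  # rows are the frame axes

def rigid_transform(src_pts, dst_pts):
    """Rotation R and translation t with dst ~= R @ src + t, built from
    matched frames instead of solving a least-squares equation system."""
    A = frame_from_points(*src_pts)  # axes expressed in source coordinates
    B = frame_from_points(*dst_pts)  # same axes in destination coordinates
    # Both frames are orthonormal, so R = B^T A maps source onto destination
    R = [[sum(B[k][i] * A[k][j] for k in range(3)) for j in range(3)]
         for i in range(3)]
    t = [dst_pts[0][i] - sum(R[i][j] * src_pts[0][j] for j in range(3))
         for i in range(3)]
    return R, t
```

Because no matrix inversion is performed, the degenerate case is geometric (collinear points) rather than numerical (an ill-conditioned system), which mirrors the motivation given in the abstract.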
The Inverse System Method Applied to the Derivation of Power System Non—linear Control Laws
Donghai Li; Xuezhi Jiang; et al.
1997-01-01
The differential geometric method has been applied effectively to a series of power system non-linear control problems. However, a set of differential equations must be solved to obtain the required diffeomorphic transformation, so the derivation of control laws is very complicated. In fact, because of the specificity of power system models, the required diffeomorphic transformation may be obtained directly, so it is unnecessary to solve a set of differential equations. In addition, the inverse system method is in reality equivalent to the differential geometric method and is not limited to affine nonlinear systems. Its physical meaning can be viewed directly, and its deduction needs only algebraic operations and differentiation, so control laws can be obtained easily and the application to engineering is very convenient. The authors of this paper take steam valving control of a power system as a typical case to be studied. It is demonstrated that the control law deduced by the inverse system method is exactly the same as the one obtained by the differential geometric method. This conclusion simplifies the control law derivations for steam valving, excitation, converters and static var compensators by the differential geometric method and may be suited to similar control problems in other areas.
Applied an Efficient Site-directed Mutagenesis Method into Escherichia coli
Muqing Qiu
2011-03-01
Full Text Available A new technique for conducting site-directed mutagenesis was developed. This method allows the color selection of mutants through the simultaneous activation or deactivation of the α-peptide of β-galactosidase. The method can efficiently create mutations at multiple sites simultaneously and can be used to perform multiple rounds of mutation on the same construct. In this paper, in order to develop an efficient in vivo site-directed mutagenesis method, the following procedure was tested. A fragment knocking out the ompR gene was constructed through overlapping PCR, digested by NotI and SalI, and ligated into plasmid pKOV. The recombinant plasmid was transformed into the Escherichia coli WMC-001 strain and integrated into the genomic DNA through two-step homologous recombination. The Escherichia coli WMC-001/ompR- mutant was obtained through gene replacement. The fragment of the mutant ompR gene was amplified through overlapping PCR and cloned into the pKOV vector. The recombinant plasmid was introduced into the Escherichia coli WMC-001/ompR- mutant, and the Escherichia coli WMC-001/ompR mutant was likewise obtained through gene replacement. Results: The site-directed mutagenesis in the ompR gene was confirmed by sequencing. Conclusion: The method is effective for the construction of gene site-directed mutagenesis in vivo.
The overlapping plates method applied to CCD observations of 243 Ida
Owen, W. M., Jr.; Yeomans, D. K.
1994-01-01
The overlapping plates method has been applied to crossing-point Charge Coupled Device (CCD) observations of minor planet 243 Ida to produce absolute position measurements precise to better than 0.1 arcsec and differential position measurements precise to better than 0.06 arcsec. Although these observations numbered only 17 out of the 520 that produced the final ground-based Ida ephemeris for the Galileo spacecraft flyby, their inclusion decreased Ida's downtrack error from 78 to 60 km and its out-of-plane error from 58 to 44 km.
Monthly Monetary Planning for China via Applying Method of Constructing Objective Function
ZENG Jian-hua; YANG Xiao-guang; XU Shan-ying
2001-01-01
Many economic problems can be formulated as optimization problems. Econometricians have long devoted their efforts to constructing econometric equation systems, while the corresponding objective functions have received little attention. In the past twenty years, some techniques for constructing objective functions with economic implications have been developed, which may have potential in economic decision-making. In this paper we apply the method of constructing an objective function to design an optimization model for monthly monetary planning in China. Real monthly data from 1991 to 1999 are used to evaluate the monthly economic situation. Our empirical experiment shows that the model gives good short-term forecasts.
Cork-resin ablative insulation for complex surfaces and method for applying the same
Walker, H. M.; Sharpe, M. H.; Simpson, W. G. (Inventor)
1980-01-01
A method of applying cork-resin ablative insulation material to complex curved surfaces is disclosed. The material is prepared by mixing finely divided cork with a B-stage curable thermosetting resin, forming the resulting mixture into a block, B-stage curing the resin-containing block, and slicing the block into sheets. The B-stage cured sheet is shaped to conform to the surface being insulated, and further curing is then performed. Curing of the resins only to B-stage before shaping enables application of sheet material to complex curved surfaces and avoids limitations and disadvantages presented in handling of fully cured sheet material.
2nd EUROPEAN CONFERENCE ON ELECTROCHEMICAL METHODS APPLIED TO THE CONSERVATION OF ARTWORKS
Domenech Carbo, Mª Teresa; DOMENECH CARBO, ANTONIO
2014-01-01
This book is issued on the occasion of the 2nd European Conference on Electrochemical Methods Applied to the Conservation of Artworks, held in Valencia on 23rd September 2014. The Conference was hosted by the Instituto Universitario de Restauración del Patrimonio of the Universitat Politècnica de València and was organized under the auspices of the Ministerio de Ciencia e Innovación, the Universitat Politècnica de València, the Universitat de València and the Universidad de Grana...
Applying the dynamic cone penetrometer (DCP) design method to low volume roads
Paige-Green, P
2011-07-01
Full Text Available in one hand and assessing the "cohesion". At OMC (damp), the material can be squeezed into a "sausage" that remains intact. In the very dry state (less than about 25% of OMC), the material is dusty and loose and has absolutely no cohesion. In the dry state (about 50% of OMC), the material will have no cohesion when squeezed into a sausage, whereas in the moist state (about 75% of OMC), the material may just...
Making Design Decisions Visible: Applying the Case-Based Method in Designing Online Instruction
Heng Luo
2011-01-01
Full Text Available The instructional intervention in this design case is a self-directed online tutorial that applies the case-based method to teach educators how to design and conduct entrepreneurship programs for elementary school students. In this article, the authors describe the major decisions made in each phase of the design and development process, explicate the rationales behind them, and demonstrate their effect on the production of the tutorial. Based on such analysis, the guidelines for designing case-based online instruction are summarized for the design case.
Method for applying a photoresist layer to a substrate having a preexisting topology
Morales, Alfredo M.; Gonzales, Marcela
2004-01-20
The present invention describes a method for preventing a photoresist layer from delaminating, or peeling away, from the surface of a substrate that already contains an etched three-dimensional structure such as a hole or a trench. The process comprises establishing a saturated vapor phase of the solvent media used to formulate the photoresist layer above the surface of the coated substrate as the applied photoresist is heated in order to "cure" or drive off the retained solvent constituent within the layer. By controlling the rate and manner in which solvent is removed from the photoresist layer, the layer is stabilized and kept from differentially shrinking and peeling away from the substrate.
Effect of various normalization methods on Applied Biosystems expression array system data
Keys David N
2006-12-01
Full Text Available Abstract Background DNA microarray technology provides a powerful tool for characterizing gene expression on a genome scale. While the technology has been widely used in discovery-based medical and basic biological research, its direct application in clinical practice and regulatory decision-making has been questioned. A few key issues, including the reproducibility, reliability, compatibility and standardization of microarray analysis and results, must be critically addressed before any routine usage of microarrays in clinical laboratories and regulated areas can occur. In this study we investigate some of these issues for the Applied Biosystems Human Genome Survey Microarrays. Results We analyzed the gene expression profiles of two samples: brain and universal human reference (UHR), a mixture of RNAs from 10 cancer cell lines, using the Applied Biosystems Human Genome Survey Microarrays. Five technical replicates in three different sites were performed on the same total RNA samples according to the manufacturer's standard protocols. Five different methods, quantile, median, scale, VSN and cyclic loess, were used to normalize AB microarray data within each site. 1,000 genes spanning a wide dynamic range in gene expression levels were selected for real-time PCR validation. Using the TaqMan® assays data set as the reference set, the performance of the five normalization methods was evaluated focusing on the following criteria: (1) sensitivity and reproducibility in detection of expression; (2) fold change correlation with real-time PCR data; (3) sensitivity and specificity in detection of differential expression; (4) reproducibility of differentially expressed gene lists. Conclusion Our results showed a high level of concordance between these normalization methods. This is true regardless of whether signal, detection, variation, fold change measurements or reproducibility were interrogated. Furthermore, we used TaqMan® assays as a reference, to generate
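Of the five normalization methods compared, quantile normalization is the simplest to sketch: every array is forced onto a common reference distribution, the row-wise mean of the sorted columns. A minimal version without tie handling, assuming equal-length expression columns:

```python
def quantile_normalize(columns):
    """Quantile-normalize expression columns (one list per array/replicate).

    Each column is mapped onto the same reference distribution: the mean of
    the sorted columns. Ties are handled only by rank order here, a
    simplification of the full procedure used by microarray toolkits.
    """
    n = len(columns[0])
    sorted_cols = [sorted(c) for c in columns]
    # reference distribution: row-wise mean of the sorted values
    ref = [sum(col[i] for col in sorted_cols) / len(columns)
           for i in range(n)]
    out = []
    for col in columns:
        # rank of each entry within its own column
        order = sorted(range(n), key=lambda i: col[i])
        norm = [0.0] * n
        for rank, idx in enumerate(order):
            norm[idx] = ref[rank]
        out.append(norm)
    return out
```

After normalization, every array has exactly the same distribution of values; only the gene-to-value assignment differs, which is why quantile normalization is a common preprocessing step before cross-site comparisons like those in the study.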
Antibacterial activity of leaves and inter-nodal callus extracts of Mentha arvensis L
JohnsonM; WeselyEG; KavithaMS; UmaV
2011-01-01
Objective: To determine the anti-bacterial efficacy of chloroform, ethanol, ethyl acetate and water extracts of inter-nodal and leaf-derived calli from Mentha arvensis (M. arvensis) against Salmonella typhi (S. typhi), Streptococcus pyogenes (S. pyogenes), Proteus vulgaris (P. vulgaris) and Bacillus subtilis (B. subtilis). Methods: The inter-nodal and leaf segments of M. arvensis were cut into 0.5-0.7 cm lengths and cultured on Murashige and Skoog solid medium supplemented with 3% sucrose, gelled with 0.7% agar, and different concentrations of 2,4-Dichlorophenoxyacetic acid (2,4-D), either alone or in combination. The preliminary phytochemical screening was performed by the method of Brindha et al. Antibacterial efficacy was assessed by the disc diffusion method, with incubation for 24 h at 37 ℃. Results: The maximum percentage of callus formation (inter-nodal segments 84.3±0.78; leaf segments 93.8±1.27) was obtained on Murashige and Skoog's basal medium supplemented with 3% sucrose and 1.5 mg/L of 2,4-D. The ethanol extracts of leaf-derived calli showed greater bio-efficacy than the other solvents. The effect of the leaf- and stem-derived calli extracts on Proteus sp. showed that the plant can be used in the treatment of urinary tract infections associated with Proteus sp. Through the bacterial efficacy studies, it was confirmed that the in vitro raised calli tissue was more effective than the in vivo tissue. Conclusions: The bio-efficacy study confirmed that the calli-derived tissues showed the maximum zone of inhibition. The present study provides a protocol to establish high-potential cell lines by in vitro culture.
Monte Carlo method of radiative transfer applied to a turbulent flame modeling with LES
Zhang, Jin; Gicquel, Olivier; Veynante, Denis; Taine, Jean
2009-06-01
Radiative transfer plays an important role in the numerical simulation of turbulent combustion. However, for the reason that combustion and radiation are characterized by different time scales and different spatial and chemical treatments, the radiation effect is often neglected or roughly modelled. The coupling of a large eddy simulation combustion solver and a radiation solver through a dedicated language, CORBA, is investigated. Two formulations of Monte Carlo method (Forward Method and Emission Reciprocity Method) employed to resolve RTE have been compared in a one-dimensional flame test case using three-dimensional calculation grids with absorbing and emitting media in order to validate the Monte Carlo radiative solver and to choose the most efficient model for coupling. Then the results obtained using two different RTE solvers (Reciprocity Monte Carlo method and Discrete Ordinate Method) applied on a three-dimensional flame holder set-up with a correlated-k distribution model describing the real gas medium spectral radiative properties are compared not only in terms of the physical behavior of the flame, but also in computational performance (storage requirement, CPU time and parallelization efficiency). To cite this article: J. Zhang et al., C. R. Mecanique 337 (2009).
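The Forward Method sampling that the abstract validates against analytic test cases can be sketched for the simplest possible medium: a purely absorbing grey slab, where the Monte Carlo transmittance must converge to the Beer-Lambert value exp(-tau). All parameters are illustrative; real solvers add scattering, emission and spectral (correlated-k) properties.

```python
import math
import random

def slab_transmittance_mc(tau_slab, n_photons=200000, seed=42):
    """Forward Monte Carlo transmittance of a purely absorbing grey slab
    of optical thickness tau_slab, for photons entering normally.

    The analytic answer is exp(-tau_slab); the MC estimator samples each
    photon's absorption free path tau = -ln(1 - U), U uniform in [0, 1).
    """
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_photons):
        # optical path before absorption, sampled from the Beer-Lambert law
        tau = -math.log(1.0 - rng.random())
        if tau > tau_slab:
            transmitted += 1
    return transmitted / n_photons
```

Agreement with exp(-tau) within the statistical error (roughly sqrt(p(1-p)/N)) validates the path sampling, in the same spirit as the one-dimensional flame test case used in the paper to validate its Monte Carlo solver before coupling it to the LES code.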
A Comparison of Parametric and Non-Parametric Methods Applied to a Likert Scale.
Mircioiu, Constantin; Atkinson, Jeffrey
2017-05-10
A trenchant and passionate dispute over the use of parametric versus non-parametric methods for the analysis of Likert scale ordinal data has raged for the past eight decades. The answer is not a simple "yes" or "no" but is related to hypotheses, objectives, risks, and paradigms. In this paper, we took a pragmatic approach. We applied both types of methods to the analysis of actual Likert data on responses from different professional subgroups of European pharmacists regarding competencies for practice. Results obtained show that with "large" (>15) numbers of responses and similar (but clearly not normal) distributions from different subgroups, parametric and non-parametric analyses give in almost all cases the same significant or non-significant results for inter-subgroup comparisons. Parametric methods were more discriminant in the cases of non-similar conclusions. Considering that the largest differences in opinions occurred in the upper part of the 4-point Likert scale (ranks 3 "very important" and 4 "essential"), a "score analysis" based on this part of the data was undertaken. This transformation of the ordinal Likert data into binary scores produced a graphical representation that was visually easier to understand as differences were accentuated. In conclusion, in this case of Likert ordinal data with high response rates, restraining the analysis to non-parametric methods leads to a loss of information. The addition of parametric methods, graphical analysis, analysis of subsets, and transformation of data leads to more in-depth analyses.
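The parametric/non-parametric pairing the study applies can be sketched by computing both test statistics on the same two groups of Likert responses; p-values, and the "score analysis" that collapses ranks 3-4 into binary data, are left to a statistics library. The function name and sample data are illustrative.

```python
def likert_group_tests(g1, g2):
    """Parametric vs non-parametric comparison of two Likert-response
    groups: the Welch t statistic (treating ranks 1..4 as interval data)
    and the Mann-Whitney U statistic (ordinal view, ties counted as 1/2).
    P-values are omitted; in practice use a statistics library for those.
    """
    n1, n2 = len(g1), len(g2)
    m1, m2 = sum(g1) / n1, sum(g2) / n2
    v1 = sum((x - m1) ** 2 for x in g1) / (n1 - 1)  # sample variances
    v2 = sum((x - m2) ** 2 for x in g2) / (n2 - 1)
    t = (m1 - m2) / (v1 / n1 + v2 / n2) ** 0.5      # Welch t statistic
    # U: pairwise wins of group 1 over group 2, ties scored as 0.5
    u = sum(1.0 if a > b else 0.5 if a == b else 0.0
            for a in g1 for b in g2)
    return t, u
```

When U equals n1*n2 (every response in one group exceeds every response in the other), both tests necessarily agree on the direction of the difference, which is the kind of concordance the paper reports for large, similarly distributed subgroups.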
Brezina, Tadej; Graser, Anita; Leth, Ulrich
2017-04-01
Space, and in particular public space for movement and leisure, is a valuable and scarce resource, especially in today's growing urban centres. The distribution and absolute amount of urban space—especially the provision of sufficient pedestrian areas, such as sidewalks—is considered crucial for shaping living and mobility options as well as transport choices. Ubiquitous urban data collection and today's IT capabilities offer new possibilities for providing a relation-preserving overview and for keeping track of infrastructure changes. This paper presents three novel methods for estimating representative sidewalk widths and applies them to the official Viennese streetscape surface database. The first two methods use individual pedestrian area polygons and their geometrical representations of minimum circumscribing and maximum inscribing circles to derive a representative width of these individual surfaces. The third method utilizes aggregated pedestrian areas within the buffered street axis and results in a representative width for the corresponding road axis segment. Results are displayed as city-wide means in a 500 by 500 m grid and spatial autocorrelation based on Moran's I is studied. We also compare the results between methods as well as to previous research, existing databases and guideline requirements on sidewalk widths. Finally, we discuss possible applications of these methods for monitoring and regression analysis and suggest future methodological improvements for increased accuracy.
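The second of the three methods above, a representative width from a maximum inscribing circle, can be approximated by brute force: grid-search interior points of the pedestrian-area polygon for the one farthest from the boundary, and report twice that distance. This is a coarse illustrative stand-in, not the paper's exact geometric construction.

```python
def _dist_point_seg(p, a, b):
    # Euclidean distance from point p to segment ab
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def _inside(p, poly):
    # even-odd ray casting point-in-polygon test
    x, y = p
    inside = False
    for i in range(len(poly)):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

def representative_width(poly, grid=60):
    """Representative width of a pedestrian-area polygon: diameter of an
    approximate maximum inscribed circle, found by grid search over
    interior points (coarse stand-in for the exact construction)."""
    xs, ys = [p[0] for p in poly], [p[1] for p in poly]
    best = 0.0
    for i in range(grid):
        for j in range(grid):
            p = (min(xs) + (max(xs) - min(xs)) * (i + 0.5) / grid,
                 min(ys) + (max(ys) - min(ys)) * (j + 0.5) / grid)
            if _inside(p, poly):
                r = min(_dist_point_seg(p, poly[k], poly[(k + 1) % len(poly)])
                        for k in range(len(poly)))
                best = max(best, r)
    return 2.0 * best
```

For a 10 m by 2 m rectangular sidewalk polygon this returns roughly 2 m, the true inscribed-circle diameter; accuracy improves with grid resolution at quadratic cost, which is why production implementations use exact medial-axis or buffering operations instead.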
Knowledge-Based Trajectory Error Pattern Method Applied to an Active Force Control Scheme
Endra Pitowarno, Musa Mailah, Hishamuddin Jamaluddin
2012-08-01
Full Text Available The active force control (AFC) method is known as a robust control scheme that dramatically enhances the performance of a robot arm, particularly in compensating for disturbance effects. The main task of the AFC method is to estimate the inertia matrix in the feedback loop to provide the correct (motor) torque required to cancel out these disturbances. Several intelligent control schemes have already been introduced to enhance the estimation methods for acquiring the inertia matrix, such as those using neural networks, iterative learning and fuzzy logic. In this paper, we propose an alternative scheme called the Knowledge-Based Trajectory Error Pattern Method (KBTEPM) to suppress the trajectory tracking error of the AFC scheme. The knowledge is developed from the trajectory tracking error characteristic based on previous experimental results of the crude approximation method. It produces a unique, new and desirable error pattern when a trajectory command is forced. An experimental study was performed using simulation work on the AFC scheme with KBTEPM applied to a two-planar manipulator, for which a set of rule-based algorithms is derived. A number of previous AFC schemes are also reviewed as benchmarks. The simulation results show that the AFC-KBTEPM scheme successfully reduces the trajectory tracking error significantly, even in the presence of the introduced disturbances. Key Words: active force control, estimated inertia matrix, robot arm, trajectory error pattern, knowledge-based.
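The AFC core loop the abstract describes, attributing to disturbance whatever torque the inertial model does not explain, can be sketched for a single degree of freedom. With a perfect inertia estimate the disturbance is recovered exactly; all numbers are illustrative, and KBTEPM's knowledge-based refinement is not modelled here.

```python
def afc_disturbance_estimate(tau_applied, acc_measured, inertia_est):
    # AFC core: torque not explained by the inertial model I * qdd is
    # attributed to disturbance and fed back for cancellation.
    return tau_applied - inertia_est * acc_measured

# toy 1-DOF plant: acc = (tau - d_true) / I_true  (values illustrative)
I_true, d_true = 2.0, 0.7
tau = 1.5
acc = (tau - d_true) / I_true
# with a perfect inertia estimate, the true disturbance is recovered
d_hat = afc_disturbance_estimate(tau, acc, inertia_est=I_true)
```

The quality of the scheme therefore hinges entirely on the inertia estimate, which is exactly the quantity the neural-network, iterative-learning, fuzzy-logic and KBTEPM variants in the abstract compete to acquire.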
LEVEL SET METHOD FOR TOPOLOGICAL OPTIMIZATION APPLYING TO STRUCTURE, MECHANISM AND MATERIAL DESIGNS
Mei Yulin; Wang Xiaoming
2004-01-01
Based on a level set model, a topology optimization method has been suggested recently. It uses a level set to express the moving structural boundary, which can flexibly handle complex topological changes. By combining vector level set models with gradient projection technology, the level set method for topological optimization is extended to a topological optimization problem with multi-constraints, multi-materials and multi-load cases. Meanwhile, an appropriate nonlinear speed mapping is established in the tangential space of the active constraints for fast convergence. Then the method is applied to structure designs, mechanism and material designs by a number of benchmark examples. Finally, in order to further improve computational efficiency and overcome the difficulty that the level set method cannot generate new material interfaces during the optimization process, the topological derivative analysis is incorporated into the level set method for topological optimization, and a topological derivative and level set algorithm for topological optimization is proposed.
The generalized method of moments as applied to the generalized gamma distribution
Ashkar, F.; Bobée, B.; Leroux, D.; Morisette, D.
1988-09-01
The generalized gamma (GG) distribution has a density function that can take on many possible forms commonly encountered in hydrologic applications. This fact has led many authors to study the properties of the distribution and to propose various estimation techniques (method of moments, mixed moments, maximum likelihood etc.). We discuss some of the most important properties of this flexible distribution and present a flexible method of parameter estimation, called the "generalized method of moments" (GMM), which combines any three moments of the GG distribution. The main advantage of this general method is that it has many of the previously proposed methods of estimation as special cases. We also give a general formula for the variance of the T-year event X_T obtained by the GMM, along with a general formula for the parameter estimates and also for the covariances and correlation coefficients between any pair of such estimates. By applying the GMM and carefully choosing the order of the moments that are used in the estimation one can significantly reduce the variance of T-year events for the range of return periods that are of interest.
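The GMM estimating equations can be sketched directly from the GG non-central moment formula E[X^r] = β^r Γ(α + r/s)/Γ(α) (one common parameterization; the paper's may differ): pick three moment orders and drive the differences between theoretical and sample moments to zero with a root-finder, which is omitted here.

```python
import math

def gg_moment(r, alpha, beta, s):
    """Non-central moment of order r of the generalized gamma distribution
    with shape alpha, scale beta and power parameter s (one of several
    parameterizations in use): E[X^r] = beta^r * Gamma(alpha + r/s) / Gamma(alpha).
    For s = 1 this reduces to the ordinary gamma distribution."""
    return beta ** r * math.gamma(alpha + r / s) / math.gamma(alpha)

def sample_moment(xs, r):
    # r-th non-central sample moment
    return sum(x ** r for x in xs) / len(xs)

def gmm_residuals(xs, params, orders=(1, 2, 3)):
    """GMM estimating equations: theoretical minus sample moments for the
    three chosen moment orders. A root-finder would drive these to zero;
    the choice of orders controls the variance of the T-year event."""
    alpha, beta, s = params
    return [gg_moment(r, alpha, beta, s) - sample_moment(xs, r)
            for r in orders]
```

The flexibility the abstract emphasizes is visible here: choosing orders (1, 2, 3) recovers the ordinary method of moments, while other (possibly non-integer) orders give the other special cases, and the best choice depends on the return periods of interest.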
Vortex methods with immersed lifting lines applied to LES of wind turbine wakes
Chatelain, Philippe; Bricteux, Laurent; Winckelmans, Gregoire; Koumoutsakos, Petros
2010-11-01
We present the coupling of a vortex particle-mesh method with immersed lifting lines. The method relies on the Lagrangian discretization of the Navier-Stokes equations in vorticity-velocity formulation. Advection is handled by the particles while the mesh allows the evaluation of the differential operators and the use of fast Poisson solvers. We use a Fourier-based fast Poisson solver which simultaneously allows unbounded directions and inlet/outlet boundaries. A lifting line approach models the vorticity sources in the flow. Its immersed treatment efficiently captures the development of vorticity from thin sheets into a three-dimensional field. We apply this approach to the simulation of a wind turbine wake at very high Reynolds number. The combined use of particles and multiscale subgrid models allows the capture of wake dynamics with minimal spurious diffusion and dispersion.
Matrix product states and variational methods applied to critical quantum field theory
Milsted, Ashley; Osborne, Tobias J
2013-01-01
We study the second-order quantum phase transition of massive real scalar field theory with a quartic interaction in (1+1) dimensions on an infinite spatial lattice using matrix product states (MPS). We introduce and apply a naive variational conjugate gradient method, based on the time-dependent variational principle (TDVP) for imaginary time, to obtain approximate ground states, using a related ansatz for excitations to calculate the particle and soliton masses and to obtain the spectral density. We also estimate the central charge using finite-entanglement scaling. Our value for the critical parameter agrees well with recent Monte Carlo results, improving on an earlier study which used the related DMRG method, verifying that these techniques are well-suited to studying critical field systems. We also obtain critical exponents that agree, as expected, with those of the transverse Ising model. Additionally, we treat the special case of uniform product states (mean field theory) separately, showing that they ...
A preliminary analysis on metaheuristics methods applied to the Haplotype Inference Problem
Di Gaspero, Luca
2007-01-01
Haplotype Inference is a challenging problem in bioinformatics that consists in inferring the basic genetic constitution of diploid organisms on the basis of their genotype. This information allows researchers to perform association studies for the genetic variants involved in diseases and the individual responses to therapeutic agents. A notable approach to the problem is to encode it as a combinatorial problem (under certain hypotheses, such as the pure parsimony criterion) and to solve it using off-the-shelf combinatorial optimization techniques. The main methods applied to Haplotype Inference are either simple greedy heuristics or exact methods (Integer Linear Programming, Semidefinite Programming, SAT encoding) that, at present, are adequate only for moderate-size instances. We believe that metaheuristic and hybrid approaches could provide better scalability. Moreover, metaheuristics can be very easily combined with problem-specific heuristics and they can also be integrated with tree-based search techniques...
Islam, M. T.; Trevorah, R. M.; Appadoo, D. R. T.; Best, S. P.; Chantler, C. T.
2017-04-01
We present methodology for the first FTIR measurements of ferrocene using dilute wax solutions for dispersion and to preserve non-crystallinity; a new method for removal of channel spectra interference for high quality data; and a consistent approach for the robust estimation of a defined uncertainty for advanced structural χr² analysis and mathematical hypothesis testing. While some of these issues have been investigated previously, the combination of novel approaches gives markedly improved results. Methods for addressing these in the presence of a modest signal and how to quantify the quality of the data irrespective of preprocessing for subsequent hypothesis testing are applied to the FTIR spectra of Ferrocene (Fc) and deuterated ferrocene (dFc, Fc-d10) collected at the THz/Far-IR beam-line of the Australian Synchrotron at operating temperatures of 7 K through 353 K.
A method for high purity sorting of rare cell subsets applied to TDC.
Kuka, Mirela; Ashwell, Jonathan D
2013-12-31
T(DC) are a recently described subset of polyclonal αβ T-cells with dendritic cell properties. Because of their low number in peripheral immune compartments, isolation and characterization of T(DC) with existing purification methods are technically challenging. Here we describe a customized gating strategy and a flow cytometry-based cell sorting protocol for isolation of T(DC). The protocol was developed because, despite very conservative gating for dead-cell and doublet exclusion, cells obtained with normal sorting procedures were enriched for T(DC) but not pure. Re-sorting the output of the first round of sorting results in highly pure T(DC). Cells obtained with this method are viable and can be used for in vitro characterization. Moreover, this double-round sorting strategy can be universally applied to the isolation of other rare cell subsets.
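The benefit of the second sorting round can be seen with a toy enrichment model; the capture and leak rates below are hypothetical illustrations, not measured values from the protocol:

```python
def purity_after_rounds(freq0, capture=0.95, leak=0.02, rounds=2):
    """Toy enrichment model: each sorting round keeps a fraction
    `capture` of target cells and `leak` of non-target cells.
    Returns the target-cell purity after the given number of rounds."""
    target, other = freq0, 1.0 - freq0
    for _ in range(rounds):
        target *= capture
        other *= leak
    return target / (target + other)

# A rare subset at 0.5% of input: one sort enriches, the re-sort purifies
print(round(purity_after_rounds(0.005, rounds=1), 3),
      round(purity_after_rounds(0.005, rounds=2), 3))  # → 0.193 0.919
```

Even a small per-round leak rate dominates the output when the starting frequency is very low, which is why a single pass yields enriched but impure T(DC) and a second pass is needed.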
Time Series Analysis Methods Applied to the Super-Kamiokande I Data
Ranucci, G
2005-01-01
The need to unravel modulations hidden in noisy time series of experimental data is a well-known problem, traditionally attacked through a variety of methods, among which a popular tool is the so-called Lomb-Scargle periodogram. Recently, for a class of problems in the solar neutrino field, an alternative maximum-likelihood-based approach has been proposed, intended to overcome some intrinsic limitations affecting the Lomb-Scargle implementation. This work focuses on the features of the likelihood methodology, introducing in particular an analytical approach to assess the quantitative significance of potential modulation signals. As an example, the proposed method is applied to the time series of the measured values of the 8B neutrino flux released by the Super-Kamiokande collaboration, and the results are compared with those of previous analyses performed on the same data sets. In the appendix, for completeness, the relationship between the Lomb-Scargle and the likelihood approaches is also examined in detail.
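For reference, the classic Lomb-Scargle periodogram that the likelihood method is contrasted against can be sketched in a few lines of stdlib Python; the unevenly sampled signal and frequency grid below are synthetic:

```python
import math
import random

def lomb_scargle(t, x, freqs):
    """Classic Lomb-Scargle periodogram for unevenly sampled data.
    t: sample times, x: values, freqs: trial frequencies (Hz)."""
    xm = sum(x) / len(x)
    xc = [v - xm for v in x]
    power = []
    for f in freqs:
        w = 2.0 * math.pi * f
        # Phase offset tau makes the estimate invariant to time shifts
        tau = math.atan2(sum(math.sin(2 * w * ti) for ti in t),
                         sum(math.cos(2 * w * ti) for ti in t)) / (2 * w)
        c = [math.cos(w * (ti - tau)) for ti in t]
        s = [math.sin(w * (ti - tau)) for ti in t]
        cterm = sum(xi * ci for xi, ci in zip(xc, c)) ** 2 / sum(ci * ci for ci in c)
        sterm = sum(xi * si for xi, si in zip(xc, s)) ** 2 / sum(si * si for si in s)
        power.append(0.5 * (cterm + sterm))
    return power

# Unevenly sampled pure sinusoid at 0.2 Hz over 100 s
random.seed(1)
t = sorted(random.uniform(0, 100) for _ in range(200))
x = [math.sin(2 * math.pi * 0.2 * ti) for ti in t]
freqs = [0.01 * k for k in range(1, 50)]
p = lomb_scargle(t, x, freqs)
best = freqs[p.index(max(p))]
print(best)  # peak at the true 0.2 Hz
```

The likelihood approach discussed in the abstract replaces this spectral statistic with a full probabilistic model, which allows an analytical significance assessment of candidate modulations.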
Problem Solving Method and Change Management in Universities (Applied Case: Jordan)
Mohammad Alaya
2012-02-01
Full Text Available TQM is a style of management that has been practiced for several decades all over the world and is receiving growing attention; some colleges and universities are beginning to recognize that TQM values, a term originally coined by Feigenbaum (1983) and also used in higher education, are compatible with higher education's aim that the services provided be of the highest quality. The purpose of this study was to provide an overview of TQM through the problem-solving method, its feasibility for higher education and academic libraries, and the results of its implementation by colleges and universities; change management, in turn, helps to ensure success. A questionnaire was designed to measure the knowledge and perception of academic library directors and department heads. Each college has a framework, named strategic planning, concerned with the problem-solving method. In the initial stage of a process-improvement program, quick results are often obtained because the solutions are obvious or someone has a brilliant idea; in the long term, however, a systematic approach yields the greatest benefits. In this research the scientific method was applied to structure the improvement; in fact, control charts can be used effectively in more than one step of the method. Process improvement is the main goal, and in addition the management of changes is mapped as a way to improve the process and to increase the satisfaction of those performing it. The research is therefore divided into three parts: the first deals with the problem-solving method and how to utilize it in colleges; the second addresses change management; the third applies the approach among university staff (120 respondents involved in education). The data analysis yields the following result: a significant difference was found among the respondents concerning their opinions (members of college staffs), indicating that there is evidence...
Sills, Erin O; Herrera, Diego; Kirkpatrick, A Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander
2015-01-01
Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts' selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal "blacklist" that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on policies
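A minimal sketch of the weight-selection idea behind SCM follows; random search over the simplex stands in for the constrained regression actually used in SCM, and all series are invented, not Paragominas data:

```python
import random

def synthetic_control_weights(treated_pre, donors_pre, n_draws=20000, seed=0):
    """Pick donor weights (nonnegative, summing to 1) that minimize the
    pre-intervention mean squared error between the treated unit and the
    weighted donor combination. Random search over the simplex is a toy
    stand-in for the constrained optimization used in real SCM."""
    rng = random.Random(seed)
    k = len(donors_pre)
    best_w, best_err = None, float("inf")
    for _ in range(n_draws):
        raw = [rng.expovariate(1.0) for _ in range(k)]  # uniform on simplex
        s = sum(raw)
        w = [r / s for r in raw]
        err = 0.0
        for p, y in enumerate(treated_pre):
            synth = sum(w[j] * donors_pre[j][p] for j in range(k))
            err += (y - synth) ** 2
        err /= len(treated_pre)
        if err < best_err:
            best_w, best_err = w, err
    return best_w, best_err

# Treated unit is (secretly) 0.7*donor0 + 0.3*donor1 in the pre-period
donors = [[10, 12, 11, 13], [2, 3, 2, 4], [30, 28, 29, 31]]
treated = [0.7 * a + 0.3 * b for a, b in zip(donors[0], donors[1])]
w, err = synthetic_control_weights(treated, donors)
```

After the intervention date, the same weights are frozen and the weighted donor outcome serves as the counterfactual, so the treatment effect is read off as the gap between the treated unit and its synthetic control.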
Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.
Nadia Said
Full Text Available Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
Murillo Ferreira Dos Santos
2014-05-01
Full Text Available The industrial field is continually growing, which leads some systems to improve the techniques used in their manufacturing. As a consequence, level systems have become an important part of many plants and need to be studied more specifically to obtain the optimal controlled response. It is known that a good controlled response is obtained when the system is identified correctly. The objective of this paper is therefore to present a didactic project of modeling and an identification method applied to a level system, using a didactic rig with the Foundation Fieldbus protocol developed by the SMAR® enterprise, belonging to CEFET-MG Campus III, Leopoldina, Brazil. The experiments were implemented using the least squares method to identify the system dynamics, and the results were obtained using the OPC toolbox from MATLAB/Simulink® to establish communication between the computer and the system. The modeling and identification results were satisfactory, showing that the applied technique can be used to approximate the level system's dynamics with a second-order transfer function.
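The least-squares identification step can be illustrated with a discrete-time second-order ARX fit solved via the normal equations. This is a stdlib sketch with invented plant coefficients; the actual rig used MATLAB/Simulink and the OPC toolbox:

```python
import random

def fit_second_order_arx(u, y):
    """Least-squares fit of y[k] = a1*y[k-1] + a2*y[k-2] + b*u[k-1],
    a discrete-time second-order model, via the normal equations."""
    rows = [([y[k-1], y[k-2], u[k-1]], y[k]) for k in range(2, len(y))]
    # Build A^T A (3x3) and A^T y (3)
    ata = [[sum(r[i] * r[j] for r, _ in rows) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * t for r, t in rows) for i in range(3)]
    # Solve with Gauss-Jordan elimination (partial pivoting)
    m = [ata[i] + [aty[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

# Simulate a known second-order plant and recover its coefficients
a1, a2, b = 1.5, -0.7, 0.5
random.seed(0)
u = [random.uniform(-1, 1) for _ in range(300)]
y = [0.0, 0.0]
for k in range(2, 300):
    y.append(a1 * y[k-1] + a2 * y[k-2] + b * u[k-1])
est = fit_second_order_arx(u, y)
print([round(v, 3) for v in est])  # → [1.5, -0.7, 0.5]
```

With noise-free data the fit is exact; on real level-system measurements the same regression yields the best second-order approximation in the least-squares sense.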
Hayashi, K.
2014-12-01
Engineers need more quantitative information. In order to apply geophysical methods to engineering design works, quantitative interpretation is very important. The presentation introduces several case studies from different countries around the world (Fig. 2) from the integrated and quantitative points of view.
V. I. Freyman
2015-11-01
Full Text Available Subject of Research. The features of representing education results for competence-based educational programs are analyzed. The importance of decoding and proficiency estimation for the elements and components of discipline parts of competences is shown, and the purpose and objectives of the research are formulated. Methods. The paper applies methods of mathematical logic, Boolean algebra, and parametric analysis of complex diagnostic test results that control the proficiency of discipline competence elements. Results. A method of logical-condition analysis is created. It makes it possible to formulate logical conditions for determining the proficiency of each discipline competence element controlled by a complex diagnostic test. The normalized test result is divided into non-crossing zones, and for each of them a logical condition about the proficiency of the controlled elements is formulated. Summarized characteristics for the test result zones are given, and an example of forming logical conditions for a diagnostic test with preset features is provided. Practical Relevance. The proposed method of logical-condition analysis is applied in the decoding algorithm of proficiency test diagnosis for discipline competence elements. It makes it possible to automate the search for elements with insufficient proficiency, and it is also usable for estimating the education results of a discipline or a component of a competence-based educational program.
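The zone-decoding idea can be sketched as a lookup from a normalized test score to per-element proficiency verdicts; the zone boundaries and the elements E1/E2 below are invented for illustration, not the paper's actual conditions:

```python
def decode_zones(score, zones):
    """Map a normalized test result to a proficiency verdict per
    competence element by locating the score in non-crossing zones.
    `zones` is a list of (lower, upper, verdict) with lower <= score < upper."""
    for lo, hi, verdict in zones:
        if lo <= score < hi:
            return verdict
    raise ValueError("score outside all zones")

# Hypothetical three-zone decoding for a test covering elements E1 and E2
zones = [(0.0, 0.5, {"E1": False, "E2": False}),
         (0.5, 0.8, {"E1": True,  "E2": False}),
         (0.8, 1.01, {"E1": True,  "E2": True})]
print(decode_zones(0.65, zones))  # → {'E1': True, 'E2': False}
```

Because the zones are non-crossing, each score maps to exactly one logical condition, which is what allows the search for insufficiently mastered elements to be automated.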
A Novel Microaneurysms Detection Method Based on Local Applying of Markov Random Field.
Ganjee, Razieh; Azmi, Reza; Moghadam, Mohsen Ebrahimi
2016-03-01
Diabetic Retinopathy (DR) is one of the most common complications of long-term diabetes. It is a progressive disease and, by damaging the retina, finally results in blindness. Since Microaneurysms (MAs) appear as a first sign of DR in the retina, early detection of this lesion is an essential step in automatic detection of DR. In this paper, a new MA detection method is presented. The proposed approach consists of two main steps. In the first step, MA candidates are detected based on local application of a Markov random field model (MRF). In the second step, these candidate regions are categorized to identify the correct MAs using 23 features based on the shape, intensity and Gaussian distribution of MA intensity. The proposed method is evaluated on DIARETDB1, a standard and publicly available database in this field. Evaluation on this database resulted in an average sensitivity of 0.82 for a confidence level of 75 as ground truth. The results show that our method is able to detect low-contrast MAs against the background while its performance remains comparable to other state-of-the-art approaches.
An IMU-to-Body Alignment Method Applied to Human Gait Analysis
Laura Susana Vargas-Valencia
2016-12-01
Full Text Available This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis.
KRON's Method Applied to the Study of Electromagnetic Interference Occurring in Aerospace Systems
Leman, S.; Reineix, A.; Hoeppe, F.; Poiré, Y.; Mahoudi, M.; Démoulin, B.; Üstüner, F.; Rodriquez, V. P.
2012-05-01
In this paper, spacecraft and aircraft mock-ups are used to assess the performance of Kron-based tools applied to the simulation of large EMC systems. These tools aim to assist engineers in the design phase of complex systems by efficiently evaluating the EM disturbances between antennas, electronic equipment, and Portable Electronic Devices (PEDs) found in large systems. We use a topological analysis of the system to model independent sub-volumes such as antennas, cables, equipment, PEDs and cavity walls. Each of these sub-volumes is modelled by an appropriate method, which can be based on, for example, analytical expressions, transmission-line theory or other numerical tools such as the full-wave FDFD method. This representation, associated with the electrical tensorial method of G. Kron, leads to reasonable simulation times (typically a few minutes) and accurate results. Because equivalent sub-models are built separately, the main originality of this method is that each sub-volume can easily be replaced by another one without rebuilding the entire system. Comparisons between measurements and simulations will also be presented.
Brucellosis Prevention Program: Applying “Child to Family Health Education” Method
H. Allahverdipour
2010-04-01
Full Text Available Introduction & Objective: Pupils have great potential to increase community awareness and promote community health through participating in health education programs. The child-to-family health education program is one of the communicative strategies applied in this field trial study. Because of the high prevalence of Brucellosis in Hamadan province, Iran, the aim of this study was to improve families' knowledge and preventive behaviors regarding Brucellosis in rural areas by using the child-to-family health education method. Materials & Methods: In this nonequivalent control group design study, three rural schools were chosen (one as intervention and two others as control). First, the knowledge and behavior of families about Brucellosis were determined using a designed questionnaire. Then the families were educated through the child-to-family procedure: the students received the information and were instructed to teach their parents what they had learned. Three months after the last education session, the knowledge and behavior changes of the families about Brucellosis were determined and analyzed by paired t-test. Results: The results showed significant improvement in the mothers' knowledge. The mothers' knowledge about the signs of Brucellosis in humans increased from 1.81 to 3.79 (t = -21.64, sig. = 0.000), and their knowledge of the signs of Brucellosis in animals increased from 1.48 to 2.82 (t = -10.60, sig. = 0.000). Conclusion: The child-to-family health education program is an effective and available method that would be useful in most communities; students' potential can be effectively applied in health promotion programs.
Koivistoinen Teemu
2007-01-01
Full Text Available As we know, singular value decomposition (SVD) is designed for computing singular values (SVs) of a matrix. If it is used for finding the SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value, which is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction method which we call "time-frequency moments singular value decomposition (TFM-SVD)." In this new method, we use statistical features of the time series as well as the frequency series (Fourier transform) of the signal. This information is extracted into a matrix with a fixed structure, and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) to ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph developed in our project. This kind of device, combined with automated recording and analysis, would be suitable for use in many places, such as the home or office. The results show that the method has high performance and is almost insensitive to BCG waveform latency or nonlinear disturbance.
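A rough stdlib sketch of the TFM-SVD idea follows; the exact moment set and matrix layout are guesses for illustration, not the authors' specification. Time-domain and frequency-domain moments are stacked into a small matrix whose singular values serve as features:

```python
import cmath
import math

def moments(xs):
    """Mean, variance, and standardized skewness and kurtosis of a sequence."""
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    sd = math.sqrt(var) or 1.0  # avoid division by zero for constant input
    skew = sum(((x - mu) / sd) ** 3 for x in xs) / n
    kurt = sum(((x - mu) / sd) ** 4 for x in xs) / n
    return [mu, var, skew, kurt]

def tfm_svd_features(signal):
    """TFM-SVD-style feature sketch: stack time-domain and frequency-domain
    moments into a 2x4 matrix A and return its two singular values
    (square roots of the eigenvalues of A A^T, closed form for 2x2)."""
    n = len(signal)
    spectrum = [abs(sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                        for i, x in enumerate(signal)))
                for k in range(n)]  # naive DFT magnitudes
    a = [moments(signal), moments(spectrum)]
    p = sum(v * v for v in a[0])
    q = sum(v * v for v in a[1])
    r = sum(u * v for u, v in zip(a[0], a[1]))
    mean = (p + q) / 2
    d = math.sqrt(((p - q) / 2) ** 2 + r * r)
    return [math.sqrt(max(mean + d, 0.0)), math.sqrt(max(mean - d, 0.0))]

sig = [math.sin(2 * math.pi * 5 * t / 64) for t in range(64)]
s1, s2 = tfm_svd_features(sig)
print(s1 >= s2 >= 0)  # → True
```

The fixed matrix structure is what lets SVD return more than one informative singular value, unlike applying SVD directly to the raw 1-by-m sample array.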
Akhbardeh, Alireza; Junnila, Sakari; Koivuluoma, Mikko; Koivistoinen, Teemu; Värri, Alpo
2006-12-01
As we know, singular value decomposition (SVD) is designed for computing singular values (SVs) of a matrix. Then, if it is used for finding SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value that is not enough to express the whole signal. To overcome this problem, we designed a new kind of the feature extraction method which we call ''time-frequency moments singular value decomposition (TFM-SVD).'' In this new method, we use statistical features of time series as well as frequency series (Fourier transform of the signal). This information is then extracted into a certain matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results in using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) for ballistocardiogram (BCG) data clustering to look for probable heart disease of six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph, developed in our project. This kind of device combined with automated recording and analysis would be suitable for use in many places, such as home, office, and so forth. The results show that the method has high performance and it is almost insensitive to BCG waveform latency or nonlinear disturbance.
Vesisenaho, T. [VTT Energy, Jyvaeskylae (Finland); Liukkonen, S. [VTT Manufacturing Technology, Espoo (Finland)
1997-12-01
The objective of this project is to apply the whole-tree harvesting method to Finnish timber harvesting conditions in order to lower the harvesting costs of energy wood and timber in spruce-dominant final cuttings. In Finnish conditions, timber harvesting is normally based on the log-length method. Because of small landings and the high share of thinning cuttings, whole-tree skidding methods cannot be utilised extensively. The share of stands which could be harvested with the whole-tree skidding method turned out to be about 10 % of the total harvesting amount of 50 mill. m³. The corresponding harvesting potential of energy wood is 0.25 Mtoe. The aim of the structural measurements made in this project was to obtain information about the effect of different hauling methods on the structural response of the tractor, and thus reveal the possible special requirements that whole-tree skidding places on forest tractor design. Altogether 7 strain-gauge-based sensors were mounted on the rear frame structures and drive shafts of the forest tractor. Five strain gauges measured local strains in critical details and two sensors measured the torque moments of the front and rear bogie drive shafts. The revolution speed of the rear drive shaft was also recorded. Signal time histories, maximum peaks, time-at-level distributions and rainflow distributions were gathered in different hauling modes. From these, maximum values, average stress levels and fatigue life estimates were calculated for each mode, and a comparison of the different methods from the structural point of view was performed.
Anomalous contagion and renormalization in networks with nodal mobility
Manrique, Pedro D.; Qi, Hong; Zheng, Minzhang; Xu, Chen; Hui, Pak Ming; Johnson, Neil F.
2016-07-01
A common occurrence in everyday human activity is where people join, leave and possibly rejoin clusters of other individuals —whether this be online (e.g. social media communities) or in real space (e.g. popular meeting places such as cafes). In the steady state, the resulting interaction network would appear static over time if the identities of the nodes are ignored. Here we show that even in this static steady-state limit, a non-zero nodal mobility leads to a diverse set of outbreak profiles that is dramatically different from known forms, and yet matches well with recent real-world social outbreaks. We show how this complication of nodal mobility can be renormalized away for a particular class of networks.
Nodal failure index approach to groundwater remediation design
Lee, J.; Reeves, H.W.; Dowding, C.H.
2008-01-01
Computer simulations often are used to design and to optimize groundwater remediation systems. We present a new computationally efficient approach that calculates the reliability of remedial design at every location in a model domain with a single simulation. The estimated reliability and other model information are used to select a best remedial option for given site conditions, conceptual model, and available data. To evaluate design performance, we introduce the nodal failure index (NFI) to determine the number of nodal locations at which the probability of success is below the design requirement. The strength of the NFI approach is that selected areas of interest can be specified for analysis and the best remedial design determined for this target region. An example application of the NFI approach using a hypothetical model shows how the spatial distribution of reliability can be used for a decision support system in groundwater remediation design. © 2008 ASCE.
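The NFI itself reduces to a simple count over nodes; a minimal sketch with hypothetical nodal reliabilities and a 0.90 design requirement (the values are invented, not from the paper's model):

```python
def nodal_failure_index(success_prob, required):
    """Nodal failure index: the number of nodes whose probability of
    remediation success falls below the design requirement."""
    return sum(1 for p in success_prob if p < required)

# Estimated reliability at 8 nodes of a target region
probs = [0.99, 0.95, 0.80, 0.97, 0.60, 0.92, 0.88, 0.99]
print(nodal_failure_index(probs, required=0.90))  # → 3
```

Comparing the NFI of competing remedial designs over a specified target region then identifies the design with the fewest under-performing locations.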
About Nodal Systems for Lagrange Interpolation on the Circle
E. Berriochoa
2012-01-01
We study the convergence of the Laurent polynomials of Lagrange interpolation on the unit circle for continuous functions satisfying a condition on their modulus of continuity. The novelty of the result is that the nodal systems are more general than those constituted by the n roots of complex unimodular numbers, and the class of functions is different from those usually studied. Moreover, some consequences for Lagrange interpolation on [-1,1] and Lagrange trigonometric interpolation are obtained.
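For the classical nodal system that the paper generalizes (the n-th roots of unity), the interpolating polynomial can be computed by discrete Fourier inversion. A minimal sketch of that baseline case only, not of the paper's more general nodal systems:

```python
import cmath

def interpolant_on_roots_of_unity(f, n):
    """Coefficients of the degree-(n-1) polynomial interpolating f at the
    n-th roots of unity, obtained by discrete Fourier inversion."""
    nodes = [cmath.exp(2j * cmath.pi * j / n) for j in range(n)]
    vals = [f(z) for z in nodes]
    coeffs = [sum(vals[j] * nodes[j] ** (-k) for j in range(n)) / n
              for k in range(n)]
    return nodes, coeffs

def eval_poly(coeffs, z):
    return sum(c * z ** k for k, c in enumerate(coeffs))

# The interpolant reproduces f at every node of the system
nodes, coeffs = interpolant_on_roots_of_unity(lambda z: z.real, 8)
print(max(abs(eval_poly(coeffs, z) - z.real) for z in nodes))  # tiny (machine precision)
```

The interpolation property follows because the sum over k of (z_l / z_j)^k equals n when l = j and 0 otherwise; for general nodal systems the Lagrange basis no longer has this closed form, which is what motivates the paper's analysis.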
Winchester, David E; Burkart, Thomas A; Choi, Calvin Y; McKillop, Matthew S; Beyth, Rebecca J; Dahm, Phillipp
2016-06-01
Training in quality improvement (QI) is a pillar of the Next Accreditation System of the Accreditation Council for Graduate Medical Education and a growing expectation of physicians for maintenance of certification. Despite this, many postgraduate medical trainees are not receiving training in QI methods. We created the Fellows Applied Quality Training (FAQT) curriculum for cardiology fellows using both didactic and applied components, with the goal of increasing confidence to participate in future QI projects. Fellows completed didactic training from the Institute for Healthcare Improvement's Open School and then designed and completed a project to improve quality of care or patient safety. Self-assessments were completed by the fellows before, during, and after the first year of the curriculum. The primary outcome for our curriculum was the median score reported by the fellows regarding their self-confidence to complete QI activities. Self-assessments were completed by 23 fellows. The majority of fellows (15 of 23, 65.2%) reported no prior formal QI training. The median score on the baseline self-assessment was 3.0 (range, 1.85-4), which significantly increased to 3.27 (range, 2.23-4; P = 0.004) on the final assessment. The distribution of scores reported by the fellows indicates that 30% were only slightly confident at conducting QI activities on their own, which was reduced to 5% after completing the FAQT curriculum. An interim assessment was conducted after the fellows completed didactic training only; median scores were not different from baseline (median, 3.0; P = 0.51). After completion of the FAQT, cardiology fellows reported higher self-confidence to complete QI activities. The increase in self-confidence seemed to be limited to the applied component of the curriculum, with no significant change after the didactic component.
Jurado, Maria Jose; Teixido, Teresa; Martin, Elena; Segarra, Miguel; Segura, Carlos
2013-04-01
In the frame of the research conducted to develop efficient strategies for the investigation of rock properties and fluids ahead of tunnel excavations, the seismic interferometry method was applied to analyze data acquired in boreholes instrumented with geophone strings. The results obtained confirmed that seismic interferometry provided an improved resolution of petrophysical properties to identify heterogeneities and geological structures ahead of the excavation. These features are beyond the resolution of other conventional geophysical methods but can cause severe problems in the excavation of tunnels. Geophone strings were used to record different types of seismic noise generated at the tunnel head during excavation with a tunnelling machine and also during the placement of the rings covering the tunnel excavation. In this study we show how tunnel construction activities have been characterized as a source of seismic signal and used in our research as the seismic source signal for generating a 3D reflection seismic survey. The data were recorded in a vertical, water-filled borehole with a borehole seismic string at a distance of 60 m from the tunnel trace. A reference pilot signal was obtained from seismograms acquired close to the tunnel face in order to obtain the best signal-to-noise ratio for use in the interferometry processing (Poletto et al., 2010). The seismic interferometry method (Claerbout 1968) was successfully applied to image the subsurface geological structure using the seismic wave field generated by tunnelling (tunnelling machine and construction activities) recorded with geophone strings. This technique was applied by simulating virtual shot records related to the number of receivers in the borehole with the seismic transmitted events, and processing the data as a reflection seismic survey. The pseudo-reflective wave field was obtained by cross-correlation of the transmitted wave data. We applied the relationship between the transmission
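At the heart of this kind of interferometric processing is the cross-correlation of traces, which turns a receiver into a virtual source: the correlation peak of a recorded trace with the pilot signal recovers the relative travel time. A toy illustration with synthetic traces (not the authors' processing chain):

```python
def xcorr(a, b, max_lag):
    """Cross-correlation of equal-length traces for lags -max_lag..max_lag."""
    n = len(a)
    out = []
    for lag in range(-max_lag, max_lag + 1):
        s = sum(a[i + lag] * b[i] for i in range(n) if 0 <= i + lag < n)
        out.append(s)
    return out

# Pilot wavelet and a copy delayed by 5 samples (a "transmitted" arrival)
pilot = [0.0] * 20
pilot[3], pilot[4], pilot[5] = 0.5, 1.0, 0.5
trace = [0.0] * 20
for i, v in enumerate(pilot):
    if i + 5 < 20:
        trace[i + 5] = v

c = xcorr(trace, pilot, 10)
lags = list(range(-10, 11))
print(lags[c.index(max(c))])  # -> 5, the delay between trace and pilot
```

Repeating this correlation for every receiver pair produces the pseudo-reflection (virtual shot) gathers that are then processed like a conventional reflection survey.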
Masè, Michela; Glass, Leon; Disertori, Marcello; Ravelli, Flavia
2012-11-15
The genesis of complex ventricular rhythms during atrial tachyarrhythmias in humans is not fully understood. To clarify the dynamics of atrioventricular (AV) conduction in response to a regular high-rate atrial activation, 29 episodes of spontaneous or pacing-induced atrial flutter (AFL), covering a wide range of atrial rates (cycle lengths from 145 to 270 ms), were analyzed in 10 patients. AV patterns were identified by applying firing sequence and surrogate data analysis to atrial and ventricular activation series, whereas modular simulation with a difference-equation AV node model was used to correlate the patterns with specific nodal properties. AV node response at high atrial rate was characterized by 1) AV patterns of decreasing conduction ratios at the shortening of atrial cycle length (from 236.3 ± 32.4 to 172.6 ± 17.8 ms) according to a Farey sequence ordering (conduction ratio from 0.34 ± 0.12 to 0.23 ± 0.06; P AV block patterns occurring during regular atrial tachyarrhythmias. The characterization of AV nodal function during different AFL forms constitutes an intermediate step toward the understanding of complex ventricular rhythms during atrial fibrillation.
Universal Components of Random Nodal Sets
Gayet, Damien; Welschinger, Jean-Yves
2016-11-01
We give, as L grows to infinity, an explicit lower bound of order L^{n/m} for the expected Betti numbers of the vanishing locus of a random linear combination of eigenvectors of P with eigenvalues below L. Here, P denotes an elliptic self-adjoint pseudo-differential operator of order m > 0, bounded from below and acting on the sections of a Riemannian line bundle over a smooth closed n-dimensional manifold M equipped with some Lebesgue measure. In fact, for every closed hypersurface Σ of R^n, we prove that there exists a positive constant p_Σ depending only on Σ, such that for every large enough L and every x ∈ M, a component diffeomorphic to Σ appears with probability at least p_Σ in the vanishing locus of a random section and in the ball of radius L^{-1/m} centered at x. These results apply in particular to Laplace-Beltrami and Dirichlet-to-Neumann operators.
Wesely Edward Gnanaraj; Johnson MarimuthuAntonisamy; Mohanamathi RB
2012-01-01
Objective: To develop reproducible in vitro propagation protocols for the medicinally important plants Achyranthes aspera (A. aspera) L. and Achyranthes bidentata (A. bidentata) Blume using nodal segments as explants. Methods: Young shoots of A. aspera and A. bidentata were harvested, washed with running tap water, treated with 0.1% bavistin and rinsed twice with distilled water. The explants were then surface sterilized with 0.1% (w/v) HgCl2 solution for 1 min. After rinsing with sterile distilled water 3-4 times, nodal segments were cut into smaller segments (1 cm) and used as the explants. The explants were placed horizontally as well as vertically on solid basal Murashige and Skoog (MS) medium supplemented with 3% sucrose, 0.6% (w/v) agar (Hi-Media, Mumbai) and different concentrations and combinations of 6-benzyl amino purine (BAP), kinetin (Kin), naphthalene acetic acid (NAA) and indole acetic acid (IAA) for direct regeneration. Results: Adventitious proliferation was obtained from A. aspera and A. bidentata nodal segments inoculated on MS basal medium with 3% sucrose and augmented with BAP and Kin, with varied frequency. MS medium augmented with 3.0 mg/L of BAP showed the highest percentage of shootlet formation for A. aspera (93.60±0.71) and for A. bidentata (94.70±0.53). The maximum number of shoots per explant was observed in MS medium fortified with 5.0 mg/L of BAP: 10.60±0.36 for A. aspera and 9.50±0.56 for A. bidentata. For A. aspera, the maximum mean length of shootlets (5.50±0.34) was obtained in MS medium augmented with 3.0 mg/L of Kin; for A. bidentata (5.40±0.61) it was observed at the same concentration. The highest percentage, maximum number of rootlets per shootlet and mean length of rootlets were observed in 1/2 MS medium supplemented with 1.0 mg/L of IBA. Seventy percent of plants were successfully established in polycups, and sixty-eight percent were well established under greenhouse conditions.
A useful method to overcome the difficulties of applying silicone gel sheet on irregular surfaces.
Grella, Roberto; Nicoletti, Gianfranco; D'Ari, Antonio; Romanucci, Vincenza; Santoro, Mariangela; D'Andrea, Francesco
2015-04-01
To date, silicone gel and silicone occlusive plates are the most useful and effective treatment options for hypertrophic scars (surgical and traumatic). Use of silicone sheeting has also been demonstrated to be effective in the treatment of minor keloids in association with corticosteroid intralesional infiltration. In our practice, we encountered four problems: maceration, rashes, pruritus and infection. Not all patients are able to tolerate the cushion, especially children, and certain anatomical regions such as the face and the upper chest are not easy to dress for obvious social, psychological and aesthetic reasons. In other anatomical regions, it is also difficult to obtain adequate compression and occlusion of the scar. To overcome such problems of applying silicone gel sheeting, we tested the use of liquid silicone gel (LSG) in the treatment of 18 linear hypertrophic scars (HS group) and 12 minor keloids (KS group) as an alternative to silicone gel sheeting or cushion. Objective parameters (volume, thickness and colour) and subjective symptoms such as pain and pruritus were examined. Evaluations were made when the therapy started and after 30, 90 and 180 days of follow-up. After 90 days of treatment with silicone gel alone (two applications daily), the HS group showed a significant improvement in terms of volume decrease, reduced inflammation and redness, and improved elasticity. In conclusion, on the basis of our clinical data, we find LSG to be a useful method to overcome the difficulties of applying silicone gel sheeting on irregular surfaces.
Bamberger, Katharine T
2016-03-01
The use of intensive longitudinal methods (ILM), rapid in situ assessment at micro timescales, can be overlaid on RCTs and other study designs in applied family research. Particularly, when done as part of a multiple-timescale design, in bursts over macro timescales, ILM can advance the study of the mechanisms and effects of family interventions and processes of family change. ILM confers measurement benefits in accurately assessing momentary and variable experiences and captures fine-grained dynamic pictures of time-ordered processes. Thus, ILM allows opportunities to investigate new research questions about intervention effects on within-subject (i.e., within-person, within-family) variability (i.e., dynamic constructs) and about the time-ordered change process that interventions induce in families and family members beginning with the first intervention session. This paper discusses the need and rationale for applying ILM to family intervention evaluation, new research questions that can be addressed with ILM, and example research using ILM in the related fields of basic family research and the evaluation of individual-based interventions. Finally, the paper touches on practical challenges and considerations associated with ILM and points readers to resources for the application of ILM.
Lopes, Fernanda Cristina Rezende; Tannous, Katia; Rueda-Ordóñez, Yesid Javier
2016-11-01
This work aims to study the decomposition kinetics of guarana seed residue using a thermogravimetric analyzer under synthetic air atmosphere, applying heating rates of 5, 10, and 15 °C/min from room temperature to 900 °C. Three thermal decomposition stages were identified: dehydration (25.1-160 °C), oxidative pyrolysis (240-370 °C), and combustion (350-650 °C). The activation energies, reaction model, and pre-exponential factor were determined through four isoconversional methods, master plots, and linearization of the conversion rate equation, respectively. A scheme of two consecutive reactions was applied, validating the kinetic parameters of the first-order reaction model for the oxidative pyrolysis stage (149.57 kJ/mol, 6.97×10^10 1/s) and the two-dimensional diffusion model for the combustion stage (77.98 kJ/mol, 98.61 1/s). The comparison between theoretical and experimental conversion and conversion rate showed good agreement, with average deviation lower than 2%, indicating that these results could be used for modeling of guarana seed residue. Copyright © 2016 Elsevier Ltd. All rights reserved.
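Isoconversional methods of the kind used here estimate the activation energy from the Arrhenius relation ln(dα/dt) = ln[A·f(α)] − E/(RT) evaluated at a fixed conversion across several heating rates: the slope of ln(rate) versus 1/T is −E/R. A self-contained sketch with synthetic data (the numbers are illustrative, not the guarana results):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def linfit(x, y):
    """Least-squares intercept and slope of y = a + b*x."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(u * v for u, v in zip(x, y))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

def activation_energy(temps_K, rates):
    """Friedman-type isoconversional estimate: E = -R * slope of
    ln(rate) vs 1/T at a fixed conversion level."""
    x = [1.0 / T for T in temps_K]
    y = [math.log(r) for r in rates]
    _, slope = linfit(x, y)
    return -R * slope

# Synthetic conversion rates generated from a known E = 150 kJ/mol
E_true, A = 150e3, 1e10
temps = [560.0, 580.0, 600.0]  # K, at the same conversion for 3 heating rates
rates = [A * math.exp(-E_true / (R * T)) for T in temps]
print(activation_energy(temps, rates) / 1e3)  # -> ~150 kJ/mol
```

Repeating the fit over a grid of conversion levels yields E as a function of α, which is how the stage-wise values reported above are typically obtained.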
Albelda, J.; Denia, F. D.; Torres, M. I.; Fuenmayor, F. J.
2007-06-01
To carry out the acoustic analysis of dissipative silencers with uniform cross-section, the application of the mode matching method at the geometrical discontinuities is an attractive option from a computational point of view. The consideration of this methodology assumes, in general, that the modes associated with the transversal geometry of each element with uniform cross-section are known for the excitation frequencies considered in the analysis. The calculation of the transversal modes is not, however, a simple task when the acoustic system involves perforated elements and absorbent materials. The current work presents a modal approach to calculate the transversal modes and the corresponding axial wavenumbers for dissipative mufflers of uniform (but arbitrary) cross-section. The proposed technique is based on the division of the transversal section into subdomains and the subsequent use of a substructuring procedure with two sets of modes to improve the convergence. The former set of modes fulfils the condition of zero pressure at the common boundary between transversal subdomains while the latter satisfies the condition of zero derivative in the direction normal to the boundary. The approach leads to a versatile methodology with a moderate computational effort that can be applied to mufflers commonly found in real applications. To validate the procedure presented in this work, comparisons are provided with finite element predictions and results available in the literature, showing a good agreement. In addition, the procedure is applied to an example of practical interest.
González-Rodríguez, Maria Luisa; Mouram, Imane; Cózar-Bernal, Ma Jose; Villasmil, Sheila; Rabasco, Antonio M
2012-10-01
Niosomes formulated from different nonionic surfactants (Span® 60, Brij® 72, Span® 80, or Eumulgin® B 2) with cholesterol (CH) molar ratios of 3:1 or 4:1 with respect to surfactant were prepared with different sumatriptan amounts (10 and 15 mg) and stearylamine (SA). The thin-film hydration method was employed to produce the vesicles, and the time elapsed to hydrate the lipid film (1 or 24 h) was introduced as a variable. These factors were selected as variables and their levels were introduced into two L18 orthogonal arrays. The aim was to optimize the manufacturing conditions by applying the Taguchi methodology. Response variables were vesicle size, zeta potential (Z), and drug entrapment. From the Taguchi analysis, drug concentration and the time until hydration were the parameters with the greatest influence on size, with the niosomes made with Span® 80 being the smallest vesicles. The presence of SA in the vesicles had a relevant influence on Z values. All the factors except the surfactant-CH ratio had an influence on the encapsulation. Formulations were optimized by applying the marginal means methodology. The results obtained showed a good correlation between the mean and signal-to-noise ratio parameters, indicating the feasibility of the robust methodology for optimizing this formulation. Also, the extrusion process exerted a positive influence on drug entrapment.
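Taguchi analysis ranks factors by signal-to-noise (S/N) ratios computed from each run's replicate responses: for a "smaller-the-better" characteristic such as vesicle size, S/N = −10·log10(mean(y²)), while a "larger-the-better" response such as entrapment uses S/N = −10·log10(mean(1/y²)). A minimal sketch with illustrative numbers (not the study's data):

```python
import math

def sn_smaller_the_better(values):
    """Taguchi S/N ratio when lower responses are better (e.g. vesicle size)."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

def sn_larger_the_better(values):
    """Taguchi S/N ratio when higher responses are better (e.g. entrapment %)."""
    return -10.0 * math.log10(sum(1.0 / (v * v) for v in values) / len(values))

# Hypothetical replicate responses from two runs of an orthogonal array
print(sn_smaller_the_better([100.0, 100.0]))  # vesicle sizes in nm -> -40.0
print(sn_larger_the_better([10.0, 10.0]))     # entrapment values   -> 20.0
```

Averaging these S/N ratios over the runs sharing each factor level (the marginal means mentioned above) identifies which level of each factor maximizes robustness of the response.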
An acceleration technique for the Gauss-Seidel method applied to symmetric linear systems
Jesús Cajigas
2014-06-01
A preconditioning technique to improve the convergence of the Gauss-Seidel method applied to symmetric linear systems while preserving symmetry is proposed. The preconditioner is of the form I + K and can be applied an arbitrary number of times. It is shown that under certain conditions the application of the preconditioner for a finite number of steps reduces the matrix to a diagonal. A series of numerical experiments using matrices from spatial discretizations of partial differential equations demonstrates that both versions of the preconditioner, the point and the block version, exhibit lower iteration counts than the non-symmetric version.
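For reference, the unpreconditioned Gauss-Seidel iteration that the paper accelerates solves Ax = b by sweeping through the equations and updating each unknown in place. A baseline sketch only; the paper's I + K preconditioner is not reproduced here:

```python
def gauss_seidel(A, b, x=None, tol=1e-10, max_iter=10_000):
    """Plain Gauss-Seidel for Ax = b (A given as a list of rows; convergence
    is guaranteed e.g. for symmetric positive definite A)."""
    n = len(b)
    x = [0.0] * n if x is None else list(x)
    for it in range(1, max_iter + 1):
        delta = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new = (b[i] - s) / A[i][i]
            delta = max(delta, abs(new - x[i]))
            x[i] = new
        if delta < tol:
            return x, it
    return x, max_iter

# Small symmetric positive definite test system
x, iters = gauss_seidel([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
print([round(v, 6) for v in x])  # close to the exact solution [1/11, 7/11]
```

The iteration count returned here is the quantity the preconditioner is designed to reduce: applying I + K before iterating yields a transformed system on which the same sweep converges in fewer passes.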
A method of applying two-pump system in automatic transmissions for energy conservation
Peng Dong
2015-06-01
In order to improve hydraulic efficiency, modern automatic transmissions tend to apply an electric oil pump in their hydraulic system. The electric oil pump can support the mechanical oil pump for cooling, lubrication, and maintaining the line pressure at low engine speeds. In addition, the start-stop function can be realized by means of the electric oil pump; thus, fuel consumption can be further reduced. This article proposes a method of applying a two-pump system (one electric oil pump and one mechanical oil pump) in automatic transmissions based on forward driving simulation. A mathematical model for calculating the transmission power loss is developed. The power loss transfers to heat, which requires oil flow for cooling and lubrication. A leakage model is developed to calculate the leakage of the hydraulic system. In order to satisfy the flow requirement, a flow-based control strategy for the electric oil pump is developed. Simulation results of different driving cycles show that there is a best combination of the size of the electric oil pump and the size of the mechanical oil pump with respect to optimal energy conservation. Besides, the two-pump system can also satisfy the requirement of the start-stop function. This research is extremely valuable for the forward design of a two-pump system in automatic transmissions with respect to energy conservation and the start-stop function.
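The flow-based control idea reduces to a simple balance: the required oil flow is the cooling/lubrication flow implied by the power loss plus the leakage, the mechanical pump delivers flow proportional to engine speed, and the electric pump covers any deficit. A sketch under stated assumptions; all coefficients and names are hypothetical, not from the article's model:

```python
def eop_flow_command(power_loss_w, engine_rpm,
                     cooling_lpm_per_kw=1.2,   # hypothetical cooling demand
                     leakage_lpm=2.0,          # hypothetical leakage flow
                     mop_lpm_per_krpm=4.0):    # hypothetical mech. pump rate
    """Electric-oil-pump flow command (L/min): total demand minus the
    mechanical pump's supply, floored at zero."""
    demand = cooling_lpm_per_kw * power_loss_w / 1000.0 + leakage_lpm
    mop_supply = mop_lpm_per_krpm * engine_rpm / 1000.0
    return max(0.0, demand - mop_supply)

# High engine speed: the mechanical pump alone covers demand -> EOP off
print(eop_flow_command(5000.0, 3000.0))  # -> 0.0
# Engine stopped (start-stop phase): the EOP must supply the whole demand
print(eop_flow_command(5000.0, 0.0))     # -> 8.0
```

Sweeping the two pump sizes over a drive cycle with a balance like this is what exposes the best-efficiency combination the article reports.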
Adaptive Ant Colony Clustering Method Applied to Finding Closely Communicating Community
Yan Liu
2012-02-01
The investigation of community structures in networks is an important issue in many domains and disciplines. A closely communicating community differs from the traditional notion of community, which emphasizes structure or context. Our previous work placed more emphasis on the feasibility of applying an ant colony algorithm to community detection; however, the essence of a closely communicating community was not described clearly. In this paper, the definition of closely communicating community is first put forward, its four features are described, and corresponding methods are introduced to compute the feature values for each pair of nodes. Meanwhile, pair propinquity and local propinquity are introduced and used to guide the ants' decisions. Based on the previous work, the closely communicating community detection method is improved in four aspects of adaptive adjusting: entropy-based weight modulation, combining historical paths and random wandering to select the next coordinate, a strategy of forced unloading, and adaptive change of the ants' eyesight. The selection of parameter values is discussed in the experiments section, and the results also reveal the improvement of our algorithm in adaptive adjusting.
Xin, Li; Wenxue, Hong; Jialin, Song; Jiannan, Kang
2005-01-01
We endeavor to provide a novel tool to evaluate environmental comfort level in the Health Smart Home (HSH). HSH is regarded as a good alternative for the independent life of elders and people with disability. Numerous intelligent devices, installed within a home environment, can provide the resident with continuous monitoring and a comfortable environment. In this paper, a novel method of evaluating environmental comfort level is provided. The intelligent sensor is a fuzzy comfort sensor that can measure and fuse the environmental parameters. Based upon the results, it further gives a linguistic description of the environmental comfort level, in the manner of an expert system. The core of the sensor is multi-parameter information fusion. Similar to human behavior, the sensor makes its evaluation of the surrounding environment's comfort level based on symbolic measurement theory. We applied chart representation theory from multivariate analysis in the biomedical engineering field to construct the comfort sensor's linguistic concepts. We achieved better performance when using this method to complete multi-parameter fusion and fuzzification. It is our belief that this method can be used both in intelligent biological sensing and in many other areas where quantitative-to-qualitative information transformation is needed.
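The fusion step can be illustrated with triangular fuzzy membership functions per parameter and a weighted aggregation into a linguistic label. A toy sketch only: the membership shapes, weights and labels are hypothetical, not the authors' sensor design:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def comfort_level(temp_c, humidity_pct, w_temp=0.6, w_hum=0.4):
    """Weighted fusion of per-parameter comfort memberships into one score
    plus a linguistic label, expert-system style."""
    m_temp = tri(temp_c, 15.0, 22.0, 29.0)       # comfortable around 22 degC
    m_hum = tri(humidity_pct, 20.0, 50.0, 80.0)  # comfortable around 50 % RH
    score = w_temp * m_temp + w_hum * m_hum
    if score > 0.75:
        return score, "comfortable"
    return score, "uncomfortable" if score < 0.4 else "acceptable"

print(comfort_level(22.0, 50.0))  # -> (1.0, 'comfortable')
print(comfort_level(29.0, 85.0))  # -> (0.0, 'uncomfortable')
```

Extending the fusion to more parameters (light, noise, air quality) only adds membership functions and weights; the quantitative-to-qualitative step stays the same.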
Photonic simulation method applied to the study of structural color in Myxomycetes.
Dolinko, Andrés; Skigin, Diana; Inchaussandague, Marina; Carmaran, Cecilia
2012-07-02
We present a novel simulation method to investigate the multicolored effect of Diachea leucopoda (Physarales order, Myxomycetes class), a microorganism that has a characteristic pointillistic iridescent appearance. It was shown that this appearance is of structural origin and is produced within the peridium (the protective layer that encloses the mass of spores), which is basically a corrugated sheet of a transparent material. The main characteristics of the observed color were explained in terms of interference effects using a simple model of a homogeneous planar slab. In this paper we apply a novel simulation method to investigate the electromagnetic response of such a structure in more detail, i.e., taking into account the inhomogeneities of the biological material within the peridium and its curvature. We show that both features, which could not be considered within the simplified model, affect the observed color. The proposed method is of great potential for the study of biological structures, which present a high degree of complexity in the geometrical shapes as well as in the materials involved.
Quantum Monte Carlo method applied to non-Markovian barrier transmission
Hupin, Guillaume; Lacroix, Denis
2010-01-01
In nuclear fusion and fission, fluctuation and dissipation arise because of the coupling of collective degrees of freedom with internal excitations. Close to the barrier, quantum, statistical, and non-Markovian effects are expected to be important. In this work, a new approach based on quantum Monte Carlo addressing this problem is presented. The exact dynamics of a system coupled to an environment is replaced by a set of stochastic evolutions of the system density. The quantum Monte Carlo method is applied to systems with quadratic potentials. In all ranges of temperature and coupling, the stochastic method matches the exact evolution, showing that non-Markovian effects can be simulated accurately. A comparison with other theories, such as Nakajima-Zwanzig or time-convolutionless, shows that only the latter can be competitive if the expansion in terms of coupling constant is made at least to fourth order. A systematic study of the inverted parabola case is made at different temperatures and coupling constants. The asymptotic passing probability is estimated by different approaches including the Markovian limit. Large differences with an exact result are seen in the latter case or when only second order in the coupling strength is considered, as is generally assumed in nuclear transport models. In contrast, if fourth order in the coupling or quantum Monte Carlo method is used, a perfect agreement is obtained.
The boundary element method applied to viscous and vortex shedding flows around cylinders
Farrant, Tim
Studies are presented to further extend the use of the boundary element method (BEM) for the solution of viscous flows around bluff bodies, governed by the incompressible Navier-Stokes equations. Two distinct formulations are applied to various flows around cylindrical geometries for Reynolds numbers Tan (1994) and known herein as the global BEM, was coded to execute in parallel on multi-processor computers. Reductions in execution time were achieved and the method was employed to solve an oscillating cylinder problem. In this study, the displacement undergone by the body was very large but the Reynolds number was always Tan et al (1998). A validation for isolated and double circular cylinders in a uniform stream was performed against experimental evidence to demonstrate the method's stability and accuracy for laminar vortex shedding with geometries involving multiply connected domains. Finally, computational results for flows around four equispaced circular cylinders of equal diameter and two cylinders, one circular the other elliptical, are reported. Many of the concepts established for the flow around two cylinders of equal diameter were found to be useful in interpretation of these more complicated arrangements.
Resampling method for applying density-dependent habitat selection theory to wildlife surveys.
Olivia Tardy
Isodar theory can be used to evaluate fitness consequences of density-dependent habitat selection by animals. A typical habitat isodar is a regression curve plotting competitor densities in two adjacent habitats when individual fitness is equal. Despite the increasing use of habitat isodars, their application remains largely limited to areas composed of pairs of adjacent habitats that are defined a priori. We developed a resampling method that uses data from wildlife surveys to build isodars in heterogeneous landscapes without having to predefine habitat types. The method consists of randomly placing blocks over the survey area and dividing those blocks into two adjacent sub-blocks of the same size. Animal abundance is then estimated within the two sub-blocks. This process is done 100 times. Different functional forms of isodars can be investigated by relating animal abundance and differences in habitat features between sub-blocks. We applied this method to abundance data of raccoons and striped skunks, two of the main hosts of rabies virus in North America. Habitat selection by raccoons and striped skunks depended on both conspecific abundance and the difference in landscape composition and structure between sub-blocks. When conspecific abundance was low, raccoons and striped skunks favored areas with relatively high proportions of forests and anthropogenic features, respectively. Under high conspecific abundance, however, both species preferred areas with rather large corn-forest edge densities and corn field proportions. Based on random sampling techniques, we provide a robust method that is applicable to a broad range of species, including medium- to large-sized mammals with high mobility. The method is sufficiently flexible to incorporate multiple environmental covariates that can reflect key requirements of the focal species. We thus illustrate how isodar theory can be used with wildlife surveys to assess density-dependent habitat selection.
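The block-and-split resampling can be sketched directly: place a block over the survey area, cut it into two equal adjacent sub-blocks, and count survey locations falling in each. A minimal illustration with hypothetical coordinates; the real method repeats this 100 times with random placement and then regresses the paired abundances against habitat covariates:

```python
import random

def subblock_counts(points, x0, y0, w, h):
    """Split the block [x0, x0+w] x [y0, y0+h] into equal left/right halves
    and count survey points falling in each sub-block."""
    left = sum(1 for x, y in points
               if x0 <= x < x0 + w / 2 and y0 <= y < y0 + h)
    right = sum(1 for x, y in points
                if x0 + w / 2 <= x < x0 + w and y0 <= y < y0 + h)
    return left, right

# Hypothetical animal locations from a survey
pts = [(1.0, 1.0), (1.5, 2.0), (3.2, 1.1), (3.9, 0.5), (6.0, 1.0)]
print(subblock_counts(pts, 0.0, 0.0, 4.0, 4.0))  # -> (2, 2); x=6 lies outside

# One random block placement, as repeated in the resampling procedure
random.seed(1)
x0 = random.uniform(0.0, 2.0)
print(subblock_counts(pts, x0, 0.0, 4.0, 4.0))
```

Because habitat types are never predefined, each replicate simply carries whatever habitat covariates are measured in its two halves, which is what lets the isodar be fitted in heterogeneous landscapes.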
Pillet, N; Caurier, E
2008-01-01
Applying a variational multiparticle-multihole configuration mixing method, whose purpose is to include correlations beyond the mean field in a unified way without particle number and Pauli principle violations, we investigate pairing-like correlations in the ground states of $^{116}$Sn, $^{106}$Sn and $^{100}$Sn. The same effective nucleon-nucleon interaction, namely the D1S parameterization of the Gogny force, is used to derive both the mean field and correlation components of nuclear wave functions. Calculations are performed using an axially symmetric representation. The structure of correlated wave functions, their convergence with respect to the number of particle-hole excitations and the influence of correlations on single-particle level spectra and occupation probabilities are analyzed and compared with results obtained with the same two-body effective interaction from BCS, Hartree-Fock-Bogoliubov and particle-number-projected-after-variation BCS approaches. Calculations of nuclear radii and the first ...
Applying innovation method to assess english speaking performance on communication apprehension
Wang, Li-Jyu
2010-01-01
A growing number of research studies are now available to shed some light on ELT methods. Currently, educational portfolios are implemented in Science, Mathematics and Geography and have also become widely used in ELT. When the students prepared their own portfolios, they self-monitored their performances. The purpose of this study was to investigate the effects of self-monitoring and portfolios on college students' English speaking performance. The participants in this study were 60 college students majoring in the Department of Applied Foreign Languages at one university of technology in Taiwan. In the study, descriptive statistics and t-tests were used to test the effects on communication apprehension. In the portfolio group, the students' communication apprehension was lowered. In conducting this study, the researcher hoped that this research could provide a valuable perspective on the use of portfolios and self-monitoring.
A Field Method for Backscatter Calibration Applied to NOAA's Reson 7125 Multibeam Echo-Sounders
Welton, Briana
Acoustic seafloor backscatter measurements made by multiple Reson multibeam echo-sounders (MBES) used for hydrographic survey are observed to be inconsistent, affecting the quality of data products and impeding large-scale processing efforts. A method to conduct a relative inter- and intra-sonar calibration in the field using dual-frequency Reson 7125 MBES has been developed, tested, and evaluated to improve the consistency of backscatter measurements made from multiple MBES systems. The approach is unique in that it determines a set of corrections for power, gain, pulse length, and an angle-dependent calibration term relative to a single Reson 7125 MBES calibrated in an acoustic test tank. These corrections for each MBES can then be applied during processing for any acquisition setting combination. This approach seeks to reduce the need for subjective and inefficient manual data or data product manipulation during post-processing, providing a foundation for improved automated seafloor characterization using data from more than one MBES system.
Mario Arturo Ruiz Estrada
2013-12-01
This article explores the effectiveness of applying a new multidimensional graphical method to the teaching and learning of economics. In essence, the paper extends the significance of multidimensional graphs to the study of any economic phenomenon from a multidimensional perspective. The paper proposes the introduction of a new set of multidimensional coordinate spaces that enable the clear and logical visualization of complex and dynamic economic phenomena within the same graphical space and time, so that the effectiveness of multidimensional graphs for real practical purposes can be evaluated. From the analyses carried out, which are based on both primary and secondary data sources, the article argues that multidimensional graphs can actually be evaluated to understand the degree of their effectiveness in visualizing economic problems affecting society on different levels.
Applied methods and techniques for mechatronic systems modelling, identification and control
Zhu, Quanmin; Cheng, Lei; Wang, Yongji; Zhao, Dongya
2014-01-01
Applied Methods and Techniques for Mechatronic Systems brings together relevant studies in mechatronic systems with the latest research from interdisciplinary theoretical studies, computational algorithm development, and exemplary applications. Readers can easily tailor the techniques in this book to accommodate their ad hoc applications. The clear structure of each paper (background, motivation, quantitative development with equations, and case studies, illustrations, or tutorials with curves and tables) is also helpful. The book is mainly aimed at graduate students, professors, and academic researchers in related fields, but it will also be helpful to engineers and scientists from industry. Lei Liu is a lecturer at Huazhong University of Science and Technology (HUST), China; Quanmin Zhu is a professor at University of the West of England, UK; Lei Cheng is an associate professor at Wuhan University of Science and Technology, China; Yongji Wang is a professor at HUST; Dongya Zhao is an associate professor at China University o...
Comparison of gradient methods for gain tuning of a PD controller applied on a quadrotor system
Kim, Jinho; Wilkerson, Stephen A.; Gadsden, S. Andrew
2016-05-01
Many mechanical and electrical systems utilize the proportional-integral-derivative (PID) control strategy. PID control is a classical approach, but it is easy to implement and yields very good tracking performance. Unmanned aerial vehicles (UAVs) are currently experiencing significant growth in popularity, and due to the advantages of PID controllers, UAVs implement them for improved stability and performance. An important consideration for the system is the selection of PID gain values in order to achieve a safe flight and a successful mission. There are a number of different algorithms that can be used for real-time tuning of gains. This paper presents two gain tuning algorithms, based on the method of steepest descent and on Newton's minimization of an objective function, and compares the results of applying them in conjunction with a PD controller on a quadrotor system.
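The finite-difference steepest-descent tuning idea can be sketched on a toy double-integrator plant (an illustration under assumed dynamics, cost function, and step size, not the paper's quadrotor model):

```python
import numpy as np

def track_cost(gains, steps=200, dt=0.01, target=1.0):
    """Simulate a double integrator x'' = u under PD control and
    return the accumulated squared tracking error."""
    kp, kd = gains
    x, v = 0.0, 0.0
    cost = 0.0
    for _ in range(steps):
        err = target - x
        u = kp * err - kd * v          # PD control law
        v += u * dt
        x += v * dt
        cost += err ** 2 * dt
    return cost

def steepest_descent(cost_fn, gains, lr=0.2, eps=1e-4, iters=60):
    """Tune the gains by steepest descent on a finite-difference
    estimate of the cost gradient."""
    g = np.array(gains, dtype=float)   # copy so the initial guess survives
    for _ in range(iters):
        grad = np.zeros_like(g)
        for i in range(len(g)):
            d = np.zeros_like(g)
            d[i] = eps
            grad[i] = (cost_fn(g + d) - cost_fn(g - d)) / (2 * eps)
        g -= lr * grad
    return g

g0 = np.array([1.0, 1.0])              # initial (kp, kd) guess
g_opt = steepest_descent(track_cost, g0)
```

A Newton variant would additionally estimate the Hessian of the cost and step along its inverse times the gradient, trading extra cost evaluations for faster convergence near the minimum.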
Applying RP-FDM Technology to Produce Prototype Castings Using the Investment Casting Method
M. Macků
2012-09-01
The research focused on the production of prototype castings, mapped out from the drawing documentation up to the production of the casting itself. The FDM method was applied for the production of the 3D pattern. The main objective was to find out what dimensional changes happened during the individual production stages: 3D pattern printing, silicone mould production, wax pattern casting, shell making, melting the wax out of the shells and drying, up to the production of the final casting itself. Five measurements of the determined dimensions were made during production, and these were processed and evaluated mathematically. The results were a determination of shrinkage and a proposal of measures to maintain the dimensional stability of the final casting so as to meet the requirements specified by a customer.
Experimental design applied to spin coating of 2D colloidal crystal masks: a relevant method?
Colson, Pierre; Cloots, Rudi; Henrist, Catherine
2011-11-01
Monolayers of colloidal spheres are used as masks in nanosphere lithography (NSL) for the selective deposition of nanostructured layers. Several methods exist for the formation of self-organized particle monolayers, among which spin coating appears to be very promising. However, a spin coating process is defined by several parameters (ramps, rotation speeds, and durations), all of which influence the spreading and drying of the droplet containing the particles. Moreover, scientists are confronted with the formation of numerous defects in spin coated layers, limiting well-ordered areas to a few square micrometers. So far, empiricism has mainly ruled the world of nanoparticle self-organization by spin coating, and much of the literature is experimentally based. Therefore, the development of experimental protocols to control the ordering of particles is a major goal for further progress in NSL. We applied experimental design to spin coating to evaluate the efficiency of this method in extracting and modeling the relationships between the experimental parameters and the degree of ordering in the particle monolayers. A set of experiments was generated by the MODDE software and applied to the spin coating of a latex suspension (diameter 490 nm). We calculated the ordering with a homemade image analysis tool. The results of partial least squares (PLS) modeling show that the proposed mathematical model only fits data from strict monolayers and is not predictive for new sets of parameters. We submitted the data to principal component analysis (PCA), which was able to explain 91% of the results when based on strictly monolayered samples. PCA shows that the ordering was positively correlated with the ramp time and negatively correlated with the first rotation speed. We obtained large defect-free domains with the best set of parameters tested in this study. This protocol leads to ordered areas of 200 μm², which has never been reported so far.
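The variance decomposition reported for the PCA step can be reproduced generically via the SVD of the centered data matrix (a sketch on synthetic stand-in data; the real analysis used the measured spin coating parameters and ordering scores):

```python
import numpy as np

def pca_explained_variance(X):
    """Return the fraction of total variance captured by each
    principal component of the (samples x parameters) matrix X."""
    Xc = X - X.mean(axis=0)                     # center each column
    _, s, _ = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2                                # variance along each PC
    return var / var.sum()

rng = np.random.default_rng(0)
# toy stand-in for (ramp time, rotation speeds, duration) observations
X = rng.normal(size=(20, 4))
ratios = pca_explained_variance(X)
```

Summing the leading entries of `ratios` shows how many components are needed to reach a target such as the 91% explained variance quoted in the abstract.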
Dogulu, Nilay; Solomatine, Dimitri; Lal Shrestha, Durga
2014-05-01
Within the context of flood forecasting, assessment of predictive uncertainty has become a necessity for most modelling studies in operational hydrology. There are several uncertainty analysis and/or prediction methods available in the literature; however, most of them rely on normality and homoscedasticity assumptions for the model residuals occurring in reproducing the observed data. This study focuses on a statistical method that analyzes model residuals without any such assumptions and is based on a clustering approach: Uncertainty Estimation based on local Errors and Clustering (UNEEC). The aim of this work is to provide a comprehensive evaluation of the UNEEC method's performance in view of the clustering approach employed within its methodology. This is done by analyzing the normality of model residuals and comparing uncertainty analysis results (for 50% and 90% confidence levels) with those obtained from uniform interval and quantile regression methods. An important part of the comparison is the analysis of data clusters representing different hydrometeorological conditions. The validation measures used are PICP, MPI, ARIL and NUE where necessary. A new validation measure linking the prediction interval to the (hydrological) model quality, the weighted mean prediction interval (WMPI), is also proposed for comparing the methods more effectively. The case study is the Brue catchment, located in the South West of England. A different parametrization of the method than in its previous application in Shrestha and Solomatine (2008) is used, i.e. past error values are considered in addition to discharge and effective rainfall. The results show that UNEEC's notable methodological characteristic, applying clustering to predictor data in which catchment behaviour information is encapsulated, contributes to the increased accuracy of the method's results for varying flow conditions. Besides, classifying data so that extreme flow events are individually
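The UNEEC idea of estimating uncertainty from locally clustered model errors can be sketched as follows (a minimal illustration, not the authors' implementation: the k-means routine, the cluster count, and the per-cluster quantile intervals are simplifying assumptions):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal Lloyd's k-means on the (samples x predictors) matrix X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        dists = ((X[:, None] - centers[None]) ** 2).sum(-1)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def cluster_intervals(X, residuals, k=3, level=0.90):
    """Empirical residual quantiles per predictor cluster, giving
    cluster-specific prediction intervals at the given level."""
    _, labels = kmeans(X, k)
    lo, hi = (1 - level) / 2, 1 - (1 - level) / 2
    return {j: (np.quantile(residuals[labels == j], lo),
                np.quantile(residuals[labels == j], hi))
            for j in range(k) if np.any(labels == j)}
```

Clusters formed from (discharge, rainfall, past error) predictors would each receive their own interval width, which is how varying flow conditions get different uncertainty bounds.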
Investigation of the effects caused by applying voltage in Layer-by-Layer self-assembly method
Omura Y.
2013-08-01
Recently, the Layer-by-Layer (LbL) self-assembly method under applied voltage (voltage-applied LbL) has attracted great attention. It is reported that the method enables more abundant film adsorption than the conventional LbL method. However, only a small number of experimental results on the adsorption of polyelectrolytes by voltage-applied LbL have been reported. In this study, the voltage-applied LbL method using weakly charged polyelectrolytes was examined. Poly(allylamine hydrochloride) (PAH) and poly(ethylene imine) (PEI) were chosen as cationic solutions and poly(acrylic acid) (PAA) as the anionic solution. The pH of the solutions was adjusted to several conditions, and PAH/PAA and PEI/PAA films were fabricated by the voltage-applied LbL method. The change of adsorption behavior and film morphology under applied voltage depended on the pH of the solutions. When the pH of the PAH/PAA solutions was 3.9/3.8, respectively, film adsorption was accelerated by applying voltage. Moreover, in this condition, the surface morphology changed remarkably and a texture structure appeared when voltage was applied. Consequently, applying voltage in the LbL method was found to be effective in controlling film adsorption and the surface nanostructure.
Vitor Souza Martins
2017-03-01
Satellite data provide the only viable means for extensive monitoring of remote and large freshwater systems, such as the Amazon floodplain lakes. However, an accurate atmospheric correction is required to retrieve water constituents based on surface water reflectance (R_W). In this paper, we assessed three atmospheric correction methods (Second Simulation of a Satellite Signal in the Solar Spectrum (6SV), ACOLITE and Sen2Cor) applied to an image acquired by the MultiSpectral Instrument (MSI) on-board the European Space Agency's Sentinel-2A platform, using concurrent in-situ measurements over four Amazon floodplain lakes in Brazil. In addition, we evaluated the correction of forest adjacency effects based on the linear spectral unmixing model, and performed a temporal evaluation of atmospheric constituents from Multi-Angle Implementation of Atmospheric Correction (MAIAC) products. The validation of MAIAC aerosol optical depth (AOD) indicated satisfactory retrievals over the Amazon region, with a correlation coefficient (R) of ~0.7 and 0.85 for Terra and Aqua products, respectively. The seasonal distribution of the cloud cover and AOD revealed a contrast between the first and second half of the year in the study area. Furthermore, simulation of top-of-atmosphere (TOA) reflectance showed a critical contribution of atmospheric effects (>50%) to all spectral bands, especially the deep blue (92%-96%) and blue (84%-92%) bands. The atmospheric correction results of the visible bands illustrate the limitation of the methods over dark lakes (R_W < 1%), and a better match of the R_W shape compared with in-situ measurements over turbid lakes, although the accuracy varied depending on the spectral bands and methods. Particularly above 705 nm, R_W was highly affected by Amazon forest adjacency, and the proposed adjacency effect correction minimized the spectral distortions in R_W (RMSE < 0.006). Finally, an extensive validation of the methods is required for
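The adjacency correction based on linear spectral unmixing can be illustrated by inverting a two-member mixing model (a hedged sketch; the two-member form and the weight `w` are assumptions for illustration, not the paper's exact formulation):

```python
import numpy as np

def correct_adjacency(r_obs, r_env, w):
    """Invert a simple linear mixing model
         r_obs = (1 - w) * r_target + w * r_env,
    where w is the (assumed known) fraction of environment
    (e.g. forest) reflectance scattered into the water pixel,
    recovering the target water reflectance r_target."""
    r_obs = np.asarray(r_obs, dtype=float)
    r_env = np.asarray(r_env, dtype=float)
    return (r_obs - w * r_env) / (1.0 - w)
```

For a dark lake, even a small `w` from bright surrounding forest inflates the near-infrared bands, which is why the correction matters most above 705 nm.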
Balancing a U-Shaped Assembly Line by Applying Nested Partitions Method
Bhagwat, Nikhil V. [Iowa State Univ., Ames, IA (United States)
2005-01-01
In this study, we applied the Nested Partitions method to a U-line balancing problem and conducted experiments to evaluate the application. From the results, it is quite evident that the Nested Partitions method provided near-optimal solutions (optimal in some cases). Moreover, the execution time is quite short compared to the Branch and Bound algorithm. However, for larger data sets, the algorithm took significantly longer to execute. One of the reasons could be the way in which the random samples are generated. In the present study, a random sample is a solution in itself, which requires the assignment of tasks to various stations. The time taken to assign tasks to stations is directly proportional to the number of tasks; thus, if the number of tasks increases, the time taken to generate random samples for the different regions also increases. The performance index for the Nested Partitions method in the present study was the number of stations in the random solutions (samples) generated. The total idle time of the samples could be used as another performance index. The ULINO method is known to use a combination of bounds to arrive at good solutions; this approach of combining different performance indices could be used to evaluate the random samples and obtain even better solutions. Here, we used deterministic time values for the tasks. In industries where the majority of tasks are performed manually, the stochastic version of the problem could be of vital importance. Experimenting with different objective functions (the number of stations was used in this study) could matter for industries in which the cost associated with creating a new station is not uniform; for such industries, the results obtained with the present approach will not be of much value. Labor costs, task incompletion costs, or a combination of the two could be effectively used as alternate objective functions.
A new feature extraction method for signal classification applied to cord dorsum potential detection
Vidaurre, D.; Rodríguez, E. E.; Bielza, C.; Larrañaga, P.; Rudomin, P.
2012-10-01
In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient for each main local maximum of the signal using its amplitude and distance to the most important maximum of the signal. These coefficients will be the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods.
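The feature extraction step described above can be sketched as follows (the exact weighting of amplitude versus distance is an assumption for illustration; in the paper these coefficients feed a gradient boosting classifier):

```python
import numpy as np

def cdp_features(signal, n_feat=5):
    """Assign a coefficient to each main local maximum of a (denoised)
    signal, combining its amplitude with its distance to the signal's
    most prominent maximum, and return the n_feat largest coefficients."""
    # local maxima: samples strictly higher than both neighbours
    idx = np.nonzero((signal[1:-1] > signal[:-2]) &
                     (signal[1:-1] > signal[2:]))[0] + 1
    if idx.size == 0:
        return np.zeros(n_feat)
    main = idx[np.argmax(signal[idx])]          # most important maximum
    # assumed weighting: amplitude damped by distance to the main peak
    coeff = signal[idx] / (1.0 + np.abs(idx - main))
    order = np.argsort(coeff)[::-1][:n_feat]    # keep the largest ones
    out = np.zeros(n_feat)
    out[:order.size] = coeff[order]
    return out
```

The fixed-length feature vectors produced this way can then be passed to any classifier, e.g. gradient boosted trees as the authors do.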
Kinetic energy partition method applied to ground state helium-like atoms.
Chen, Yu-Hsin; Chao, Sheng D
2017-03-28
We have used the recently developed kinetic energy partition (KEP) method to solve the quantum eigenvalue problems for helium-like atoms and obtain precise ground state energies and wave-functions. The key to properly treating the repulsive electron-electron Coulomb potential energies when applying the KEP method is to introduce a "negative mass" term into the partitioned kinetic energy. A Hartree-like product of the subsystem wave-functions is used to form the initial trial function, and a variational search for the optimized adiabatic parameters leads to a precise ground state energy. This new approach sheds light on the all-important problem of solving many-electron Schrödinger equations and hopefully opens a new way to predictive quantum chemistry. The results presented here give very promising evidence that an effective one-electron model can be used to represent a many-electron system, in the spirit of density functional theory.
Quantum Monte-Carlo method applied to Non-Markovian barrier transmission
Hupin, G
2010-01-01
In nuclear fusion and fission, fluctuation and dissipation arise from the coupling of collective degrees of freedom with internal excitations. Close to the barrier, quantum, statistical, and non-Markovian effects are all expected to be important. In this work, a new approach based on quantum Monte-Carlo is presented to address this problem. The exact dynamics of a system coupled to an environment is replaced by a set of stochastic evolutions of the system density. The quantum Monte-Carlo method is applied to systems with quadratic potentials. Over the whole range of temperatures and couplings, the stochastic method matches the exact evolution, showing that non-Markovian effects can be simulated accurately. A comparison with other theories, such as the Nakajima-Zwanzig or time-convolutionless approaches, shows that only the latter can be competitive if the expansion in the coupling constant is carried out at least to fourth order. A systematic study of the inverted parabola case is made at different temperatures and coupling constants....
Complex Variable Methods for 3D Applied Mathematics: 3D Twistors and the biharmonic equation
Shaw, William T
2010-01-01
In applied mathematics generally and fluid dynamics in particular, the role of complex variable methods is normally confined to two-dimensional motion and the association of points with complex numbers via the assignment w = x+i y. In this framework 2D potential flow can be treated through the use of holomorphic functions and biharmonic flow through a simple, but superficially non-holomorphic extension. This paper explains how to elevate the use of complex methods to three dimensions, using Penrose's theory of twistors as adapted to intrinsically 3D and non-relativistic problems by Hitchin. We first summarize the equations of 3D steady viscous fluid flow in their basic geometric form. We then explain the theory of twistors for 3D, resulting in complex holomorphic representations of solutions to harmonic and biharmonic problems. It is shown how this intrinsically holomorphic 3D approach reduces naturally to the well-known 2D situations when there is translational or rotational symmetry, and an example is given...
Din, Tengku Noor Daimah Tengku; Jamayet, Nafij; Rajion, Zainul Ahmad; Luddin, Norhayati; Abdullah, Johari Yap; Abdullah, Abdul Manaf; Yahya, Suzana
2016-12-01
Facial defects are either congenital or caused by trauma or cancer, and most of them affect the person's appearance. Emotional pressure and low self-esteem are problems commonly related to patients with facial defects. To overcome this problem, a silicone prosthesis is designed to cover the defective part. This study describes the techniques of designing and fabricating a facial prosthesis applying computer-aided design and manufacturing (CADCAM). The steps of fabricating the facial prosthesis were based on a patient case. The patient was diagnosed with Gorlin-Goltz syndrome and came to Hospital Universiti Sains Malaysia (HUSM) for a prosthesis. The 3D image of the patient was reconstructed from CT data using the MIMICS software. Based on the 3D image, the intercanthal and zygomatic measurements of the patient were compared with available data in the database to find a suitable nose shape. A normal nose shape for the patient was retrieved from the nasal digital library. A mirror imaging technique was used to mirror the facial part. The final design of the facial prosthesis, including eye, nose, and cheek, was superimposed to inspect the result virtually. After the final design was confirmed, the mould was designed. The mould of the nasal prosthesis was printed using an Objet 3D printer. Silicone casting was done using the 3D-printed mould. The final prosthesis produced by the computer-aided method was acceptable for use in facial rehabilitation to provide a better quality of life.
Karol, Jane
2007-01-01
This article describes a unique, innovative, and effective method of psychotherapy that uses horses to aid in the therapeutic process (equine-facilitated psychotherapy, or EFP). The remarkable qualities of the horse (power, grace, vulnerability, and a willingness to bear another) combine to form a fertile stage for psychotherapeutic exploration. Therapeutic programs using horses to work with various psychiatric presentations in children and adolescents have begun to receive attention over the past 10 years. However, few EFP programs utilize the expertise of master's- or doctoral-level psychologists, clinical social workers, or psychiatrists. In contrast, the psychological practice described in this article, written and practiced by a doctoral-level clinician, applies the breadth and depth of psychological theory and practice developed over the last century to a distinctly compelling milieu. The method relies not only on the therapeutic relationship with the clinician but is also fueled by the client's compelling attachment to the therapeutic horse. As both of these relationships progress, the child's inner world and interpersonal style come to the forefront, and the EFP theater allows the clinician to explore the client's intrapersonal and interpersonal worlds on preverbal, nonverbal, and verbal levels of experience.
Design method of electromagnetic field applied to Al-alloy electromagnetic casting
YANG Jing; DANG Jing-zhi; PENG You-gen; CHENG Jun
2006-01-01
The electromagnetic pump imposes an electromagnetic motive force (Lorentz force) on the liquid metal directly and makes it move in a definite direction by using the interaction of electric current and magnetic field in the conducting fluid. Compared with traditional die casting, a counter-gravity casting system can effectively control the filling speed, making the Al-alloy liquid fill steadily, by adjusting the controlled current; foundry defects can thus be decreased or avoided. Based on the theory of the electromagnetic pump, the design method of the electromagnetic field in the pump was investigated. The rule governing how the magnetic induction intensity B is influenced by the size of the divided electromagnet air gap was established. Furthermore, an empirical formula for the magnetic induction intensity B in the magnetic air gap of an open magnet in the saturated state was deduced by regression analysis. Counter-gravity casting applied to Al-alloy electromagnetic filling was developed with this method, and the electromagnetic filling counter-gravity casting process for a turbocharger blade wheel was established. The acceptance rate of blade wheels produced by this technique can be increased to 98%; the castings have a compact structure and excellent properties.
Goal oriented soil mapping: applying modern methods supported by local knowledge: A review
Pereira, Paulo; Brevik, Eric; Oliva, Marc; Estebaranz, Ferran; Depellegrin, Daniel; Novara, Agata; Cerda, Artemi; Menshov, Oleksandr
2017-04-01
In recent years the amount of available soil data has increased substantially. This has facilitated the production of better and more accurate maps, important for sustainable land management (Pereira et al., 2017). Despite these advances, human knowledge is extremely important for understanding the natural characteristics of the landscape. The knowledge accumulated and transmitted generation after generation is priceless and should be considered a valuable data source for soil mapping and modelling. Local knowledge and wisdom can complement new advances in soil analysis. In addition, farmers are the most interested in the participation and incorporation of their knowledge into the models, since they are the end-users of the studies that soil scientists produce. Integration of local communities' vision and understanding of nature is assumed to be an important step in the implementation of decision makers' policies. Despite this, many challenges remain regarding the integration of local and scientific knowledge, since in some cases there is no spatial correlation between folk and scientific classifications, which may be attributed to the different cultural variables that influence local soil classification. The objective of this work is to review how modern soil mapping methods have incorporated local knowledge into their models. References: Pereira, P., Brevik, E., Oliva, M., Estebaranz, F., Depellegrin, D., Novara, A., Cerda, A., Menshov, O. (2017) Goal oriented soil mapping: applying modern methods supported by local knowledge. In: Pereira, P., Brevik, E., Munoz-Rojas, M., Miller, B. (Eds.) Soil Mapping and Process Modelling for Sustainable Land Use Management. Elsevier. ISBN: 9780128052006
Trindade, Bruno Machado; Campos, Tarcisio Passos Ribeiro de, E-mail: campos@nuclear.ufmg.b [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Program of Post-Graduation in Sciences and Nuclear Techniques
2011-03-15
Objective: The present paper describes a procedure for converting computed tomography or magnetic resonance images into a three-dimensional voxel model for dosimetry purposes. Such a model is a personalized representation of the patient that can be utilized in nuclear particle transport simulations by means of the MCNP (Monte Carlo N-Particle) code, reproducing the stochastic process of nuclear particle interaction with human tissues. Materials and Methods: The developed computational system, SISCODES, is a tool designed for 3D planning of radiotherapy or radiological procedures. Based on tomographic images of the patient, the treatment plan is modeled and simulated; absorbed doses are then shown by means of isodose curves superimposed on the model. SISCODES couples the three-dimensional model with the MCNP5 code, simulating the protocol of exposure to ionizing radiation. Results: SISCODES has been utilized by the NRI/CNPq in the creation of anthropomorphic and anthropometric voxel models which are coupled with the MCNP code for modeling brachytherapy and teletherapy applied to lung, pelvis, spine, head and neck tumors, among others. The current SISCODES modules are presented together with example cases of radiotherapy planning. Conclusion: SISCODES provides a fast method to create personalized voxel models of any patient, which can be used in stochastic simulations. Combining the MCNP simulation with a personalized model of the patient increases the dosimetry accuracy in radiotherapy. (author)
Lanman, Douglas; Wetzstein, Gordon; Hirsch, Matthew; Heidrich, Wolfgang; Raskar, Ramesh
2012-03-01
This paper focuses on resolving long-standing limitations of parallax barriers by applying formal optimization methods. We consider two generalizations of conventional parallax barriers. First, we consider general two-layer architectures, supporting high-speed temporal variation with arbitrary opacities on each layer. Second, we consider general multi-layer architectures containing three or more light-attenuating layers. This line of research has led to two new attenuation-based displays. The High-Rank 3D (HR3D) display contains a stacked pair of LCD panels; rather than using heuristically-defined parallax barriers, both layers are jointly-optimized using low-rank light field factorization, resulting in increased brightness, refresh rate, and battery life for mobile applications. The Layered 3D display extends this approach to multi-layered displays composed of compact volumes of light-attenuating material. Such volumetric attenuators recreate a 4D light field when illuminated by a uniform backlight. We further introduce Polarization Fields as an optically-efficient and computationally efficient extension of Layered 3D to multi-layer LCDs. Together, these projects reveal new generalizations to parallax barrier concepts, enabled by the application of formal optimization methods to multi-layer attenuation-based designs in a manner that uniquely leverages the compressive nature of 3D scenes for display applications.
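The low-rank factorization underlying the HR3D display can be illustrated with a truncated SVD, which gives the best rank-r approximation of a matrix in the Frobenius norm (a generic sketch; the actual display optimization uses nonnegative, hardware-constrained factorization of the 4D light field):

```python
import numpy as np

def rank_r_approx(L, r):
    """Best rank-r approximation (Frobenius norm) of a light-field
    matrix L via truncated singular value decomposition."""
    U, s, Vt = np.linalg.svd(L, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r]

rng = np.random.default_rng(1)
L = rng.normal(size=(16, 16))       # toy stand-in for a light field
# approximation error shrinks as the allowed rank grows
errs = [np.linalg.norm(L - rank_r_approx(L, r)) for r in (1, 4, 8, 16)]
```

A conventional parallax barrier corresponds to a rank-1 attenuation pattern; allowing higher rank across time-multiplexed frames is what buys the brightness and refresh-rate gains described above.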
Cukier, Robert I
2011-01-28
Leucine zippers consist of alpha helical monomers dimerized (or oligomerized) into alpha superhelical structures known as coiled coils. Forming the correct interface of a dimer from its monomers requires an exploration of configuration space focused on the side chains of one monomer that must interdigitate with sites on the other monomer. The aim of this work is to generate good interfaces in short simulations starting from separated monomers. Methods are developed to accomplish this goal based on an extension of a previously introduced [Su and Cukier, J. Phys. Chem. B 113, 9595, (2009)] Hamiltonian temperature replica exchange method (HTREM), which scales the Hamiltonian in both potential and kinetic energies and was used for the simulation of dimer melting curves. The new method, HTREM_MS (MS designates mean square), focused on interface formation, adds restraints to the Hamiltonians for all but the physical system, which is characterized by the normal molecular dynamics force field at the desired temperature. The restraints in the nonphysical systems serve to prevent the monomers from separating too far, and have the dual aims of enhancing the sampling of close-in configurations and breaking unwanted correlations in the restrained systems. The method is applied to a 31-residue truncation of the 33-residue leucine zipper (GCN4-p1) of the yeast transcriptional activator GCN4. The monomers are initially separated by a distance that is beyond their capture length. HTREM simulations show that the monomers oscillate between dimerlike and monomerlike configurations, but do not form a stable interface. HTREM_MS simulations result in the dimer interface being faithfully reconstructed on a 2 ns time scale. A small number of systems (one physical and two restrained with modified potentials and higher effective temperatures) are sufficient. An in silico mutant that should not dimerize because it lacks charged residues that provide electrostatic stabilization of the dimer
Zhang, Qiong; Shearer, Peter M.
2016-05-01
Understanding earthquake clustering in space and time is important but also challenging because of complexities in earthquake patterns and the large and diverse nature of earthquake catalogues. Swarms are of particular interest because they likely result from physical changes in the crust, such as slow slip or fluid flow. Both swarms and clusters resulting from aftershock sequences can span a wide range of spatial and temporal scales. Here we test and implement a new method to identify seismicity clusters of varying sizes and discriminate them from randomly occurring background seismicity. Our method searches for the closest neighbouring earthquakes in space and time and compares the number of neighbours to the background events in larger space/time windows. Applying our method to California's San Jacinto Fault Zone (SJFZ), we find a total of 89 swarm-like groups. These groups range in size from 0.14 to 7.23 km and last from 15 min to 22 d. The most striking spatial pattern is the larger fraction of swarms at the northern and southern ends of the SJFZ than its central segment, which may be related to more normal-faulting events at the two ends. In order to explore possible driving mechanisms, we study the spatial migration of events in swarms containing at least 20 events by fitting with both linear and diffusion migration models. Our results suggest that SJFZ swarms are better explained by fluid flow because their estimated linear migration velocities are far smaller than those of typical creep events while large values of best-fitting hydraulic diffusivity are found.
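The neighbour-count test described in this abstract can be sketched in a toy form. The radii, window scaling, and threshold factor below are illustrative assumptions, not the authors' parameters:

```python
from math import sqrt

def count_neighbours(events, i, r_km, t_days):
    """Count events within r_km and t_days of event i (excluding itself).
    Each event is a (x_km, y_km, t_days) tuple."""
    x0, y0, t0 = events[i]
    return sum(1 for j, (x, y, t) in enumerate(events)
               if j != i
               and sqrt((x - x0) ** 2 + (y - y0) ** 2) <= r_km
               and abs(t - t0) <= t_days)

def is_clustered(events, i, r_km, t_days, factor=4.0):
    """Flag event i as clustered when its close-in neighbour count exceeds
    the count implied by a 10x larger space/time window under a uniform
    background, by a chosen factor."""
    near = count_neighbours(events, i, r_km, t_days)
    wide = count_neighbours(events, i, 10 * r_km, 10 * t_days)
    # area scales with r^2 and duration with t for a uniform background
    expected = wide * (1 / 10) ** 2 * (1 / 10)
    return near > factor * max(expected, 1e-9)
```

A real implementation would additionally handle magnitude cutoffs, catalogue boundaries, and declustering of aftershock chains, which this sketch omits.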
Shin, Y.; Lee, E.
2015-12-01
Under the influence of recent climate change, abnormal weather conditions such as floods and droughts have occurred frequently all over the world. The occurrence of abnormal weather in major crop production areas leads to soaring world grain prices because it reduces crop yields. Developing crop yield estimation methods is an important means of responding to global food crises caused by abnormal weather. However, owing to problems with the reliability of seasonal climate prediction, applied research on agricultural productivity has not progressed much yet. The objective of this study is to develop a long-term crop yield estimation method for major crop production countries worldwide using multi-model seasonal climate prediction data collected by the APEC Climate Center. These are 6-month lead seasonal predictions produced by six state-of-the-art global coupled ocean-atmosphere models (MSC_CANCM3, MSC_CANCM4, NASA, NCEP, PNU, POAMA). First, we produce customized climate data through temporal and spatial downscaling for use as climatic input to a global-scale crop model. Next, we evaluate the uncertainty of the climate predictions by applying the multi-model seasonal climate predictions in the crop model. Because rice is the most important staple food crop in the Asia-Pacific region, we assess the reliability of the rice yields estimated with seasonal climate predictions for the main rice-producing countries. RMSE (Root Mean Square Error) and TCC (Temporal Correlation Coefficient) analyses are performed for the 14 major rice-producing countries of the Asia-Pacific region to evaluate the reliability of the rice yields according to the climate prediction models. We compare rice yield data obtained from FAOSTAT with yields estimated using the seasonal climate prediction data in Asia-Pacific countries. In addition, we assess the reliability of seasonal climate prediction according to the climate models in the Asia-Pacific countries where rice cultivation is carried out.
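The two reliability metrics named in this abstract, RMSE and the temporal correlation coefficient, are standard quantities and can be computed directly; a minimal sketch:

```python
import math

def rmse(predicted, observed):
    """Root Mean Square Error between two equal-length yield series."""
    n = len(predicted)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)

def tcc(predicted, observed):
    """Temporal Correlation Coefficient: Pearson correlation of the
    predicted and observed time series."""
    n = len(predicted)
    mp = sum(predicted) / n
    mo = sum(observed) / n
    cov = sum((p - mp) * (o - mo) for p, o in zip(predicted, observed))
    sp = math.sqrt(sum((p - mp) ** 2 for p in predicted))
    so = math.sqrt(sum((o - mo) ** 2 for o in observed))
    return cov / (sp * so)
```

A prediction with a constant bias has nonzero RMSE but a TCC of 1, which is why the two metrics are reported together.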
Ortega J, R.; Valle G, E. del [IPN-ESFM, 07738 Mexico D.F. (Mexico)]. e-mail: roj@correo.azc.uam.mx
2003-07-01
Charge and energy deposition due to the interaction of electrons with a plate of a given material is calculated by numerically solving the electron transport equation in the first-order Boltzmann-Fokker-Planck approximation in slab geometry, using a computer program called TEOD-NodExp (Transport of Electrons in Discrete Ordinates, Exponential Nodal). The method proposed by Dr. J. E. Morel is used to discretize the energy variable, together with several spatial discretization schemes known as exponential nodal schemes. The Fokker-Planck equation is used since it is an approximation of the Boltzmann transport equation that is valid whenever small-angle scattering is dominant, that is to say, when the transport of charged particles results in small deflection angles and small energy losses. Such electrons could be those that strike a braking plate in a thermonuclear fusion device. In the present work, electrons of 1 MeV impinging isotropically on an aluminum plate are considered. Three different plate thicknesses were considered, designated problems 1, 2 and 3. The calculations used the discrete ordinates method S{sub 4} with expansions of the scattering cross sections up to order P{sub 3}. Twenty-five energy groups of uniform size between the minimum energy of 0.1 MeV and the maximum of 1.0 MeV were considered; the number of spatial intervals was treated as a variable and assigned the values 10, 20 and 30. (Author)
Girardi, E.; Ruggieri, J.M. [CEA Cadarache, CEA/DEN/CAD/DER/SPRC/LEPH, 13 - Saint-Paul Lez Durance (France)
2003-07-01
The aim of this paper is to present the latest developments of a domain decomposition method applied to reactor core calculations. In this method, two kinds of balance equations, with two different numerical methods dealing with two different unknowns, are coupled. In the first part, the two balance transport equations (first-order and second-order) are presented with the corresponding numerical methods: the Variational Nodal Method and the Discrete Ordinates Nodal Method. In the second part, the Multi-Method/Multi-Domain algorithm is introduced by applying the Schwarz domain decomposition to the multigroup eigenvalue problem of the transport equation. The resulting algorithm is then provided. The projection operators used to couple the two methods are detailed in the last part of the paper. Finally, some preliminary numerical applications on benchmarks are given, showing encouraging results. (authors)
Fujimoto, K.; Yanagisawa, T.; Uetsuhara, M.
Automated detection and tracking of faint objects in optical, or bearing-only, sensor imagery is a topic of immense interest in space surveillance. Robust methods in this realm will lead to better space situational awareness (SSA) while reducing the cost of sensors and optics. They are especially relevant in the search for high area-to-mass ratio (HAMR) objects, as their apparent brightness can change significantly over time. A track-before-detect (TBD) approach has been shown to be suitable for faint, low signal-to-noise ratio (SNR) images of resident space objects (RSOs). TBD does not rely upon the extraction of feature points within the image based on some thresholding criteria, but rather directly takes as input the intensity information from the image file. Not only is all of the available information from the image used, but TBD also avoids the computational intractability of the conventional feature-based line detection (i.e., "string of pearls") approach to track detection for low SNR data. Implementation of TBD rooted in finite set statistics (FISST) theory has been proposed recently by Vo et al. Compared to other TBD methods applied so far to SSA, such as the stacking method or multi-pass multi-period denoising, the FISST approach is statistically rigorous and has been shown to be more computationally efficient, thus paving the path toward on-line processing. In this paper, we intend to apply a multi-Bernoulli filter to actual CCD imagery of RSOs. The multi-Bernoulli filter can explicitly account for the birth and death of multiple targets in a measurement arc. TBD is achieved via a sequential Monte Carlo implementation. Preliminary results with simulated single-target data indicate that a Bernoulli filter can successfully track and detect objects with measurement SNR as low as 2.4. Although the advent of fast-cadence scientific CMOS sensors has made the automation of faint object detection a realistic goal, it is nonetheless a difficult goal, as measurements
Gliding Box method applied to trace element distribution of a geochemical data set
Paz González, Antonio; Vidal Vázquez, Eva; Rosario García Moreno, M.; Paz Ferreiro, Jorge; Saa Requejo, Antonio; María Tarquis, Ana
2010-05-01
The application of fractal theory to geochemical prospecting data can provide useful information for evaluating mineralization potential. A geochemical survey was carried out in the western area of Coruña province (NW Spain). Major and trace elements were determined by standard analytical techniques. It is well known that there are specific elements, or arrays of elements, associated with specific types of mineralization. Arsenic has been used to evaluate the metallogenetic importance of the studied zone. Moreover, As can be considered a pathfinder for Au, as these two elements are genetically associated. The main objective of this study was to use multifractal analysis to characterize the distribution of three trace elements, namely Au, As, and Sb. Concerning the local geology, the study area comprises predominantly acid rocks, mainly alkaline and calc-alkaline granites, gneiss and migmatites. The most significant structural feature of this zone is the presence of a mylonitic band with an approximate NE-SW orientation. The data set used in this study comprises 323 samples collected, with standard geochemical criteria, preferentially in the B horizon of the soil. Occasionally, where this horizon was not present, samples were collected from the C horizon. Samples were taken on a rectilinear grid, with the sampling lines perpendicular to the NE-SW tectonic structures. Frequency distributions of the studied elements departed from normal. Coefficients of variation ranked as follows: Sb … coefficients between Au, Sb, and As were found, even if these were low. The so-called 'gliding box' algorithm (GB), proposed originally for lacunarity analysis, has been extended to multifractal modelling and provides an alternative to the 'box-counting' method for implementing multifractal analysis. The partitioning method applied in the GB algorithm constructs samples by gliding a box of a certain size (a) over the grid map in all possible directions. An "up
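A minimal sketch of the gliding-box partitioning described in this abstract, shown here for the classic lacunarity statistic; the multifractal moments are built from the same box masses:

```python
def gliding_box_masses(grid, a):
    """Glide an a-by-a box over a 2D grid in all possible positions and
    return the box masses (sums), the raw ingredient of GB lacunarity
    and multifractal statistics."""
    n_rows, n_cols = len(grid), len(grid[0])
    masses = []
    for i in range(n_rows - a + 1):
        for j in range(n_cols - a + 1):
            masses.append(sum(grid[i + di][j + dj]
                              for di in range(a) for dj in range(a)))
    return masses

def lacunarity(grid, a):
    """GB lacunarity at box size a: second moment of the box masses
    divided by the squared first moment; 1.0 for a uniform map."""
    m = gliding_box_masses(grid, a)
    n = len(m)
    m1 = sum(m) / n
    m2 = sum(x * x for x in m) / n
    return m2 / (m1 * m1)
```

A uniform concentration map gives lacunarity 1.0 at every box size, while clustered anomalies (e.g. a single high-Au cell) push it above 1.0.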
Lesellier, E; Mith, D; Dubrulle, I
2015-12-01
necessary, two-step gradient elution. The developed methods were then applied to real cosmetic samples to assess method specificity with regard to matrix interferences, and calibration curves were plotted to evaluate quantification. In addition, depending on the matrix and on the studied compounds, the importance of the detector type, UV or ELSD (evaporative light-scattering detection), and of the particle size of the stationary phase is discussed.
Kattan Michael W
2007-06-01
Introduction: The clinical significance of a treatment effect demonstrated in a randomized trial is typically assessed by reference to differences in event rates at the group level. An alternative is to make individualized predictions for each patient based on a prediction model. This approach is growing in popularity, particularly for cancer. Despite its intuitive advantages, it remains plausible that some prediction models may do more harm than good. Here we present a novel method for determining whether predictions from a model should be used to apply the results of a randomized trial to individual patients, as opposed to using group-level results. Methods: We propose applying the prediction model to a data set from a randomized trial and examining the results of patients for whom the treatment arm recommended by the prediction model is congruent with allocation. These results are compared with the strategy of treating all patients through use of a net benefit function that incorporates both the number of patients treated and the outcome. We examined models developed using data sets regarding adjuvant chemotherapy for colorectal cancer and dutasteride for benign prostatic hypertrophy. Results: For adjuvant chemotherapy, we found that patients who would opt for chemotherapy even for small risk reductions, and, conversely, those who would require a very large risk reduction, would on average be harmed by using a prediction model; those with intermediate preferences would on average benefit by allowing such information to help their decision making. Use of prediction could, at worst, lead to the equivalent of an additional death or recurrence per 143 patients; at best it could lead to the equivalent of a reduction in the number of treatments of 25% without an increase in event rates. In the dutasteride case, where the average benefit of treatment is more modest, there is a small benefit of prediction modelling, equivalent to a reduction of
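A toy version of the net-benefit comparison described in this abstract. The weighting scheme below is an illustrative assumption, not the authors' exact function:

```python
def net_benefit(n_events, n_treated, burden):
    """Net benefit of a strategy on trial data, as a negative loss:
    each event counts at full weight, each treatment at `burden`
    (the fraction of an event the patient would trade to avoid one
    treatment, i.e. the patient's preference threshold)."""
    return -(n_events + burden * n_treated)

def model_beats_treat_all(ev_all, tr_all, ev_model, tr_model, burden):
    """True when the model-guided strategy has higher net benefit than
    treating every patient."""
    return net_benefit(ev_model, tr_model, burden) > net_benefit(ev_all, tr_all, burden)
```

With burden near zero (a patient who would treat for any risk reduction) the model-guided strategy loses whenever it allows even a few extra events; with an intermediate burden, sparing many treatments can outweigh a small rise in events, mirroring the abstract's finding.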
D.D. Lestiani
2011-08-01
Urbanization and industrial growth have deteriorated air quality and are major causes of air pollution. Air pollution through fine and ultrafine particles is a serious threat to human health. The sources of air pollution must be known quantitatively through elemental characterization in order to design appropriate air quality management. Suitable methods for analyzing airborne particulate matter, such as nuclear analytical techniques, are badly needed to solve the air pollution problem. The objectives of this study are to apply nuclear analytical techniques to airborne particulate samples collected in Bandung, to assess the accuracy, and to ensure the reliability of the analytical results through a comparison of instrumental neutron activation analysis (INAA) and particle-induced X-ray emission (PIXE). Particle samples in the PM2.5 and PM2.5-10 ranges were collected in Bandung twice a week for 24 hours using a Gent stacked filter unit. The results showed that there was generally a systematic difference between the INAA and PIXE results, with the values obtained by PIXE lower than those determined by INAA. INAA is generally more sensitive and reliable than PIXE for Na, Al, Cl, V, Mn, Fe, Br and I, so the INAA data are preferred, while PIXE usually gives better precision than INAA for Mg, K, Ca, Ti and Zn. Nevertheless, both techniques provide reliable results and complement each other. INAA is still a prospective method, while PIXE, with its special capabilities, is a promising tool that can complement NAA where it falls short, in the determination of lead, sulphur and silicon. The combination of INAA and PIXE can advantageously be used in air pollution studies to extend the number of important elements measured as key elements in source apportionment.
An alternative method of gas boriding applied to the formation of borocarburized layer
Kulka, M., E-mail: michal.kulka@put.poznan.pl; Makuch, N.; Pertek, A.; Piasecki, A.
2012-10-15
The borocarburized layers were produced by tandem diffusion processes: carburizing followed by boriding. An alternative method of gas boriding was proposed: two-stage gas boronizing in an N{sub 2}-H{sub 2}-BCl{sub 3} atmosphere was applied to the formation of iron borides on a carburized substrate. This process consisted of two stages, which were alternately repeated: saturation by boron and diffusion annealing. The microstructure and microhardness of the produced layer were compared to those obtained in the case of the previously used continuous gas boriding in an H{sub 2}-BCl{sub 3} atmosphere. The first objective of two-stage boronizing, the acceleration of boron diffusion, was efficiently achieved: despite the lower temperature and shorter duration of boronizing, an iron boride zone about 1.5 times larger was formed on the carburized steel. The second objective, the complete elimination of the brittle FeB phase, was not achieved; however, the amount of FeB phase was considerably limited. Longer diffusion annealing should provide a boride layer with a single-phase microstructure, without the FeB phase. - Highlights: An alternative method of gas boriding in an H{sub 2}-N{sub 2}-BCl{sub 3} atmosphere was proposed. The process consisted of two stages, saturation by boron and diffusion annealing, of short duration and alternately repeated. The acceleration of boron diffusion was efficiently achieved. The amount of FeB phase in the boride zone was limited.
Applying Automated MR-Based Diagnostic Methods to the Memory Clinic: A Prospective Study.
Klöppel, Stefan; Peter, Jessica; Ludl, Anna; Pilatus, Anne; Maier, Sabrina; Mader, Irina; Heimbach, Bernhard; Frings, Lars; Egger, Karl; Dukart, Juergen; Schroeter, Matthias L; Perneczky, Robert; Häussermann, Peter; Vach, Werner; Urbach, Horst; Teipel, Stefan; Hüll, Michael; Abdulkadir, Ahmed
2015-01-01
Several studies have demonstrated that fully automated pattern recognition methods applied to structural magnetic resonance imaging (MRI) aid in the diagnosis of dementia, but these conclusions are based on highly preselected samples that significantly differ from that seen in a dementia clinic. At a single dementia clinic, we evaluated the ability of a linear support vector machine trained with completely unrelated data to differentiate between Alzheimer's disease (AD), frontotemporal dementia (FTD), Lewy body dementia, and healthy aging based on 3D-T1 weighted MRI data sets. Furthermore, we predicted progression to AD in subjects with mild cognitive impairment (MCI) at baseline and automatically quantified white matter hyperintensities from FLAIR-images. Separating additionally recruited healthy elderly from those with dementia was accurate with an area under the curve (AUC) of 0.97 (according to Fig. 4). Multi-class separation of patients with either AD or FTD from other included groups was good on the training set (AUC > 0.9) but substantially less accurate (AUC = 0.76 for AD, AUC = 0.78 for FTD) on 134 cases from the local clinic. Longitudinal data from 28 cases with MCI at baseline and appropriate follow-up data were available. The computer tool discriminated progressive from stable MCI with AUC = 0.73, compared to AUC = 0.80 for the training set. A relatively low accuracy by clinicians (AUC = 0.81) illustrates the difficulties of predicting conversion in this heterogeneous cohort. This first application of a MRI-based pattern recognition method to a routine sample demonstrates feasibility, but also illustrates that automated multi-class differential diagnoses have to be the focus of future methodological developments and application studies.
Stochastic Methods Applied to Power System Operations with Renewable Energy: A Review
Zhou, Z. [Argonne National Lab. (ANL), Argonne, IL (United States); Liu, C. [Argonne National Lab. (ANL), Argonne, IL (United States); Electric Reliability Council of Texas (ERCOT), Austin, TX (United States); Botterud, A. [Argonne National Lab. (ANL), Argonne, IL (United States)
2016-08-01
Renewable energy resources have been rapidly integrated into power systems in many parts of the world, contributing to a cleaner and more sustainable supply of electricity. Wind and solar resources also introduce new challenges for system operations and planning in terms of economics and reliability because of their variability and uncertainty. Operational strategies based on stochastic optimization have been developed recently to address these challenges. In general terms, these stochastic strategies either embed uncertainties into the scheduling formulations (e.g., the unit commitment [UC] problem) in probabilistic forms or develop more appropriate operating reserve strategies to take advantage of advanced forecasting techniques. Other approaches to address uncertainty are also proposed, where operational feasibility is ensured within an uncertainty set of forecasting intervals. In this report, a comprehensive review is conducted to present the state of the art through Spring 2015 in the area of stochastic methods applied to power system operations with high penetration of renewable energy. Chapters 1 and 2 give a brief introduction and overview of power system and electricity market operations, as well as the impact of renewable energy and how this impact is typically considered in modeling tools. Chapter 3 reviews relevant literature on operating reserves and specifically probabilistic methods to estimate the need for system reserve requirements. Chapter 4 looks at stochastic programming formulations of the UC and economic dispatch (ED) problems, highlighting benefits reported in the literature as well as recent industry developments. Chapter 5 briefly introduces alternative formulations of UC under uncertainty, such as robust, chance-constrained, and interval programming. Finally, in Chapter 6, we conclude with the main observations from our review and important directions for future work.
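The idea of embedding uncertainty into scheduling formulations, reviewed in this report, can be illustrated with a toy scenario-based dispatch. The grid search over a single thermal setpoint below is purely illustrative and far simpler than a real UC/ED model:

```python
def expected_cost_dispatch(demand, wind_scenarios, gen_cost, shed_cost):
    """Toy two-stage stochastic dispatch: pick one thermal setpoint g
    (here by exhaustive grid search over integer setpoints) minimizing
    expected cost over equiprobable wind scenarios. Shortfalls are met
    by load shedding at shed_cost; surplus thermal output is wasted.
    Returns (best_setpoint, expected_cost)."""
    best = None
    for g in range(0, demand + 1):
        cost = 0.0
        for w in wind_scenarios:
            shortfall = max(demand - w - g, 0)
            cost += gen_cost * g + shed_cost * shortfall
        cost /= len(wind_scenarios)
        if best is None or cost < best[1]:
            best = (g, cost)
    return best
```

Because shedding is far more expensive than generation, the expected-cost optimum hedges against the low-wind scenario rather than scheduling for the average wind, which is exactly the reserve-like behavior stochastic UC formulations aim to capture.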
Concepts and Methods of Solid-State NMR Spectroscopy Applied to Biomembranes.
Molugu, Trivikram R; Lee, Soohyun; Brown, Michael F
2017-09-14
Concepts of solid-state NMR spectroscopy and applications to fluid membranes are reviewed in this paper. Membrane lipids with (2)H-labeled acyl chains or polar head groups are studied using (2)H NMR to yield knowledge of their atomistic structures in relation to equilibrium properties. This review demonstrates the principles and applications of solid-state NMR by unifying dipolar and quadrupolar interactions and highlights the unique features offered by solid-state (2)H NMR with experimental illustrations. For randomly oriented multilamellar lipids or aligned membranes, solid-state (2)H NMR enables direct measurement of residual quadrupolar couplings (RQCs) due to individual C-(2)H-labeled segments. The distribution of RQC values gives nearly complete profiles of the segmental order parameters SCD((i)) as a function of acyl segment position (i). Alternatively, one can measure residual dipolar couplings (RDCs) for natural abundance lipid samples to obtain segmental SCH order parameters. A theoretical mean-torque model provides acyl-packing profiles representing the cumulative chain extension along the normal to the aqueous interface. Equilibrium structural properties of fluid bilayers and various thermodynamic quantities can then be calculated, which describe the interactions with cholesterol, detergents, peptides, and integral membrane proteins and formation of lipid rafts. One can also obtain direct information for membrane-bound peptides or proteins by measuring RDCs using magic-angle spinning (MAS) in combination with dipolar recoupling methods. Solid-state NMR methods have been extensively applied to characterize model membranes and membrane-bound peptides and proteins, giving unique information on their conformations, orientations, and interactions in the natural liquid-crystalline state.
Das, T
2016-01-01
We analyze the imprint of nodal planes in high-order harmonic spectra from aligned diatomic molecules in intense laser fields whose components exhibit orthogonal polarizations. We show that the typical suppression in the spectra associated to nodal planes is distorted, and that this distortion can be employed to map the electron's angle of return to its parent ion. This investigation is performed semi-analytically at the single-molecule response and single-active orbital level, using the strong-field approximation and the steepest descent method. We show that the velocity form of the dipole operator is superior to the length form in providing information about this distortion. However, both forms introduce artifacts that are absent in the actual momentum-space wavefunction. Furthermore, elliptically polarized fields lead to larger distortions in comparison to two-color orthogonally polarized fields. These features are investigated in detail for $\\mathrm{O}_2$, whose highest occupied molecular orbital provides...
Hovhannisyan, V V; Strečka, J; Ananikian, N S
2016-03-02
The spin-1 Ising-Heisenberg diamond chain with a second-neighbor interaction between nodal spins is rigorously solved using the transfer-matrix method. In particular, exact results for the ground state, magnetization process and specific heat are presented and discussed. It is shown that the further-neighbor interaction between nodal spins gives rise to three novel ground states with a translationally broken symmetry but, at the same time, does not increase the total number of intermediate plateaus in the zero-temperature magnetization curve compared with the simplified model without this interaction term. The zero-field specific heat displays interesting thermal dependencies with a single- or double-peak structure.
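As an illustration of the transfer-matrix machinery used in this abstract, the sketch below computes the partition function of a plain spin-1 Ising ring (a stand-in for the full Ising-Heisenberg diamond-chain cell) as the trace of a matrix power:

```python
import numpy as np
from itertools import product
from math import exp

def transfer_matrix(beta, J, states=(-1, 0, 1)):
    """Transfer matrix T[s, s'] = exp(beta * J * s * s') for a spin-1
    Ising bond; the three states correspond to S_z = -1, 0, +1."""
    return np.array([[exp(beta * J * s * t) for t in states] for s in states])

def partition_function(beta, J, n_sites, states=(-1, 0, 1)):
    """Z of a periodic chain of n_sites: Z = Tr(T^n)."""
    T = transfer_matrix(beta, J, states)
    return np.trace(np.linalg.matrix_power(T, n_sites))
```

The trace over a matrix power replaces an exponentially large sum over configurations with a 3x3 linear-algebra problem; the same reduction, with a larger cell matrix, underlies the exact solution of the decorated diamond chain.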
Multicriterial Hierarchy Methods Applied in Consumption Demand Analysis. The Case of Romania
Constantin Bob
2008-03-01
The basic information for computing the quantitative statistical indicators that characterize the demand for industrial products and services is collected by national statistics organizations through a series of statistical surveys (most of them periodic and partial). The source of the data used in the present paper is a statistical investigation organized by the National Institute of Statistics, the "Family budgets survey", which collects information regarding household composition, income, expenditure, consumption and other aspects of the population's living standard. In 2005, in Romania, a person spent monthly on average 391.2 RON, about 115.1 Euros, on food products and beverages as well as non-food products, services, investments and other taxes. 23% of this sum was spent on food products and beverages, 21.6% on non-food goods and 18.1% on payment for various services. There is a discrepancy between the different development regions of Romania regarding the composition of total household expenditure. For this reason, in the present paper we applied statistical methods for ranking the various development regions of Romania, using the shares of household expenditure by category of products and services as the ranking criteria.
Thompson's renormalization group method applied to QCD at high energy scale
Nassif, Claudio; Silva, P R
2007-01-01
We use a renormalization group method to treat the behavior of the QCD vacuum, especially close to the regime of asymptotic freedom. The QCD vacuum behaves effectively like a "paramagnetic system" of a classical theory, in the sense that virtual color charges (gluons) emerge in it as a spin effect of a paramagnetic material when a magnetic field aligns its microscopic magnetic dipoles. Owing to this strong classical analogy with the paramagnetism of Landau's theory, we are able to use a certain Landau effective action, without temperature and phase transition, to represent the behavior of the QCD vacuum at higher energies as the magnetization of a paramagnetic material in the presence of a magnetic field $H$. This reasoning allows us to apply Thompson's approach to such an action in order to extract an "effective susceptibility" ($\\chi>0$) of the QCD vacuum, which depends on the logarithm of the energy scale $u$ used to investigate hadronic matter. Consequently we are able to get an "effective magnetic permeability" ($\\mu>1$) of such a ...
Seyedehfarzaneh Nojabaei
2014-01-01
Efficiency is becoming a pivotal aspect of every manufacturing system, and scheduling plays a crucial role in sustaining it. The applicability of distributed computing to coordinating and executing jobs has been investigated in the past literature. Moreover, it is significant that even for sensitive industrial systems the only criterion for allocating jobs to appropriate machines is the FIFO policy. On the other hand, many researchers are of the opinion that the main reason distributed systems fail to provide fairness is that the time stamp is the only criterion used to form the queue of jobs when allocating those jobs to machines. In order to increase the efficiency of sensitive industrial systems, this study takes into consideration three criteria for each job: priority, time action and time stamp. The methodology adopted by this study is the definition of a job scheduler that positions jobs in a temporary queue and sorts them with a modified bubble sort. In the sorting algorithm, the criteria of priority and time action are considered besides the time stamp, so that urgent jobs are recognized for earlier processing. To evaluate this algorithm, first a numerical test case (simulation) is programmed, and then a case study is performed in order to evaluate the efficiency of applying this method in a real manufacturing system. The results of this study provide evidence that efficiency is increased.
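The three-criteria queue ordering described in this abstract can be sketched as a bubble sort with a compound comparison. The field names and the exact precedence of the criteria are illustrative assumptions:

```python
def job_precedes(a, b):
    """Compare two jobs given as dicts with keys priority, time_action,
    time_stamp: higher priority first, then earlier time action
    (urgency), then FIFO on time stamp."""
    if a["priority"] != b["priority"]:
        return a["priority"] > b["priority"]
    if a["time_action"] != b["time_action"]:
        return a["time_action"] < b["time_action"]
    return a["time_stamp"] < b["time_stamp"]

def bubble_sort_jobs(jobs):
    """Bubble sort of the temporary queue using the three-criteria
    comparison above; returns a new sorted list."""
    q = list(jobs)
    for i in range(len(q)):
        for j in range(len(q) - 1 - i):
            if job_precedes(q[j + 1], q[j]):
                q[j], q[j + 1] = q[j + 1], q[j]
    return q
```

Because bubble sort is stable, jobs that tie on all three criteria keep their arrival order, which preserves the FIFO fallback the abstract starts from.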
New Methods for Timing Analysis of Transient Events, Applied to Fermi/GBM Magnetar Bursts
Huppenkothen, Daniela; Uttley, Phil; van der Horst, Alexander J; van der Klis, Michiel; Kouveliotou, Chryssa; Gogus, Ersin; Granot, Jonathan; Vaughan, Simon; Finger, Mark H
2013-01-01
In order to discern the physical nature of many gamma-ray sources in the sky, we must look not only in the spectral and spatial dimensions, but also understand their temporal variability. However, timing analysis of sources with a highly transient nature, such as magnetar bursts, is difficult: standard Fourier techniques developed for the long-term variability generally observed, for example, from AGN often do not apply. Here, we present newly developed timing methods applicable to transient events of all kinds, and show their successful application to magnetar bursts observed with Fermi/GBM. Magnetars are a prime subject for timing studies, thanks to the detection of quasi-periodicities in magnetar Giant Flares and their potential to help shed light on the structure of neutron stars. Using state-of-the-art statistical techniques, we search for quasi-periodic oscillations (QPOs) in a sample of bursts from Soft Gamma Repeater SGR J0501+4516 observed with Fermi/GBM and provide upper limits for potential QPO detections. Additio...
An Online Gravity Modeling Method Applied for High Precision Free-INS.
Wang, Jing; Yang, Gongliu; Li, Jing; Zhou, Xiao
2016-09-23
For real-time solution of an inertial navigation system (INS), the high-degree spherical harmonic gravity model (SHM) is not applicable because of its time and space complexity, so the traditional normal gravity model (NGM) has been the dominant technique for gravity compensation. In this paper, a two-dimensional second-order polynomial model is derived from the SHM according to the approximately linear character of the regional disturbing potential. First, deflections of the vertical (DOVs) on dense grids are calculated with the SHM in an external computer. Then, the polynomial coefficients are obtained using these DOVs. To achieve global navigation, the coefficients and the applicable region of the polynomial model are both updated synchronously in the external computer. Compared with the high-degree SHM, the polynomial model takes less storage and computational time at the expense of a minor loss of precision. Meanwhile, the model is more accurate than the NGM. Finally, a numerical test and an INS experiment show that the proposed method outperforms traditional gravity models applied for high-precision free-INS.
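The two-dimensional second-order polynomial fit described in this abstract can be sketched with an ordinary least-squares solve. The grid coordinates and the coefficient layout are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def fit_poly2d(x, y, dov):
    """Least-squares fit of a two-dimensional second-order polynomial
    f(x, y) = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2
    to gridded deflection-of-vertical samples."""
    x, y, z = map(np.asarray, (x, y, dov))
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def eval_poly2d(coeffs, x, y):
    """Evaluate the fitted polynomial at (x, y); this is the cheap
    step that replaces the high-degree SHM evaluation on-line."""
    c0, c1, c2, c3, c4, c5 = coeffs
    return c0 + c1 * x + c2 * y + c3 * x**2 + c4 * x * y + c5 * y**2
```

Only six coefficients per region need to be stored and evaluated on-line, which is the storage/time advantage over a high-degree spherical harmonic expansion.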
Kinetics-based phase change approach for VOF method applied to boiling flow
Cifani, Paolo; Geurts, Bernard; Kuerten, Hans
2014-11-01
Direct numerical simulations of boiling flows are performed to better understand the interaction of boiling phenomena with turbulence. The multiphase flow is simulated by solving a single set of equations for the whole flow field according to the one-fluid formulation, using a VOF interface capturing method. Interface terms, related to surface tension, interphase mass transfer and latent heat, are added at the phase boundary. The mass transfer rate across the interface is derived from kinetic theory and subsequently coupled with the continuum representation of the flow field. The numerical model was implemented in OpenFOAM and validated against 3 cases: evaporation of a spherical uniformly heated droplet, growth of a spherical bubble in a superheated liquid and two dimensional film boiling. The computational model will be used to investigate the change in turbulence intensity in a fully developed channel flow due to interaction with boiling heat and mass transfer. In particular, we will focus on the influence of the vapor bubble volume fraction on enhancing heat and mass transfer. Furthermore, we will investigate kinetic energy spectra in order to identify the dynamics associated with the wakes of vapor bubbles. Department of Applied Mathematics, 7500 AE Enschede, NL.
Applying the Communicative Method in Learning Lithuanian as a Second Language
Vaida Buivydienė
2011-04-01
One of the strengths of European countries is their multilingual nature, as the European Council has stressed in various international projects. Every citizen of Europe should be given the opportunity to learn languages throughout life, as languages open new perspectives in the modern world; moreover, learning languages fosters tolerance and understanding between people from different cultures. Drawing on experience in foreign language teaching, the article argues that the communicative method of language learning should also be applied to teaching Lithuanian as a foreign language. Under the international SOCRATES exchange programme, many students and teachers from abroad come to Lithuanian higher schools (VGTU included) every year. They, too, should be provided with the best possible language learning, cultural and educational experience. Most of the students who came to VGTU chose Lithuanian language learning as one of their subjects, which calls for interesting and useful short Lithuanian language courses. A survey carried out at VGTU and analysis of the materials gathered lead to the conclusion that the communicative approach to language teaching best serves the needs and interests of learners mastering survival Lithuanian.
Nodal mantle cell lymphoma: A descriptive study from a tertiary care center in South India
Arun Roy
2013-01-01
Introduction: Mantle cell lymphoma (MCL) is a type of B-cell non-Hodgkin lymphoma (NHL) with distinctive morphologic and immunophenotypic features and a characteristic cytogenetic abnormality, the t(11;14)(q13;q32), with overexpression of cyclin D1. The common histologic features include lymphoid architecture effaced by a monomorphic lymphoid population with a vaguely nodular, diffuse or mantle zone growth pattern. The classic cytomorphologic features include small to medium-sized lymphoid cells with irregular nuclear contours and scanty cytoplasm, closely resembling centrocytes. Materials and Methods: This retrospective study comprises 13 cases of MCL diagnosed in our department over a period of 5½ years, accounting for 4% of all nodal NHL diagnosed. All cases were diagnosed on lymph node biopsy. Results: The mean age at presentation was 57 years. There was a male preponderance (M:F = 2.25:1). The disease was nodal in all cases. Most patients (84.5%) had generalized lymphadenopathy and/or hepatosplenomegaly. Bone marrow involvement was seen in 81.8% of cases. Three cases showed a nodular pattern on lymph node biopsy while the remaining ten had a diffuse pattern. Immunophenotyping showed positivity for CD20, CD5 and cyclin D1, with CD23 negativity. Conclusion: Despite certain morphological similarities to other low-grade/intermediate-grade lymphomas, MCL has a characteristic appearance of its own. Since it is more aggressive than other low-grade lymphomas, it needs to be accurately diagnosed.
Lukl, J; Cíhalík, C
1992-01-01
A 55-year-old man was admitted to the intensive care unit on account of repeated syncopes which developed at the peak of physical exertion. The attack was reproduced by exercise on a bicycle ergometer: the patient developed paroxysmal tachycardia with a narrow QRS and a frequency of 160/min, leading after 20 s to severe hypotension and loss of consciousness. The same tachycardia, elicited by programmed atrial stimulation, caused a 30 mmHg drop in blood pressure in the recumbent position, and on more detailed analysis during electrophysiological examination it was evaluated as atrioventricular nodal reentrant tachycardia (AVNRT). By an electric discharge of 300 J administered through a 7F USCI stimulation electrode into the area of the AV node, retrograde conduction through the perinodal rapid pathways was completely interrupted and first-degree atrioventricular block developed. Repeated electrophysiological examination and exercise tests on a bicycle ergometer provided evidence of the disappearance of the retrograde pathway and the impossibility of eliciting AVNRT. The authors express the view that in successful cases the rapid perinodal pathway is interrupted in both directions, and that the first-degree AV block is due to conduction along a slow pathway rather than to incidental slowing of conduction along the rapid pathway, which is the generally accepted interpretation. Modification of atrioventricular conduction by interruption of the rapid pathway by fulguration is, according to the literature and the described patient, a method which makes it possible to cure severe atrioventricular nodal reentrant tachycardias.
Kapp, D S; Kiet, T K; Chan, J K
2011-01-01
Background: In 2009 the International Federation of Gynecologists and Obstetricians elected to substage patients with positive retroperitoneal lymph nodes as IIIC 1 (pelvic lymph node metastasis only) and IIIC 2 (paraaortic node metastasis with or without positive pelvic lymph nodes). We have investigated the discriminatory ability of subgrouping patients with retroperitoneal nodal involvement based on the location, number, and ratio of positive nodes. Methods: For 1075 patients with stage IIIC endometrioid corpus cancer abstracted from the Surveillance, Epidemiology, and End Results databases for 2003–2007, Kaplan–Meier analyses, Cox proportional hazard models, and other quantitative measures were used to compare the prognostic discrimination for disease-specific survival (DSS) of the nodal subgroupings. Results: In univariate analysis, the 3-year DSS differed significantly for subgroupings by location (IIIC 1 vs IIIC 2: 80.5% vs 67.0%, respectively, P=0.001), by lymph node ratio (⩽23.2% vs >23.2%: 80.8% vs 67.6%), and by number of positive nodes (the highest group >5 nodes: 79.5%, 75.4% and 62.9%, P=0.016). The ratio of positive nodes showed superior discriminatory substaging in Cox models. Conclusion: Subgrouping of stage IIIC patients by the ratio of positive nodes, either as a dichotomized or a continuous parameter, shows the strongest ability to discriminate survival, controlling for other confounding factors. PMID:21915131
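The subgrouping criteria compared in the abstract can be expressed in a few lines (a sketch, not the authors' code; function names are ours, the 23.2% cut-point is the one reported above):

```python
# Minimal sketch of the nodal subgroupings compared in the study:
# the lymph node ratio (LNR = positive nodes / examined nodes),
# dichotomised at the reported 23.2% cut-point.

def lymph_node_ratio(positive, examined):
    """LNR as a fraction in [0, 1]."""
    if examined <= 0:
        raise ValueError("number of examined nodes must be positive")
    return positive / examined

def lnr_subgroup(positive, examined, cutoff=0.232):
    """Dichotomised LNR subgroup used for survival comparison."""
    return "high" if lymph_node_ratio(positive, examined) > cutoff else "low"
```

For example, 3 positive of 20 examined nodes (LNR 15%) falls in the "low" group, while 8 of 20 (LNR 40%) falls in the "high" group.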
Park, C B; Dufort, D
2011-03-01
Nodal, a secreted signaling protein in the transforming growth factor-beta (TGF-β) superfamily, has established roles in vertebrate development. However, components of the Nodal signaling pathway are also expressed at the maternal-fetal interface and have been implicated in many processes of mammalian reproduction. Emerging evidence indicates that Nodal and its extracellular inhibitor Lefty are expressed in the uterus and complex interactions between the two proteins mediate menstruation, decidualization and embryo implantation. Furthermore, several studies have shown that Nodal from both fetal and maternal sources may regulate trophoblast cell fate and facilitate placentation as both embryonic and uterine-specific Nodal knockout mouse strains exhibit disrupted placenta morphology. Here we review the established and prospective roles of Nodal signaling in facilitating successful pregnancy, including recent evidence supporting a potential link to parturition and preterm birth.
Radiotherapy studies and extra-nodal non-Hodgkin lymphomas, progress and challenges
Specht, L
2012-01-01
…for the more common extra-nodal organs, e.g. stomach, Waldeyer's ring, skin and brain, are fairly well known and show significant variation. A few randomised trials have been carried out testing the role of radiotherapy in these lymphomas. However, for most extra-nodal lymphomas, randomised trials have not been carried out, and treatment decisions are made on small patient series and extrapolations from nodal lymphomas. Hopefully, wide international collaboration will make controlled clinical trials possible in the less common extra-nodal lymphomas. Modern highly conformal radiotherapy allows better coverage of extra-nodal lymphomatous involvement with better sparing of normal tissues. The necessary radiation doses and volumes need to be defined for the different extra-nodal lymphoma entities. The challenge is to optimise the use of radiotherapy in the modern multimodality treatment of extra-nodal lymphomas.
Topological Dirac nodal lines and surface charges in fcc alkaline earth metals
Hirayama, Motoaki; Okugawa, Ryo; Miyake, Takashi; Murakami, Shuichi
2017-01-01
In nodal-line semimetals, the gaps close along loops in k space, which are not at high-symmetry points. Typical mechanisms for the emergence of nodal lines involve mirror symmetry and the π Berry phase. Here we show via ab initio calculations that fcc calcium (Ca), strontium (Sr) and ytterbium (Yb) have topological nodal lines with the π Berry phase near the Fermi level, when spin-orbit interaction is neglected. In particular, Ca becomes a nodal-line semimetal at high pressure. Owing to nodal lines, the Zak phase becomes either π or 0, depending on the wavevector k, and the π Zak phase leads to surface polarization charge. Carriers eventually screen it, leaving behind large surface dipoles. In materials with nodal lines, both the large surface polarization charge and the emergent drumhead surface states enhance Rashba splitting when heavy adatoms are present, as we have shown to occur in Bi/Sr(111) and in Bi/Ag(111).
A Nodal-independent and tissue-intrinsic mechanism controls heart-looping chirality
Noël, Emily S.; Verhoeven, Manon; Lagendijk, Anne Karine; Tessadori, Federico; Smith, Kelly; Choorapoikayil, Suma; den Hertog, Jeroen; Bakkers, Jeroen
2013-11-01
Breaking left-right symmetry in bilateria is a major event during embryo development that is required for asymmetric organ position, directional organ looping and lateralized organ function in the adult. Asymmetric expression of Nodal-related genes is hypothesized to be the driving force behind regulation of organ laterality. Here we identify a Nodal-independent mechanism that drives asymmetric heart looping in zebrafish embryos. In a unique mutant defective for the Nodal-related southpaw gene, preferential dextral looping in the heart is maintained, whereas gut and brain asymmetries are randomized. As genetic and pharmacological inhibition of Nodal signalling does not abolish heart asymmetry, a yet undiscovered mechanism controls heart chirality. This mechanism is tissue intrinsic, as explanted hearts retain chiral looping behaviour ex vivo and require actin polymerization and myosin II activity. We find that Nodal signalling regulates actin gene expression, supporting a model in which Nodal signalling amplifies this tissue-intrinsic mechanism of heart looping.
Applying Terzaghi's method of slope characterization to the recognition of Holocene land slippage
Rogers, J. David; Chung, Jae-won
2016-07-01
Well-placed trenches across headscarp grabens can provide more detailed structure of old landslides and are usually a cost-effective approach. Additional subsurface exploration can often be employed to characterize landslides. Small-diameter borings are usually employed for geotechnical investigations but can easily be applied to landslides, depending on the mean particle size diameter (D50). Downhole logging of large-diameter holes is the best method to evaluate complex subsurface conditions, such as dormant bedrock landslides.
Micropropagation of Costus speciosus (Koen.) Sm. Using Nodal Segment Culture
Kshetrimayum PUNYARANI
2010-03-01
Nodal segments of Costus speciosus (Koen.) Sm. containing single axillary buds were cultured on Murashige and Skoog (MS) medium supplemented with plant growth regulators to induce plantlets. For breaking axillary bud dormancy, nodal segments were cultured on MS basal medium supplemented with 40–70 g l−1 sucrose or 1–13 µM adenine sulphate (AdS), containing 5 µM 6-benzylaminopurine (BAP) and 1 µM α-naphthalene acetic acid (NAA). The nodal segments cultured on 1–13 µM AdS, 5 µM BAP, 1 µM NAA and 50 g l−1 sucrose showed simultaneous production of shoots and roots, while those cultured on 5 µM BAP, 1 µM NAA and 40–70 g l−1 sucrose produced shoots only. The most effective medium for breaking axillary bud dormancy was supplemented with 5 µM BAP, 1 µM NAA, 50 g l−1 sucrose and 10 µM AdS. The propagules from 40–70 g l−1 sucrose produced roots in the shoot multiplication medium, i.e., medium supplemented with 10 µM AdS, 1 µM NAA, 50 g l−1 sucrose and 3–11 µM BAP. The best response for shoot multiplication was on 10 µM AdS, 1 µM NAA, 50 g l−1 sucrose and 7 µM BAP. The well-rooted shoots were hardened and transferred to soil, where they showed a 95% survival rate. The results show that axillary buds can be used for micropropagation of Costus speciosus.
Nicalin and its binding partner Nomo are novel Nodal signaling antagonists
Haffner, Christof; Frauli, Mélanie; Topp, Stephanie; Irmler, Martin; Hofmann, Kay; Regula, Jörg T.; Bally-Cuif, Laure; Haass, Christian
2004-01-01
Nodals are signaling factors of the transforming growth factor-β (TGFβ) superfamily with a key role in vertebrate development. They control a variety of cell fate decisions required for the establishment of the embryonic body plan. We have identified two highly conserved transmembrane proteins, Nicalin and Nomo (Nodal modulator, previously known as pM5), as novel antagonists of Nodal signaling. Nicalin is distantly related to Nicastrin, a component of the Alzheimer's disease-associated γ-secretase complex.
Zhao, Qian; Wang, Peng; Goel, Lalit
2014-01-01
Owing to the intermittent characteristic of solar radiation, power system reliability may be affected under high photovoltaic (PV) power penetration. To reduce the large variation of PV power, additional system balancing reserve would be needed. In deregulated power systems, the deployment of reserves and customer reliability requirements are correlated with energy and reserve prices. Therefore a new method should be developed to evaluate the impacts of PV power on customer reliability and system reserve deployment in the new environment. In this study, a method based on the pseudo-sequential Monte Carlo simulation technique has been proposed to evaluate reserve deployment and customers' nodal reliability with high PV power penetration. The proposed method can effectively model the chronological aspects and stochastic characteristics of PV power and system operation with high computational efficiency.
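As a much-reduced illustration of Monte Carlo reliability estimation with PV (independent hourly draws rather than the paper's pseudo-sequential sampling; all capacities, distributions and names here are invented for the sketch):

```python
import random

# Very reduced sketch of a Monte Carlo reliability estimate with PV: sample
# hourly system states, compare available capacity (conventional + PV) against
# load, and estimate a loss-of-load probability (LOLP). Real studies would use
# chronological PV and load models and reserve dispatch, as in the paper.

def lolp_estimate(n_samples, conv_capacity, pv_peak, load_mean, seed=1):
    rng = random.Random(seed)
    shortfalls = 0
    for _ in range(n_samples):
        pv = pv_peak * rng.random()               # crude stand-in for solar variability
        load = load_mean * (0.8 + 0.4 * rng.random())  # load varies +/-20% around mean
        if conv_capacity + pv < load:
            shortfalls += 1
    return shortfalls / n_samples

p = lolp_estimate(10000, conv_capacity=90.0, pv_peak=30.0, load_mean=100.0)
```

Nodal reliability indices would be obtained the same way, but with the shortfall check applied per bus after a (DC) power flow in each sampled state.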
A novel sintering method to obtain fully dense gadolinia doped ceria by applying a direct current
Hao, Xiaoming; Liu, Yajie; Wang, Zhenhua; Qiao, Jinshuo; Sun, Kening
2012-07-01
A fully dense Ce0.8Gd0.2O1.9 (gadolinia doped ceria, GDC) is obtained using a novel sintering technique within several seconds at 545 °C by applying a direct-current (DC) electrical field of 70 V cm-1. The onset applied field for this phenomenon is 20 V cm-1, and the volume-specific power dissipation at the onset of flash sintering is about ∼10 mW mm-3. By comparison with the shrinkage strain of conventional sintering as well as scanning electron microscopy (SEM) analysis, we conclude that the GDC specimens are sintered to full density under the various applied fields. In addition, we demonstrate that the grain size of GDC decreases with increasing applied field and decreasing sintering temperature. Through calculation, we find that the sintering of GDC can be explained by Joule heating from the applied electrical field.
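The onset numbers above can be checked for consistency with a one-line calculation (our illustrative sketch: volume-specific power dissipation is p = E·J, field times current density, so the reported onset values imply a current density J = p/E):

```python
# Consistency sketch for the flash-sintering onset numbers: with the reported
# onset field of 20 V/cm (= 2 V/mm) and onset power density of ~10 mW/mm^3,
# the implied current density is J = p / E.

def implied_current_density(power_mw_per_mm3, field_v_per_cm):
    field_v_per_mm = field_v_per_cm / 10.0   # convert V/cm -> V/mm
    return power_mw_per_mm3 / field_v_per_mm # mA/mm^2, since mW/V = mA

J = implied_current_density(10.0, 20.0)
```

This gives a current density of about 5 mA/mm² at the flash onset, under the stated assumptions.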
pySCIs: a user-friendly Python tool for quickly applying Small Circle methods
Calvín, Pablo; José Villalaín, Juan; Casas, Antonio; Torres, Sara
2017-04-01
Small Circle (SC) methods are common tools in paleomagnetism for working with synfolding paleomagnetic components. These methods have a twofold applicability: on one hand, the Small Circle Intersection (SCI) method allows obtaining the local remagnetization direction, and on the other, the SCs can be used to restore the attitude of the sedimentary beds at the moment of remagnetization acquisition. The bases of the SCI method are as follows. (i) Under progressive untilting of the beds, the paleomagnetic direction of each site follows a path that draws a SC; this SC links the paleomagnetic direction before and after the tectonic correction. (ii) Considering that the beds have been deformed only by tilting around the bedding strike, the remagnetization direction lies on the small circle of each site. (iii) The acquisition of the remagnetization was simultaneous for the analyzed rocks. Therefore, the remagnetization direction must lie on the small circle of every site, and hence all the small circles must intersect in one direction, which corresponds to the remagnetization direction. In practice, the method looks for the direction in space closest to the set of SCs by means of the A/n parameter (the sum of the angular distances between one direction and each SC, normalized by the number of sites). Once the remagnetization direction is known, it is possible to calculate, for each site, the direction on its SC closest to the calculated remagnetization direction, called the Best Fit Direction (BFD). After that, the paleodip of the bed (i.e. the dip of the bed at the moment of the remagnetization event) can be calculated for each site (the paleodip is the angle, measured along the SC, between the BFD and the paleomagnetic direction after the complete bedding correction), allowing a palinspastic reconstruction of a region. We present pySCIs, a new Python tool which allows applying this methodology in an easy way. The program has two different modules, py…
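The A/n criterion described above can be sketched in a few lines (a minimal illustration, not pySCIs' actual API: the parameterization of each SC by a rotation axis and an aperture angle, the coarse grid search and all function names are our assumptions):

```python
import math

# Sketch of the SCI criterion: each site's small circle is given by its
# rotation axis (declination/inclination of the bedding strike) and an
# aperture angle; the remagnetization direction is the direction minimising
# A/n, the mean angular distance to all small circles.

def to_cart(dec, inc):
    d, i = math.radians(dec), math.radians(inc)
    return (math.cos(i) * math.cos(d), math.cos(i) * math.sin(d), math.sin(i))

def angle(u, v):
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
    return math.degrees(math.acos(dot))

def a_over_n(direction, circles):
    """circles: list of (axis_dec, axis_inc, aperture_deg)."""
    dists = [abs(angle(direction, to_cart(d, i)) - ap) for d, i, ap in circles]
    return sum(dists) / len(dists)

def sci_grid_search(circles, step=2.0):
    """Coarse grid search over the sphere for the direction minimising A/n."""
    best, best_dir = float("inf"), None
    dec = 0.0
    while dec < 360.0:
        inc = -90.0
        while inc <= 90.0:
            a = a_over_n(to_cart(dec, inc), circles)
            if a < best:
                best, best_dir = a, (dec, inc)
            inc += step
        dec += step
    return best_dir, best

# Two circles contrived to intersect near dec = 0 (illustrative only).
circles = [(90.0, 0.0, 90.0), (0.0, 0.0, 45.0)]
(dec, inc), a = sci_grid_search(circles)
```

A real implementation would refine the grid minimum with a local optimiser and report the angular scatter of the SCs about the solution.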
Magnon nodal-line semimetals and drumhead surface states in anisotropic pyrochlore ferromagnets
Mook, Alexander; Mertig, Ingrid
2016-01-01
We introduce a new type of topological magnon matter: the magnonic counterpart of electronic nodal-line semimetals. Magnon spectra of anisotropic pyrochlore ferromagnets feature twofold degeneracies of magnon bands along a closed loop in reciprocal space. These magnon nodal lines are topologically protected by the coexistence of inversion and time-reversal symmetry; they require the absence of spin-orbit interaction (no Dzyaloshinskii-Moriya interaction). We calculate the topological invariants of the nodal lines and show that details of the associated magnon drumhead surface states depend strongly on the termination of the surface. Magnon nodal-line semimetals complete the family of topological magnons in three-dimensional ferromagnetic materials.
Gunar Boye
2015-06-01
The axial heat transfer coefficient during flow boiling of n-hexane was measured using infrared thermography to determine the axial wall temperature in three geometrically similar annular gaps with different widths (s = 1.5 mm, s = 1 mm, s = 0.5 mm). During the design and evaluation process, methods of statistical experimental design were applied. The following parameters were varied: the heat flux q̇ = 30–190 kW/m², the mass flux ṁ = 30–700 kg/(m² s), the vapor quality ẋ = 0.2–0.7, and the inlet subcooling T_U = 20–60 K. The test sections with gap widths of s = 1.5 mm and s = 1 mm had very similar heat transfer characteristics: the heat transfer coefficient increases significantly in the range of subcooled boiling and, after reaching a maximum at the transition to saturated flow boiling, drops almost monotonically with increasing vapor quality. With a gap width of 0.5 mm, however, the heat transfer coefficient in the range of saturated flow boiling first trends downward and then increases at higher vapor qualities. For each test section, two correlations between the heat transfer coefficient and the operating parameters were created. The comparison also shows a clear trend of increasing heat transfer coefficient with increasing heat flux for the test sections with s = 1.5 mm and s = 1.0 mm, but with increasing vapor quality this trend is reversed for the 0.5 mm test section.
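The quantity extracted from the IR wall-temperature measurement is the local heat transfer coefficient, h(z) = q̇ / (T_wall(z) − T_fluid(z)). A minimal sketch with illustrative numbers (not the measured data; the function name is ours):

```python
# Local heat transfer coefficient from an imposed heat flux and the
# IR-measured wall temperature: h = q'' / (T_wall - T_fluid).

def heat_transfer_coefficient(q_flux_w_m2, t_wall_k, t_fluid_k):
    """Heat transfer coefficient in W/(m^2 K)."""
    dt = t_wall_k - t_fluid_k
    if dt <= 0:
        raise ValueError("wall must be hotter than the fluid")
    return q_flux_w_m2 / dt

# e.g. q'' = 100 kW/m^2 with a 10 K wall superheat gives h = 10 kW/(m^2 K)
h = heat_transfer_coefficient(100e3, 350.0, 340.0)
```

Evaluating this along the axial coordinate of each annular gap yields the h(ẋ) curves whose trends the abstract compares across the three gap widths.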