Nonequilibrium flows with smooth particle applied mechanics
Kum, O.
1995-07-01
Smooth particle methods are relatively new methods for simulating solid and fluid flows, though they have a 20-year history of solving complex hydrodynamic problems in astrophysics, such as colliding planets and stars, for which correct answers are unknown. The results presented in this thesis evaluate the adaptability, or fitness, of the method for typical hydrocode production problems. For finite hydrodynamic systems, boundary conditions are important. A reflective boundary condition with image particles is a good way to prevent a density anomaly at the boundary and to keep the fluxes continuous there. Boundary values of temperature and velocity can be controlled separately. The gradient algorithm, based on differentiating the smooth particle expressions for (uρ) and (Tρ), does not show numerical instabilities for the stress tensor and heat flux vector, quantities which require second derivatives in space when Fourier's heat-flow law and Newton's viscous force law are used. Smooth particle methods show an interesting parallel linking them to molecular dynamics. For the inviscid Euler equation, with an isentropic ideal gas equation of state, the smooth particle algorithm generates trajectories isomorphic to those generated by molecular dynamics. The shear moduli were evaluated from molecular dynamics calculations for three weighting functions: the B-spline, Lucy, and Cusp functions. The accuracy and applicability of the methods were estimated by comparing a set of smooth particle Rayleigh-Benard problems, all in the laminar regime, to corresponding highly accurate grid-based numerical solutions of the continuum equations. Both transient and stationary smooth particle solutions reproduce the grid-based data with velocity errors on the order of 5%. The smooth particle method still provides robust solutions at high Rayleigh number, where grid-based methods fail.
Particle Filtering Applied to Musical Tempo Tracking
Macleod Malcolm D
2004-01-01
This paper explores the use of particle filters for beat tracking in musical audio examples. The aim is to estimate the time-varying tempo process and to find the time locations of beats, as defined by human perception. Two alternative algorithms are presented: one performs Rao-Blackwellisation to produce an almost deterministic formulation, while the second models tempo as a Brownian motion process. The algorithms have been tested on a large and varied database of examples, and the results are comparable with the current state of the art. The deterministic algorithm gives the better performance of the two.
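As a hedged illustration of the Brownian-motion formulation described above, the following sketch tracks a slowly drifting tempo from noisy inter-beat intervals with a bootstrap particle filter. All parameter values, noise levels, and variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: tempo (BPM) drifts as Brownian motion and is observed
# through noisy inter-beat intervals (seconds).
true_tempo = 120.0
n_steps, n_particles = 200, 500
obs_noise, drift = 0.01, 0.2

particles = rng.uniform(60.0, 180.0, n_particles)   # initial tempo hypotheses
weights = np.full(n_particles, 1.0 / n_particles)

for _ in range(n_steps):
    true_tempo += drift * rng.standard_normal()      # Brownian tempo process
    obs = 60.0 / true_tempo + obs_noise * rng.standard_normal()
    particles += drift * rng.standard_normal(n_particles)   # propagate
    # Reweight by the likelihood of the observed inter-beat interval.
    weights *= np.exp(-0.5 * ((obs - 60.0 / particles) / obs_noise) ** 2)
    weights /= weights.sum()
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:  # resample when degenerate
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)

estimate = float(np.sum(weights * particles))        # posterior-mean tempo
```

The weighted mean `estimate` tracks the drifting tempo to within a few BPM; a full beat tracker would add beat-location states on top of this tempo state.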
Heavy-ion radiography applied to charged particle radiotherapy
The heavy-ion radiography research program applied to the clinical cancer research program of charged particle radiotherapy has a twofold purpose: (1) to explore the manner in which heavy-ion radiography and CT reconstruction can provide improved tumor localization, treatment planning, and beam delivery for radiotherapy with accelerated heavy charged particles; and (2) to explore the usefulness of heavy-ion radiography in detecting, localizing, and sizing soft tissue cancers in the human body. The techniques and procedures developed for heavy-ion radiography should prove successful in support of charged particle radiotherapy.
Reaction to fire of ETICS applied on wood particle board
Bonati Antonio
2016-01-01
As is well known, ETICS are widely used for energy saving and thermal insulation. Recently they have also been applied to wood buildings, and in regions of southern Europe, for green building and sustainability reasons. ITC-CNR has tested many building materials and has developed extensive knowledge of reaction to fire since the 1980s; currently, ETICS fixed directly to wood particle panels are being investigated with several SBI tests. The case study highlights the main factors that can influence reaction-to-fire results when ETICS are applied on a wood structure: the thickness of the insulating material, the presence of accidental damage, and flame attack from the inside. From the results of tests on samples prepared with simulated accidental damage and fire from the inside, some considerations are drawn about the hazard posed by this specific construction technology, and others about the limits of the product classification standards currently in use.
APPLYING PARTICLE SWARM OPTIMIZATION TO JOB-SHOP SCHEDULING PROBLEM
Xia Weijun; Wu Zhiming; Zhang Wei; Yang Genke
2004-01-01
A new heuristic algorithm is proposed for finding the minimum makespan in the job-shop scheduling problem. The new algorithm is based on the principles of particle swarm optimization (PSO). PSO employs a collaborative population-based search, inspired by the social behavior of bird flocking. It combines local search (by self-experience) and global search (by neighboring experience), giving it high search efficiency. Simulated annealing (SA) accepts worse solutions with a certain probability to avoid becoming trapped in a local optimum, and its search process can be controlled by the cooling schedule. By combining these two different search algorithms, a general, fast, and easily implemented hybrid optimization algorithm, named HPSO, is developed. The effectiveness and efficiency of the proposed PSO-based algorithm are demonstrated by applying it to benchmark job-shop scheduling problems and comparing the results with other algorithms in the literature. The comparison indicates that the PSO-based algorithm is a viable and effective approach for the job-shop scheduling problem.
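The continuous PSO core that HPSO builds on can be sketched as follows; this is a minimal illustration on a toy objective, under assumed parameter values, and omits both the SA acceptance step and the job-shop permutation encoding that the paper adds on top.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(f, dim, n=30, iters=300, w=0.7, c1=1.5, c2=1.5):
    """Minimal continuous PSO: velocities blend inertia, attraction to each
    particle's personal best, and attraction to the swarm's global best."""
    x = rng.uniform(-5.0, 5.0, (n, dim))          # positions
    v = np.zeros((n, dim))                        # velocities
    pbest = x.copy()                              # personal bests
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()               # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

# Toy usage: minimize the 5-dimensional sphere function.
best, val = pso(lambda z: float(np.sum(z ** 2)), dim=5)
```

For scheduling, each particle would instead encode a priority vector that a decoder maps to an operation sequence; the velocity update above is unchanged.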
3D GEOMETRIC CHARACTERIZATION OF PARTICLES APPLIED TO TECHNICAL CLEANLINESS
Irene Vecchio
2012-11-01
During the production of mechanical components, residual dirt collects on the surfaces, creating contamination that affects the durability of the assembled products. Residual particles are currently analyzed from microscopic 2d images. However, a particle's shape is decisive for the damage it can cause, yet cannot be judged reliably from 2d data. Micro-computed tomography makes it possible to capture the complex spatial structures of thousands of particles simultaneously. New methods to characterize three-dimensional shapes are therefore needed to establish 3d cleanliness analysis. In this work, unambiguously indicative geometric features are defined, and it is investigated how they can yield a reliable classification into three typical classes: fibers, chips, and granules. Finally, the efficiency of the proposed method is proved by analyzing samples of real dirt particles.
Applying Dispersive Changes to Lagrangian Particles in Groundwater Transport Models
Konikow, L.F.
2010-01-01
Method-of-characteristics groundwater transport models require that changes in concentrations computed within an Eulerian framework to account for dispersion be transferred to the moving particles used to simulate advective transport. A new algorithm was developed to accomplish this transfer between nodal values and advecting particles more precisely and realistically than currently used methods. The new method scales the changes and adjustments of particle concentrations relative to limiting bounds of concentration values determined from the population of adjacent nodal values, precluding unrealistic undershoot or overshoot for the concentrations of individual particles. In the new method, if dispersion causes cell concentrations to decrease during a time step, the particles in the cell with the highest concentration will decrease the most, and those with the lowest concentration will decrease the least; the converse holds if dispersion is causing concentrations to increase. Furthermore, if the initial concentration on a particle is outside the range of the adjacent nodal values, it will automatically be adjusted in the direction of the acceptable range of values. The new method is inherently mass conservative.
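The bounded-scaling idea described above can be sketched as follows. This is a hypothetical illustration of the principle (scale each particle's change by its distance from the limiting bound, conserve mass in the cell), not Konikow's exact algorithm; the function name and bounds are assumptions.

```python
import numpy as np

def adjust_particles(c_parts, delta, c_lo, c_hi):
    """Apply a cell-mean concentration change `delta` to particle
    concentrations `c_parts`, scaled by each particle's distance from the
    limiting bound [c_lo, c_hi] (taken from adjacent nodal values)."""
    c = np.clip(c_parts, c_lo, c_hi)   # pull out-of-range particles toward the bounds
    if delta < 0:
        span = c - c_lo                # decrease: highest concentrations change most
    else:
        span = c_hi - c                # increase: lowest concentrations change most
    total = span.sum()
    if total == 0.0:
        return c
    # Distribute the cell-wide mass change in proportion to each span,
    # so the mean change equals delta (mass conservative).
    return c + delta * len(c) * span / total
```

For example, with particles at 1.0, 2.0, 3.0 and a mean decrease of 0.5 within bounds [0, 4], the highest-concentration particle decreases the most and none undershoots zero.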
Kulkarni, Sandip, E-mail: sandip.d.kulkarni@gmail.com [Fischell Department of Bioengineering, University of Maryland at College Park, MD 20742 (United States); Ramaswamy, Bharath; Horton, Emily; Gangapuram, Sruthi [Fischell Department of Bioengineering, University of Maryland at College Park, MD 20742 (United States); Nacev, Alek [Weinberg Medical Physics, LLC (United States); Depireux, Didier [The Institute for Systems Research, University of Maryland at College Park, MD 20742 (United States); Otomagnetics, LLC (United States); Shimoji, Mika [Fischell Department of Bioengineering, University of Maryland at College Park, MD 20742 (United States); Otomagnetics, LLC (United States); Shapiro, Benjamin [Fischell Department of Bioengineering, University of Maryland at College Park, MD 20742 (United States); The Institute for Systems Research, University of Maryland at College Park, MD 20742 (United States); Otomagnetics, LLC (United States)
2015-11-01
This article presents a method to investigate how magnetic particle characteristics affect their motion inside tissues under the influence of an applied magnetic field. Particles are placed on top of freshly excised tissue samples, a calibrated magnetic field is applied by a magnet underneath each tissue sample, and we image and quantify particle penetration depth by quantitative metrics to assess how particle sizes, their surface coatings, and tissue resistance affect particle motion. Using this method, we tested available fluorescent particles from Chemicell of four sizes (100 nm, 300 nm, 500 nm, and 1 μm diameter) with four different coatings (starch, chitosan, lipid, and PEG/P) and quantified their motion through freshly excised rat liver, kidney, and brain tissues. In broad terms, we found that the applied magnetic field moved chitosan particles most effectively through all three tissue types (as compared to starch, lipid, and PEG/P coated particles). However, the relationship between particle properties and their resulting motion was found to be complex. Hence, it will likely require substantial further study to elucidate the nuances of transport mechanisms and to select and engineer optimal particle properties to enable the most effective transport through various tissue types under applied magnetic fields.
Criteria of classification applied to licensing of particle accelerators
This work aims to open a discussion of a proposed new classification model for facilities that generate ionizing radiation, specifically particle accelerators, considering two parameters: the size of the facility and the energy level at which it operates, with emphasis on large accelerators, which typically operate at higher energy levels. Motivated also by the fact that the Brazilian rules do not provide an adequate licensing standard for installations of this size, this work revisits the existing classification, in which generators of ionizing radiation (including particle accelerators) are considered only up to an energy level of 50 MeV.
Scanning tomographic particle image velocimetry applied to a turbulent jet
Casey, T. A.
2013-02-21
We introduce a modified tomographic PIV technique using four high-speed video cameras and a scanning pulsed laser volume. By rapidly illuminating adjacent subvolumes onto separate video frames, we can resolve a larger total volume of velocity vectors while retaining good spatial resolution. We demonstrate this technique by performing time-resolved measurements of the turbulent structure of a round jet, using up to 9 adjacent volume slices. In essence, this technique resolves more velocity planes in the depth direction by maintaining optimal particle image density and limiting the number of ghost particles. The total measurement volumes contain between 1×10⁶ and 3×10⁶ velocity vectors calculated from up to 1500 reconstructed depthwise image planes, showing the time-resolved evolution of the large-scale vortical structures of a turbulent jet at Re up to 10 000.
Particle Swarm Optimization Applied to the Economic Dispatch Problem
Rafik Labdani
2006-06-01
This paper presents a solution of the optimal power flow (OPF) problem of a power system via a simple particle swarm optimization (PSO) algorithm. The objective is to minimize the fuel cost while keeping the power outputs of generators, bus voltages, shunt capacitors/reactors, and transformer tap settings within their secure limits. The effectiveness of PSO was compared to that of the OPF solution computed by MATPOWER. The potential and superiority of PSO have been demonstrated through results on the IEEE 30-bus system.
CAS CERN Accelerator School: Applied geodesy for particle accelerators
This specialized course addresses the many topics involved in the application of geodesy to large particle accelerators, though many of the techniques described are equally applicable to large construction projects and surveillance systems where the highest possible surveying accuracies are required. The course reflects the considerable experience gained over many years, not only at CERN but in projects all over the world. The methods described range from the latest approach using satellites to recent developments in conventional techniques. They include the global positioning system (GPS), its development, deployment and precision, the use of the Terrameter and the combination or comparison of its results with those of the GPS, the automation of instruments, the management of measurements and data, and the highly evolved treatment of the observations. (orig.)
Applying new solar particle event models to interplanetary satellite programs
Variability in the models and methods used for single event upset (SEU) calculations in microelectronic memory devices can lead to a range of possible upset rates. In order to compare the Adams 1986 interplanetary solar flare model to a new model proposed by scientists at the Jet Propulsion Laboratory (JPL92), the authors have calculated an array of upset rates using heavy ion and proton data for selected DRAM and SRAM memories and for Actel Field Programmable Gate Arrays (FPGAs). To make more general comparisons of the models, the authors have produced a set of engineering curves of predicted upset rates versus hypothetical device cross-section parameters. The results show that use of this more realistic, although still conservative, JPL model can have significant benefits for satellite programs, especially those which must operate continuously during solar particle events. The benefits include more flexibility in model choice, a higher level of confidence in the environment, and potential cost savings from the calculation of less pessimistic SEU rates, which allows designers to integrate commercial products into their spacecraft designs with the use of Error Detection and Correction (EDAC) schemes.
When the Schroedinger equation of quantum mechanics is replaced by the nonlinear Schroedinger equation to describe microscopic particles in nonlinear quantum systems, it has been verified that the nature of the particles differs considerably from that in quantum mechanics: they are localized and also have wave-corpuscle duality due to the nonlinear interactions. In this case the influence of externally applied potentials in the nonlinear Schroedinger equation on the nature of the microscopic particles has been studied by a perturbation theory. The results show that the external potential can change the states of the microscopic particles, such as their positions, amplitude, and wave forms, but cannot change the wave-corpuscle duality. Meanwhile, we find further that the relationship between the external potential and the change in the positions of the particle satisfies the laws of motion of classical particles. Thus we know from this study that the kinetic energy term, (ℏ²/2m)∇²φ, in the nonlinear Schroedinger equation gives the microscopic particles only their wave feature, while the nonlinear interaction b|φ|²φ determines their corpuscle feature; their combination gives the microscopic particles wave-corpuscle duality, and the potential V(r⃗,t)φ changes only the positions, amplitude, and wave form of the particles. Therefore the nonlinear interaction plays an important role in determining the wave-corpuscle duality of microscopic particles in quantum theory.
A re-examination of symmetry/group relationships as applied to the elementary particles
The purpose of this investigation is to apply Group Theory to the elementary particles. Group Theory is a mathematical discipline used by physicists to predict the existence of elementary particles. Perhaps the most famous application of Group Theory to the elementary particles was by Murray Gell-Mann in 1964. Gell-Mann used the theory to predict the existence and characteristics of the then-undiscovered Omega Minus particle. Group Theory relies heavily on symmetry relationships and expresses them in terms of geometry. The existence and characteristics of a logically intuitable but unobserved member of a group are given by extrapolation of the geometric relationships and characteristics of the known members of the group. In this study, the Delta, Sigma, Chi, and Omega baryons are used to illustrate how physicists apply geometry and symmetry relationships to predict new particles. The author's hypothesis is that, by using the D3 crystal symmetry group and Gell-Mann's baryons, three new particles will be predicted. The author's new symmetry predicts the Omega 2, Omega 3, and Chi 3. However, the Chi 3 does not have characteristics consistent with those of the other known group members.
Mica track microfilters applied in a cascade particle fractionator at an industrial plant
Mica Track Microfilters (MTM) are produced by irradiating mica discs with heavy ions. The air throughput dV/dt is investigated experimentally within the temperature range 20 °C ≤ T ≤ 300 °C and described by a simple formula. MTM were placed in a Cascade Particle Fractionator and applied in the filtration of hot and radioactive gas under industrial conditions. (author)
Use of magnetic particles to apply mechanical forces for bone tissue engineering purposes
Cartmell, S H; Keramane, A; Kirkham, G R; Verschueren, S B; Magnay, J L; El Haj, A J; Dobson, J [Institute of Science and Technology in Medicine, University of Keele, Thornburrow Drive, Hartshill, Stoke-on-Trent, Staffordshire ST4 7QB (United Kingdom)
2005-01-01
It is possible to influence osteoblast activity by the application of mechanical forces. There is potential in using these forces for tissue engineering applications in that cell matrix production may be upregulated, resulting in a functional tissue engineered construct created in a shorter culture time. We have been developing a novel technique for applying mechanical forces directly to the cell with the use of magnetic particles. Particles attached to the cell membrane can be manipulated using an external magnetic field thus applying forces in the piconewton range. We have previously demonstrated that primary human osteoblasts respond to this type of stimulus by upregulating bone related gene expression and producing mineralized matrix at early time points. In this paper we discuss the optimization of this technique by presenting data on the effects of this type of force on osteoblast proliferation, phagocytosis and also the potential use of this technique in developing 3D tissue engineered constructs.
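A hedged back-of-envelope sketch of the piconewton scale mentioned above: for a magnetizable bead in a field gradient, a common estimate is F = V·(Δχ/μ₀)·B·(dB/dx). Every numerical value below (bead size, susceptibility difference, field, and gradient) is an illustrative assumption, not data from the paper.

```python
import math

mu0 = 4e-7 * math.pi          # vacuum permeability (T*m/A)
d = 4.0e-6                    # assumed bead diameter: 4 micrometres
V = math.pi * d ** 3 / 6      # bead volume (m^3)
dchi = 0.1                    # assumed effective susceptibility difference
B, dBdx = 0.1, 10.0           # assumed field (T) and gradient (T/m)

F = V * (dchi / mu0) * B * dBdx   # force on the bead (N)
F_pN = F * 1e12                   # express in piconewtons
```

With these assumed numbers the force comes out at a few piconewtons, consistent with the range the abstract cites for membrane-attached particles.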
Calculation of tunnel splitting in a biaxial spin particle with an applied magnetic field
Shen, SQ; Zhou, B.; Liang, JQ
2004-01-01
The level splitting formulae of excited states as well as ground state for a biaxial spin particle in the presence of an applied magnetic field are obtained in a simple way from Schrödinger theory. Considering the boundary condition of the wave function, we obtain the tunneling splitting of the energy levels for half-integral spins as well as for the integral spins. The results obtained are compared with those previously derived by complicated pseudoparticle methods and numerical calculation ...
Otto, S.; Trautmann, T.; Wendisch, M.
2011-01-01
Realistic size equivalence and shape of Saharan mineral dust particles are derived from in-situ particle, lidar and sun photometer measurements during SAMUM-1 in Morocco (19 May 2006), dealing with measured size- and altitude-resolved axis ratio distributions of assumed spheroidal model particles. The data were applied in optical property, radiative effect, forcing and heating effect simulations to quantify the realistic impact of particle non-sphericity. It turned out that volume-to-surface ...
Otto, S.; Trautmann, T.; Wendisch, M.
2010-01-01
Realistic size equivalence and shape of Saharan mineral dust particles are derived from in-situ particle, lidar and sun photometer measurements during SAMUM-1 in Morocco (19 May 2006), dealing with measured size- and altitude-resolved axis ratio distributions of assumed spheroidal model particles. The data were applied in optical property, radiative effect, forcing and heating effect simulations to quantify the realistic impact of particle non-sphericity. It turned out that volume-to-surface ...
Particle swarm optimization applied to data reconciliation in nuclear power plant
Mass and energy balances are important issues that need to be taken into account in nuclear power plants. Data Reconciliation and Parameter Estimation (DRPE) and gross error detection are techniques of increasing interest. Genetic Algorithms (GA) have been used successfully on the nonlinear optimization problem of Data Reconciliation (DR), and evolutionary algorithms appear to perform well without the complex calculations required by conventional methods. The aim of this paper is to present the Particle Swarm Optimization (PSO) algorithm as an alternative to the modified GA, applied to data reconciliation with simultaneous gross error detection. In this paper, the DR formulation uses a redescending estimator as the objective function, and simulation results show that PSO applied to the DRPE problem is faster than the modified GA presented in the literature, does not involve complex calculations, and does not need complex parameter tuning. The PSO algorithm is also able to handle the non-differentiable characteristics of the redescending estimator. (author)
Ičević Ivana Đ.
2011-01-01
Polyhydroxylated, water-soluble fullerenol C60(OH)24 nanoparticles (FNP) have shown pronounced biological activity in in vitro and in vivo models. The goal of this work was to investigate the potential protective effects of orally applied FNP on rats given a single dose of doxorubicin (DOX, 8 mg/kg, i.p.) 6 h after the last application of FNP. After the last drug administration, the rats were sacrificed, and blood and tissues were taken for analysis. The biochemical and pathological results obtained in this study indicate that fullerenol (FNP), given orally in an H2O:DMSO (80:20, w/w) solution in final doses of 10, 14.4, and 21.2 mg/kg on three successive days, has a protective (hepatoprotective and nephroprotective) effect against doxorubicin-induced cytotoxicity via its antioxidant properties.
Recent advances in hybrid methods applied to neutral particle transport problems
Particle transport methods are essential for accurate simulation of nuclear systems, including nuclear reactors, medical devices, nondestructive interrogation devices, and radiation imaging devices. Commonly, the Monte Carlo and deterministic discrete ordinates (Sn) approaches are used to solve radiation transport problems. Both approaches may become inefficient when used for simulation of large 3-D real-world problems. Consequently, various hybrid methodologies have been developed; these can be categorized into four groups: coupled deterministic and Monte Carlo methods; Monte Carlo variance reduction using the deterministic importance function; acceleration of deterministic methods based on a lower-order deterministic formulation; and coupled deterministic methods. This paper compares the Sn deterministic and Monte Carlo approaches, reviews different hybrid methodologies, and discusses recent methods we (the University of Florida Transport Theory Group (UFTTG)) have developed and applied to real-world problems. (author)
Particle swarm optimization with random keys applied to the nuclear reactor reload problem
In 1995, Kennedy and Eberhart presented Particle Swarm Optimization (PSO), an Artificial Intelligence metaheuristic technique for optimizing non-linear continuous functions. The concept of Swarm Intelligence is based on the social aspects of intelligence, that is, the ability of individuals to learn from their own experience within a group as well as to take advantage of the performance of other individuals. Some PSO models for discrete search spaces have been developed for combinatorial optimization, although none of them produced satisfactory results when optimizing a combinatorial problem such as the nuclear reactor fuel reloading problem (NRFRP). For this reason, we developed Particle Swarm Optimization with Random Keys (PSORK) in previous research to solve combinatorial problems. Experience demonstrated that PSORK performed comparably to or better than other techniques. Thus, the PSORK metaheuristic is being applied in optimization studies of the NRFRP for the Angra 1 Nuclear Power Plant. Results will be compared with Genetic Algorithms and the manual method provided by a specialist. In this experiment, the problem is modeled for an eighth-core symmetry and three-dimensional geometry, aiming at the minimization of the Nuclear Enthalpy Power Peaking Factor as well as the maximization of the cycle length. (author)
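The random-keys device that lets a continuous PSO handle a permutation problem like fuel reloading can be sketched in a few lines: each particle is a vector of continuous keys, and sorting the keys yields a permutation (e.g. a loading order). This is an illustration of the general random-keys encoding, with assumed example values, not the PSORK implementation itself.

```python
import numpy as np

def decode_random_keys(keys):
    """Map a continuous key vector to a permutation by argsort: the item
    with the smallest key goes first, and so on. The PSO velocity/position
    updates then operate on the keys exactly as in continuous PSO."""
    return np.argsort(keys)

# Toy particle with four keys, e.g. one per fuel assembly position.
keys = np.array([0.62, 0.11, 0.87, 0.45])
perm = decode_random_keys(keys)
```

Because every real-valued key vector decodes to a valid permutation, the swarm never produces infeasible orderings, which is what makes the continuous PSO machinery applicable to this combinatorial problem.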
Particle tracking velocimetry applied to estimate the pressure field around a Savonius turbine
Murai, Yuichi; Nakada, Taishi; Suzuki, Takao; Yamamoto, Fujio
2007-08-01
Particle tracking velocimetry (PTV) is applied to flows around a Savonius turbine. The velocity vector field measured with PTV is utilized to estimate the pressure field around the turbine, as well as to evaluate the torque performance. The main objective of the work is the establishment of the pressure estimation scheme required to discuss the turbine performance. First, the PTV data are interpolated on a regular grid with a fourth-order ellipsoidal differential equation to generate velocity vectors satisfying the third-order spatio-temporal continuity both in time and space. Second, the phase-averaged velocity vector information with respect to the turbine angle is substituted into three different types of pressure-estimating equations, i.e. the Poisson equation, the Navier-Stokes equation and the sub-grid scale model of turbulence. The results obtained based on the Navier-Stokes equation are compared with those based on the Poisson equation, and have shown several merits in employing the Navier-Stokes-based method for the PTV measurement. The method is applied to a rotating turbine with the tip-speed ratio of 0.5 to find the relationship between torque behaviour and flow structure in a phase-averaged sense. We have found that a flow attached to the convex surface of the blades induces low-pressure regions to drive the turbine, namely, the lift force helps the turbine blades to rotate even when the drag force is insufficient. Secondary mechanisms of torque generation are also discussed.
In aerosol research, aerosols of known size, shape, and density are highly desirable because most aerosol properties depend strongly on particle size. However, constant and reproducible generation of aerosol particles whose size and concentration can be easily controlled can be achieved only in laboratory-scale tests. In large-scale experiments, different generation methods for various elements and compounds have been applied. This work presents, in brief form, a review of the applications of these methods in large-scale experiments on aerosol behaviour and source term. A description of the generation method and the transport conditions of the generated aerosol is followed by the properties of the obtained aerosol, the aerosol instrumentation used, and the scheme of the aerosol generation system, wherever available. Information concerning the particular purposes of aerosol generation and reference number(s) is given at the end of each case. The methods reviewed are: evaporation-condensation, using furnace heating or a plasma torch; atomization of liquid, using compressed air nebulizers, ultrasonic nebulizers, and atomization of liquid suspensions; and dispersion of powders. Among the projects included in this work are: ACE, LACE, GE Experiments, EPRI Experiments, LACE-Spain, UKAEA Experiments, BNWL Experiments, ORNL Experiments, MARVIKEN, SPARTA and DEMONA. The main chemical compounds studied are: Ba, Cs, CsOH, CsI, Ni, Cr, NaI, TeO2, UO2Al2O3, Al2SiO5, B2O3, Cd, CdO, Fe2O3, MnO, SiO2, AgO, SnO2, Te, U3O8, BaO, CsCl, CsNO3, urania, RuO2, TiO2, Al(OH)3, BaSO4, Eu2O3 and Sn. (Author)
Walter, Johannes; Thajudeen, Thaseem; Süß, Sebastian; Segets, Doris; Peukert, Wolfgang
2015-04-01
Analytical centrifugation (AC) is a powerful technique for the characterisation of nanoparticles in colloidal systems. As a direct and absolute technique it requires no calibration or measurements of standards. Moreover, it offers simple experimental design and handling, high sample throughput as well as moderate investment costs. However, the full potential of AC for nanoparticle size analysis requires the development of powerful data analysis techniques. In this study we show how the application of direct boundary models to AC data opens up new possibilities in particle characterisation. An accurate analysis method, successfully applied to sedimentation data obtained by analytical ultracentrifugation (AUC) in the past, was used for the first time in analysing AC data. Unlike traditional data evaluation routines for AC using a designated number of radial positions or scans, direct boundary models consider the complete sedimentation boundary, which results in significantly better statistics. We demonstrate that meniscus fitting, as well as the correction of radius and time invariant noise significantly improves the signal-to-noise ratio and prevents the occurrence of false positives due to optical artefacts. Moreover, hydrodynamic non-ideality can be assessed by the residuals obtained from the analysis. The sedimentation coefficient distributions obtained by AC are in excellent agreement with the results from AUC. Brownian dynamics simulations were used to generate numerical sedimentation data to study the influence of diffusion on the obtained distributions. Our approach is further validated using polystyrene and silica nanoparticles. In particular, we demonstrate the strength of AC for analysing multimodal distributions by means of gold nanoparticles.
Groot, S.; Harmanny, R.; Driessen, H.; Yarovoy, A.
2013-01-01
In this article, a novel motion model-based particle filter implementation is proposed to classify human motion and to estimate key state variables, such as motion type, i.e. running or walking, and the subject’s height. Micro-Doppler spectrum is used as the observable information. The system and me
Two-particle quantum walks applied to the graph isomorphism problem
Gamble, John King; Zhou, Dong; Joynt, Robert; Coppersmith, S N
2010-01-01
We show that the quantum dynamics of interacting and noninteracting quantum particles are fundamentally different in the context of solving a particular computational problem. Specifically, we consider the graph isomorphism problem, in which one wishes to determine whether two graphs are isomorphic (related to each other by a relabeling of the graph vertices), and focus on a class of graphs with particularly high symmetry called strongly regular graphs (SRGs). We study the Green's functions that characterize the dynamical evolution of single-particle and two-particle quantum walks on pairs of non-isomorphic SRGs and show that interacting particles can distinguish non-isomorphic graphs that noninteracting particles cannot. We obtain the following specific results: (1) We prove that quantum walks of two noninteracting particles, fermions or bosons, cannot distinguish certain pairs of non-isomorphic SRGs. (2) We demonstrate numerically that two interacting bosons are more powerful than single particles and two n...
Rigley, Michael
2014-01-01
Physically based preconditioning is applied to linear systems resulting from solving the first order formulation of the particle transport equation and from solving the homogenized form of the simple flow equation for porous media flows. The first order formulation of the particle transport equation is solved two ways. The first uses a least squares finite element method resulting in a symmetric positive definite linear system which is solved by a preconditioned conjugate gradient method. The...
A computational framework for particle and whole cell tracking applied to a real biological dataset.
Yang, Feng Wei; Venkataraman, Chandrasekhar; Styles, Vanessa; Kuttenberger, Verena; Horn, Elias; von Guttenberg, Zeno; Madzvamuse, Anotida
2016-05-24
Cell tracking is becoming increasingly important in cell biology as it provides a valuable tool for analysing experimental data and hence furthering our understanding of dynamic cellular phenomena. The advent of high-throughput, high-resolution microscopy and imaging techniques means that a wealth of data is routinely generated in many laboratories. Due to the sheer magnitude of the data involved, manual tracking is often cumbersome, and the development of computer algorithms for automated cell tracking is thus highly desirable. In this work, we describe two approaches for automated cell tracking. Firstly, we consider particle tracking. We propose several segmentation techniques for the detection of cells migrating in a non-uniform background; centroids of the segmented cells are then calculated and linked from frame to frame via a nearest-neighbour approach. Secondly, we consider the problem of whole cell tracking, in which one wishes to reconstruct in time whole cell morphologies. Our approach is based on fitting a mathematical model to the experimental imaging data, with the goal being that the physics encoded in the model is reflected in the reconstructed data. The resulting mathematical problem involves the optimal control of a phase-field formulation of a geometric evolution law. Efficient approximation of this challenging optimal control problem is achieved via advanced numerical methods for the solution of semilinear parabolic partial differential equations (PDEs), coupled with parallelisation and adaptive resolution techniques. Along with a detailed description of our algorithms, a number of simulation results are reported. We focus on illustrating the effectiveness of our approaches by applying the algorithms to the tracking of migrating cells in a dataset which reflects many of the challenges typically encountered in microscopy data. PMID:26948574
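The frame-to-frame linking step described in this abstract can be sketched with a simple greedy nearest-neighbour matcher (an illustrative toy; the function name, distance threshold, and matching order are assumptions, not the authors' implementation):

```python
import numpy as np

def link_centroids(frame_a, frame_b, max_dist=20.0):
    """Greedily link cell centroids in frame_a to their nearest
    unclaimed centroid in frame_b, within a distance cutoff."""
    links = []
    used = set()
    for i, p in enumerate(frame_a):
        d = np.linalg.norm(frame_b - p, axis=1)  # distances to all next-frame centroids
        for j in np.argsort(d):
            if d[j] > max_dist:
                break                            # no plausible match: track ends here
            if j not in used:
                used.add(j)
                links.append((i, int(j)))
                break
    return links

a = np.array([[0.0, 0.0], [10.0, 10.0]])   # centroids in frame t
b = np.array([[10.5, 9.5], [1.0, 0.5]])    # centroids in frame t+1
print(link_centroids(a, b))  # [(0, 1), (1, 0)]
```

A production tracker would additionally handle cell division, appearance and disappearance; greedy matching is only the simplest consistent baseline.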
Two-particle quantum walks applied to the graph isomorphism problem
We show that the quantum dynamics of interacting and noninteracting quantum particles are fundamentally different in the context of solving a particular computational problem. Specifically, we consider the graph isomorphism problem, in which one wishes to determine whether two graphs are isomorphic (related to each other by a relabeling of the graph vertices), and focus on a class of graphs with particularly high symmetry called strongly regular graphs (SRGs). We study the Green's functions that characterize the dynamical evolution of single-particle and two-particle quantum walks on pairs of nonisomorphic SRGs and show that interacting particles can distinguish nonisomorphic graphs that noninteracting particles cannot. We obtain the following specific results. (1) We prove that quantum walks of two noninteracting particles, fermions or bosons, cannot distinguish certain pairs of nonisomorphic SRGs. (2) We demonstrate numerically that two interacting bosons are more powerful than single particles and two noninteracting particles, in that quantum walks of interacting bosons distinguish all nonisomorphic pairs of SRGs that we examined. By utilizing high-throughput computing to perform over 500 million direct comparisons between evolution operators, we checked all tabulated pairs of nonisomorphic SRGs, including graphs with up to 64 vertices. (3) By performing a short-time expansion of the evolution operator, we derive distinguishing operators that provide analytic insight into the power of the interacting two-particle quantum walk.
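The evolution-operator comparison described above can be illustrated in the much simpler single-particle setting on small non-SRG graphs, where a continuous-time quantum walk already separates non-isomorphic graphs (a sketch only; the hard cases in the paper are SRGs, which require the two-particle interacting walk):

```python
import numpy as np

def walk_operator(adj, t=1.0):
    """Continuous-time quantum walk U(t) = exp(-i*A*t), computed via
    eigendecomposition of the symmetric adjacency matrix A."""
    vals, vecs = np.linalg.eigh(adj)
    return vecs @ np.diag(np.exp(-1j * vals * t)) @ vecs.conj().T

def signature(adj, t=1.0):
    """Relabeling-invariant fingerprint: sorted magnitudes of U's entries.
    Isomorphic graphs necessarily yield identical signatures."""
    return np.sort(np.abs(walk_operator(adj, t)).ravel())

# Two non-isomorphic 4-vertex graphs: path 0-1-2-3 versus star centred at 0
path = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], float)
star = np.array([[0,1,1,1],[1,0,0,0],[1,0,0,0],[1,0,0,0]], float)
print(np.allclose(signature(path), signature(star)))  # False: the walk distinguishes them
```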
Applying the relativistic quantization condition to a three-particle bound state in a periodic box
Hansen, Maxwell T
2016-01-01
Using our recently developed relativistic three-particle quantization condition, we study the finite-volume energy shift of a three-particle bound state. We reproduce the result obtained using non-relativistic quantum mechanics by Meißner, Ríos and Rusetsky, and generalize the result to a moving frame.
S. J. Noh
2011-04-01
Applications of data assimilation techniques have been widely used to improve hydrologic prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", provide the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to account for the different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. Regularization with an additional move step based on Markov chain Monte Carlo (MCMC) is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, WEP, is implemented for sequential data assimilation through the updating of state variables. Particle filtering is parallelized and implemented in a multi-core computing environment via open message passing interface (MPI). We compare the performance of the particle filters in terms of model efficiency, predictive QQ plots and particle diversity. Improved model efficiency and preservation of particle diversity are found for the lagged regularized particle filter.
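The sequential-importance-resampling (SIR) baseline that the lagged regularized filter builds on can be sketched for a toy scalar state-space model (the model, noise levels and function names here are illustrative assumptions, nothing like the full WEP setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_step(particles, y_obs, q_std=0.5, r_std=0.3):
    """One SIR step for x_t = x_{t-1} + noise, y_t = x_t + noise:
    propagate, weight by the Gaussian likelihood, then resample."""
    particles = particles + rng.normal(0.0, q_std, particles.shape)  # propagate
    w = np.exp(-0.5 * ((y_obs - particles) / r_std) ** 2)            # likelihood weights
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)       # resample
    return particles[idx]

particles = rng.normal(0.0, 1.0, 500)   # diffuse prior around 0
for y in [0.9, 1.1, 1.0]:               # three synthetic observations near 1.0
    particles = sir_step(particles, y)
print(float(particles.mean()))          # posterior mean pulled toward the observations
```

The lagged and regularized variants in the paper modify exactly the weighting and resampling stages shown here, to cope with slow-responding states and particle impoverishment.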
Particle tower technology applied to metallurgic plants and peak-time boosting of steam power plants
Amsbeck, Lars; Buck, Reiner; Prosin, Tobias
2016-05-01
Using solar tower technology with ceramic particles as the heat transfer and storage medium to preheat scrap for induction furnaces in foundries provides solar-generated heat to save electricity. With such a system, an unsubsidized payback time of only 4 years is achieved for a 70000 t/a foundry in Brazil. The same system can also be used for heat treatment of metals. If electricity is used to heat inert atmospheres, a favorable economic performance is also achievable for the particle system. The storage in a particle system allows solar boosting to be restricted to peak times only, creating an interesting business case opportunity.
S. Otto
2010-11-01
Realistic size equivalence and shape of Saharan mineral dust particles are derived from in-situ particle, lidar and sun photometer measurements during SAMUM-1 in Morocco (19 May 2006), dealing with measured size- and altitude-resolved axis ratio distributions of assumed spheroidal model particles. The data were applied in optical property, radiative effect, forcing and heating effect simulations to quantify the realistic impact of particle non-sphericity. It turned out that volume-to-surface equivalent spheroids with prolate shape are most realistic: particle non-sphericity only slightly affects single scattering albedo and asymmetry parameter but may enhance the extinction coefficient by up to 10%. At the bottom of the atmosphere (BOA) the Saharan mineral dust always leads to a loss of solar radiation, while the sign of the forcing at the top of the atmosphere (TOA) depends on surface albedo: solar cooling/warming over a mean ocean/land surface. In the thermal spectral range the dust inhibits the emission of radiation to space and warms the BOA. The most realistic case of particle non-sphericity causes changes of total (solar plus thermal) forcing by 55/5% at the TOA over ocean/land and 15% at the BOA over both land and ocean, and enhances total radiative heating within the dust plume by up to 20%. Large dust particles significantly contribute to all the radiative effects reported.
Ying-Yi Hong
2014-01-01
Particle swarm optimization (PSO) has been successfully applied to solve many practical engineering problems. However, more efficient strategies are needed to coordinate global and local searches in the solution space when the studied problem is extremely nonlinear and highly dimensional. This work proposes a novel adaptive elite-based PSO approach. The adaptive elite strategies involve the following two tasks: (1) appending the mean search to the original approach and (2) pruning/cloning particles. The mean search, leading to stable convergence, helps the iterative process coordinate between the global and local searches. The mean of the particles and the standard deviation of the distances between pairs of particles are utilized to prune distant particles. The best particle is cloned and replaces the pruned distant particles in the elite strategy. To evaluate the performance and generality of the proposed method, four benchmark functions were tested against traditional PSO, chaotic PSO, differential evolution, and a genetic algorithm. Finally, a realistic loss minimization problem in an electric power system is studied to show the robustness of the proposed method.
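For reference, a plain global-best PSO baseline, without the paper's elite pruning/cloning or mean search, and with commonly used but assumed parameter values, looks like this:

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(f, dim=2, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Global-best PSO: each particle is pulled toward its personal best
    and the swarm's global best, with inertia weight w."""
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # velocity update
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]    # personal bests
        g = pbest[pval.argmin()]                               # global best
    return g, float(pval.min())

best, val = pso(lambda z: float(np.sum(z ** 2)))  # sphere benchmark, optimum at 0
print(val < 1e-6)
```

The adaptive elite strategies described in the abstract would be inserted inside this loop: pruning particles far from the swarm mean and replacing them with clones of `g`.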
Advances in Uncertainty Representation and Management for Particle Filtering Applied to Prognostics
National Aeronautics and Space Administration — Particle filters (PF) have been established as the de facto state of the art in failure prognosis. They combine advantages of the rigors of Bayesian estimation to...
Two-particle quantum walks applied to the graph isomorphism problem
Gamble, John King; Friesen, Mark; Zhou, Dong; Joynt, Robert; Coppersmith, S. N.
2011-03-01
We show that an algorithm based on the dynamics of interacting quantum particles is more powerful than the corresponding algorithm for non-interacting particles. Specifically, our algorithm attempts to determine whether two graphs are isomorphic. We focus on strongly regular graphs (SRGs), a class of graphs with particularly high symmetry. By studying the dynamical evolution of two-particle quantum walks on pairs of non-isomorphic SRGs, we show that interacting particles can distinguish non-isomorphic graphs that noninteracting particles cannot. First, we prove that quantum walks of two noninteracting particles cannot distinguish pairs of non-isomorphic SRGs. Next, we demonstrate numerically that two interacting bosons are more powerful, in that their quantum walks distinguish all non-isomorphic pairs of SRGs we tried, including those with up to 64 vertices. Finally, we find a set of operators that determine these evolutions. This work was supported in part by ARO and DOD (W911NF-09-1-0439). J.K.G. acknowledges support from the NSF.
S. J. Noh
2011-10-01
Data assimilation techniques have received growing attention due to their capability to improve prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", are a Bayesian learning process with the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to account for the different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. Regularization with an additional move step based on Markov chain Monte Carlo (MCMC) methods is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, water and energy transfer processes (WEP), is implemented for sequential data assimilation through the updating of state variables. The lagged regularized particle filter (LRPF) and the sequential importance resampling (SIR) particle filter are implemented for hindcasting of streamflow at the Katsura catchment, Japan. Control state variables for filtering are soil moisture content and overland flow. Streamflow measurements are used for data assimilation. LRPF shows consistent forecasts regardless of the process noise assumption, while SIR has different values of optimal process noise and shows sensitive variation of confidence intervals depending on the process noise. Improvement of LRPF forecasts compared to SIR is particularly found for rapidly varying high flows, due to the preservation of sample diversity by the kernel even when particle impoverishment takes place.
Fellows, C.E.; Rodegheri, C.C.; Tauber, U. [Universidade Federal Fluminense (UFF), Niteroi, RJ (Brazil). Inst. de Fisica. Lab. de Espectroscopia e Laser (LEL); Guterres, R.F. [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil). Coordenacao de Instalacoes Radiativas]. E-mail: rgutterr@cnen.gov.br
2005-11-15
An investigation on the application of alpha particles in the induction of a bias ionized background plasma before, during and after the discharge of the N2 TE UV laser (337.1 nm), built in the LEL-IF/UFF is presented. The alpha particles are provided by Americium (241-Am) stripes placed inside the discharge channel of the laser device. The stimulated radiation output characteristics, in terms of gas pressure, charging voltage and pulse width, of a N2 TE UV laser (337.1 nm) circuit are presented. The increased laser yield is interpreted qualitatively through plasma impedance in the discharge circuit. (author)
Künzli, Pierre; Tsunematsu, Kae; Albuquerque, Paul; Falcone, Jean-Luc; Chopard, Bastien; Bonadonna, Costanza
2016-04-01
Volcanic ash transport and dispersal models typically describe particle motion via a turbulent velocity field. Particles are advected inside this field from the moment they leave the vent of the volcano until they deposit on the ground. Several techniques exist to simulate particles in an advection field such as finite difference Eulerian, Lagrangian-puff or pure Lagrangian techniques. In this paper, we present a new flexible simulation tool called TETRAS (TEphra TRAnsport Simulator) based on a hybrid Eulerian-Lagrangian model. This scheme offers the advantages of being numerically stable with no numerical diffusion and easily parallelizable. It also allows us to output particle atmospheric concentration or ground mass load at any given time. The model is validated using the advection-diffusion analytical equation. We also obtained a good agreement with field observations of the tephra deposit associated with the 2450 BP Pululagua (Ecuador) and the 1996 Ruapehu (New Zealand) eruptions. As this kind of model can lead to computationally intensive simulations, a parallelization on a distributed memory architecture was developed. A related performance model, taking into account load imbalance, is proposed and its accuracy tested.
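The pure Lagrangian part of such a transport scheme can be sketched as particles drifting in a wind field with a random-walk term representing turbulent diffusion (values and names are illustrative, not the TETRAS configuration); the variance check below mirrors the kind of validation against the analytical advection-diffusion solution mentioned in the abstract:

```python
import numpy as np

rng = np.random.default_rng(2)

def advect_diffuse(n=5000, steps=100, dt=1.0, u=(2.0, 0.0), K=5.0):
    """Advect n particles released at the origin (the 'vent') in a uniform
    wind u, with a Gaussian random walk equivalent to eddy diffusivity K."""
    pos = np.zeros((n, 2))
    sigma = np.sqrt(2.0 * K * dt)                # random-walk step size per axis
    for _ in range(steps):
        pos += np.array(u) * dt                  # advection
        pos += rng.normal(0.0, sigma, pos.shape) # turbulent diffusion
    return pos

pos = advect_diffuse()
# Analytic expectations: mean drift u_x*t = 200, per-axis variance 2*K*t = 1000
print(float(pos[:, 0].mean()), float(pos[:, 0].var()))
```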
Studies on aerosol transfer carried out in the field of staff protection and nuclear plant safety are becoming increasingly important. Techniques that simulate pollutants with specific tracers exhibiting the same aeraulic behaviour are therefore an interesting tool for characterizing their transfers. Resorting to aerosols tagged with a fluorescent dye allows various studies in the ventilation and filtration field. The feasibility of real-time detection of a particulate tracer is the main aim of this work. The need for such a technique is obvious, because it can reveal the specific aerosol behaviour. Furthermore, direct measurements in real time are required for model validation in calculation codes: they give the most realistic information on the interaction between the contaminant and ventilation air flows. Up to now, the principle of fluorescent aerosol concentration measurement has allowed only an integral response at a delayed time, by means of sampling on filters and a fluorimetric analysis after specific conditioning of these filters. In order to detect the specific tracer in real time, we have developed a new monitor able to count these particles on the following basis: fluorescent particles pass through a sampling nozzle into a specially designed measurement chamber; the sheath flow rate is defined to confine the test aerosol within the sample flow at the nozzle outlet; the interception of this stream by a highly focused laser beam allows aerosol detection and characterization particle by particle; the signature of a passing aerosol is the burst of photons that occurs when the fluorophore contained in the glycerol particle is excited by light of an adapted wavelength; these signals are transmitted to a photodetector by a patented optical arrangement. An acquisition board interfaced to a computer then converts them into frequency histograms. In the end, two kinds of results can be provided simultaneously: the
A chaotic quantum-behaved particle swarm approach applied to optimization of heat exchangers
Particle swarm optimization (PSO) is a population-based optimization technique from the swarm intelligence field in which each solution, called a "particle", flies around in a multidimensional problem search space. During the flight, every particle adjusts its position according to its own experience as well as the experience of neighboring particles, using the best positions encountered by itself and its neighbors. In this paper, a new quantum particle swarm optimization (QPSO) approach combined with Zaslavskii chaotic map sequences (QPSOZ) is applied to shell and tube heat exchanger optimization, based on cost minimization from an economic viewpoint. The results obtained for two case studies using the proposed QPSOZ approach are compared with those obtained using a genetic algorithm, PSO and classical QPSO, showing the best performance for QPSOZ. To verify the capability of the proposed method, the two case studies also show that significant cost reductions are feasible with respect to traditionally designed exchangers. Referring to the literature test cases, reductions of capital investment up to 20% and 6% for the first and second cases, respectively, were obtained. Moreover, the annual pumping cost decreased markedly, by 72% and 75%, with an overall decrease of total cost up to 30% and 27% for cases 1 and 2, respectively, showing the improvement potential of the proposed method, QPSOZ. - Highlights: ► Shell and tube heat exchanger design is minimized from an economic viewpoint. ► A new quantum particle swarm optimization (QPSO) combined with Zaslavskii chaotic map sequences (QPSOZ) is proposed. ► Reductions of capital investment up to 20% and 6% for the first and second cases were obtained. ► Annual pumping cost decreased by 72% and 75%, with an overall decrease of total cost up to 30% and 27% using QPSOZ.
2014-01-01
Particle swarm optimization (PSO) has been successfully applied to solve many practical engineering problems. However, more efficient strategies are needed to coordinate global and local searches in the solution space when the studied problem is extremely nonlinear and highly dimensional. This work proposes a novel adaptive elite-based PSO approach. The adaptive elite strategies involve the following two tasks: (1) appending the mean search to the original approach and (2) pruning/cloning par...
Berlinger, B; Bugge, M D; Ulvestad, B; Kjuus, H; Kandler, K; Ellingsen, D G
2015-12-01
Air samples were collected by personal sampling with five-stage Sioutas cascade impactors and respirable cyclones in parallel among tappers and crane operators in two manganese (Mn) alloy smelters in Norway to investigate PM fractions. The mass concentrations of PM collected using the impactors and the respirable cyclones were critically evaluated by comparing the results of the parallel measurements. The geometric mean (GM) mass concentrations of the respirable fraction and the <10 μm PM fraction were 0.18 and 0.39 mg m(-3), respectively. Particle size distributions were determined using the impactor data in the range from 0 to 10 μm and by stationary measurements using a scanning mobility particle sizer in the range from 10 to 487 nm. On average, 50% of the particulate mass in the Mn alloy smelters was in the range from 2.5 to 10 μm, while the rest was distributed between the lower stages of the impactors. On average, 15% of the particulate mass was found in the <0.25 μm PM fraction. The comparisons of the different PM fraction mass concentrations related to different work tasks or workplaces showed in many cases statistically significant differences; however, the particle size distribution of PM in the fraction <10 μm d(ae) was independent of the plant, furnace or work task. PMID:26498986
S. Otto
2011-05-01
Realistic size equivalence and shape of Saharan mineral dust particles are derived from in-situ particle, lidar and sun photometer measurements during SAMUM-1 in Morocco (19 May 2006), dealing with measured size- and altitude-resolved axis ratio distributions of assumed spheroidal model particles. The data were applied in optical property, radiative effect, forcing and heating effect simulations to quantify the realistic impact of particle non-sphericity. It turned out that volume-to-surface equivalent spheroids with prolate shape are most realistic: particle non-sphericity only slightly affects single scattering albedo and asymmetry parameter but may enhance the extinction coefficient by up to 10%. At the bottom of the atmosphere (BOA) the Saharan mineral dust always leads to a loss of solar radiation, while the sign of the forcing at the top of the atmosphere (TOA) depends on surface albedo: solar cooling/warming over a mean ocean/land surface. In the thermal spectral range the dust inhibits the emission of radiation to space and warms the BOA. The most realistic case of particle non-sphericity causes changes of total (solar plus thermal) forcing by 55/5% at the TOA over ocean/land and 15% at the BOA over both land and ocean, and enhances total radiative heating within the dust plume by up to 20%. Large dust particles significantly contribute to all the radiative effects reported. They strongly enhance the absorbing properties and forward scattering in the solar range and predominantly increase, e.g., the total TOA forcing of the dust over land.
Efficiency of particle swarm optimization applied on fuzzy logic DC motor speed control
Allaoua Boumediene
2008-01-01
This paper presents the application of fuzzy logic for DC motor speed control using Particle Swarm Optimization (PSO). Firstly, the controller designed according to fuzzy logic rules is such that the system is fundamentally robust. Secondly, the fuzzy logic controller (FLC) used earlier was optimized with PSO so as to obtain optimal adjustment of the membership functions only. Finally, the FLC is completely optimized by swarm intelligence algorithms. Digital simulation results demonstrate that, in comparison with the FLC, the designed FLC-PSO speed controller obtains better dynamic behavior and superior performance of the DC motor, as well as perfect speed tracking with no overshoot.
Theoretical Models of Light Scattering Applied in Sizing Particles in Coal Water Slurry
王仁哲; 张荣曾; 徐志强
2004-01-01
Advantages and disadvantages of the Mie scattering model and the Fraunhofer diffraction model are discussed. The results show that: 1) the Fraunhofer diffraction model is simple in design and fast in operation, which makes it quite suitable for on-line control; and 2) the intensity and energy distributions of diffracted light predicted by the Mie scattering model and the Fraunhofer theoretical model are compared and analysed. The feasibility of using the Fraunhofer diffraction model in place of the Mie scattering model for measuring particles in coal water slurry is demonstrated.
Auger electron spectroscopy applied to inner shell ionization by fast charged particles
Until recently, inner shell ionization by charged particle impact was studied almost exclusively through the use of x-ray spectroscopy. This method is limited in accuracy, however, for ionization of inner shells where the fluorescence yield is small. For K-shell ionization of elements with atomic number less than about ten, the fluorescence yield can be considered negligible and Auger electron emission cross sections provide direct information regarding the ionization cross section. The ionization cross sections determined in this way are accurate to approximately 20 percent, whereas x-ray measurements may be uncertain by a factor of five or more due to uncertainties in fluorescence yields. In addition to ionization cross sections, Auger emission spectra provide information regarding multiple ionization and the effects of molecular binding on inner shell ionization and, when coupled with x-ray measurements, provide fluorescence yields as a function of the final state of the target atom. These points will be illustrated for ionization by fast protons, along with some results for heavier incident particles
Large surface proteins of hepatitis B virus containing the pre-s sequence.
Heermann, K H; Goldmann, U; Schwartz, W; Seyffarth, T; Baumgarten, H; Gerlich, W H
1984-11-01
The sequence of hepatitis B virus DNA contains an open reading frame which codes for a not-yet-identified protein of at least 389 amino acids. Only the products starting at the third (GP33/GP36) or the fourth (P24/GP27) initiation signal have been characterized as components of the viral surface antigen. We found a larger protein, P39, and its glycosylated form, GP42, in hepatitis B virus particles and viral surface antigen filaments. Immunological cross-reactions showed that P39/GP42 is partially homologous to P24/GP27 and GP33/GP36. The unique portion of its sequence bound monoclonal antibodies which had been induced by immunization with hepatitis B virus particles. Proteolytic cleavage patterns and subtype-specific size differences suggested that the sequence of P39 starts with the first initiation signal of the open reading frame. Its amino-terminal part (pre-s coded) is exposed at the viral surface and, probably, is highly immunogenic. A model is presented of how the open reading frame for the viral envelope leads to defined amounts of three different proteins. PMID:6492255
Applying Contact Angle to a 2D Multiphase Smoothed Particle Hydrodynamics Model
Farrokhpanah, Amirsaman; Mostaghimi, Javad
2016-01-01
Equilibrium contact angle of liquid drops over horizontal surfaces has been modeled using Smoothed Particle Hydrodynamics (SPH). The model is capable of accurate implementation of contact angles to stationary and moving contact lines. In this scheme, the desired value for stationary or dynamic contact angle is used to correct the profile near the triple point. This is achieved by correcting the surface normals near the contact line and also interpolating the drop profile into the boundaries. Simulations show that a close match to the chosen contact angle values can be achieved for both stationary and moving contact lines. This technique has proven to reduce the amount of nonphysical shear stresses near the triple point and to enhance the convergence characteristics of the solver.
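SPH interpolation of the kind used in this work rests on a normalised smoothing kernel; a common choice is the 2D cubic B-spline, sketched below with a grid-sum check of its unit normalisation (the specific kernel is an assumption, as the abstract does not name one):

```python
import numpy as np

def w_cubic_2d(r, h):
    """Standard 2D cubic-spline SPH kernel with support radius 2h and
    normalisation constant 10/(7*pi*h^2)."""
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h * h)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

# Check unit normalisation by summing the kernel over a fine 2D grid
h = 1.0
xs = np.linspace(-2.5, 2.5, 501)
dx = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs)
integral = float(np.sum(w_cubic_2d(np.hypot(X, Y), h)) * dx * dx)
print(round(integral, 3))  # ≈ 1.0
```

Contact-angle corrections such as those in the paper act on surface normals computed from gradients of this kind of kernel near the triple point.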
Applying Sequential Particle Swarm Optimization Algorithm to Improve Power Generation Quality
Abdulhafid Sallama
2014-10-01
The swarm optimization approach is a heuristic search method whose mechanics are inspired by the swarming or collaborative behaviour of biological populations. It is used to solve constrained, unconstrained, continuous and discrete problems. Swarm intelligence systems are widely used and very effective in solving standard and large-scale optimization problems, provided that the problem does not require multiple solutions. In this paper, a particle swarm optimisation technique is used to optimise a fuzzy logic controller (FLC) for stabilising a power generation and distribution network that consists of four generators. The system is subject to different types of faults (single- and multi-phase). Simulation studies show that the optimised FLC performs well in stabilising the network after it recovers from a fault. The controller is compared to multi-band and standard controllers.
The particle swarm optimization algorithm applied to nuclear systems surveillance test planning
This work presents a new approach to solving availability maximization problems in electromechanical systems under periodic preventive scheduled tests. The approach uses Particle Swarm Optimization (PSO), an optimization tool developed by Kennedy and Eberhart (2001), integrated with a probabilistic safety analysis model. Two maintenance optimization problems are solved by the proposed technique: the first is a hypothetical electromechanical configuration and the second is a real case from a nuclear power plant (emergency diesel generators). For both problems, PSO is compared to a genetic algorithm (GA). In the experiments, PSO obtained results comparable to, or even slightly better than, those obtained by GA. Moreover, the PSO algorithm is simpler and converges faster, indicating that PSO is a good alternative for solving this kind of problem. (author)
Highlights: • A variable order spherical harmonics scheme is presented. • An adaptive process is proposed to automatically refine the angular resolution. • A regular error estimator and a goal-based error estimator are presented. • The adaptive methods are applied to fixed source and eigenvalue problems. • Adaptive methods give more accurate solutions than uniform angular resolution. - Abstract: A variable order spherical harmonics scheme has been described and employed for the solution of the neutral particle transport equation. The scheme is specifically described with application within the inner-element sub-grid scale finite element spatial discretisation. The angular resolution is variable across both the spatial and energy dimensions. That is, the order of the spherical harmonic expansion may differ at each node of the mesh for each energy group. The variable order scheme has been used to develop adaptive methods for the angular resolution of the particle transport phase-space. Two types of adaptive method have been developed and applied to examples. The first is regular adaptivity, in which the error in the solution over the entire domain is minimised. The second is goal-based adaptivity, in which the error in a specified functional is minimised. The methods were applied to fixed source and eigenvalue examples. Both methods demonstrate an improved accuracy for a given number of degrees of freedom in the angular discretisation
Recent advances in particle-induced X-ray emission analysis applied to biological samples
Papers reporting the application of particle induced X-ray emission (PIXE) analysis to biological samples continue to appear regularly in the literature. The majority of these papers deal with blood, hair, and other common body organs, while a few deal with biological samples from the environment. A variety of sample preparation methods have been demonstrated, a number of which are improvements, refinements and extensions of the thick- and thin-sample preparation methods reported in the early development of PIXE. While many papers describe the development of PIXE techniques, some papers are now describing applications of the methods to serious biological problems. The following two factors may help to stimulate more consistent use of the PIXE method. First, each PIXE facility should be organized to give rapid sample processing and should have several sample preparation and handling methods available. Second, those with the skill to use PIXE methods need to become closely associated with researchers knowledgeable in the medical and biological sciences, and they also need to become more involved in project planning and sample handling. (orig.)
Goudard, R; Ribeiro, R; Klumb, F
1999-01-01
The Compact Muon Solenoid experiment, CMS, is one of the two general purpose experiments foreseen to operate at the Large Hadron Collider, LHC, at CERN, the European Laboratory for Particle Physics. The experiment aims to study very high energy collisions of proton beams. The scope of the experiment is the investigation of the most fundamental properties of matter, in particular the nature of electroweak symmetry breaking and the origin of mass. The central Tracking System, a six meter long cylinder with a 2.4 m diameter, will play a major role in all physics searches of the CMS experiment. Its performance depends on the intrinsic detector performance, on the stability of the supporting structure and on the overall survey, alignment and position monitoring system. The proposed position monitoring system is based on a novel lens-less laser straightness measurement method able to detect deviations from a nominal position of all structural elements of the Central Tracking system. It is based on the recipr...
Highlights: ► A new method called QPSO-DM is applied to BNPP in-core fuel management optimization. ► It is found that QPSO-DM performs better than PSO and QPSO. ► This method provides a permissible arrangement for optimum loading pattern. - Abstract: This paper presents a new method using Quantum Particle Swarm Optimization with Differential Mutation operator (QPSO-DM) for optimizing WWER-1000 core fuel management. Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) have shown good performance on in-core fuel management optimization (ICFMO). The objective of this paper is to show that QPSO-DM performs very well and is comparable to PSO and Quantum Particle Swarm Optimization (QPSO). Most of the strategies for ICFMO are based on maximizing multiplication factor (keff) to increase cycle length and minimizing power peaking factor (Pq) in order to improve fuel integrity. PSO, QPSO and QPSO-DM have been implemented to fulfill these requirements for the first operating cycle of WWER-1000 Bushehr Nuclear Power Plant (BNPP). The results show that QPSO-DM performs better than the others. A program has been written in MATLAB to map PSO, QPSO and QPSO-DM for loading pattern optimization. WIMS and CITATION have been used to simulate reactor core for neutronic calculations
Senegačnik, Jure; Tavčar, Gregor; Katrašnik, Tomaž
2015-03-01
The paper presents a computationally efficient method for solving the time dependent diffusion equation in a granule of the Li-ion battery's granular solid electrode. The method, called Discrete Temporal Convolution method (DTC), is based on a discrete temporal convolution of the analytical solution of the step function boundary value problem. This approach enables modelling concentration distribution in the granular particles for arbitrary time dependent exchange fluxes that do not need to be known a priori. It is demonstrated in the paper that the proposed method features faster computational times than finite volume/difference methods and Padé approximation at the same accuracy of the results. It is also demonstrated that all three addressed methods feature higher accuracy compared to the quasi-steady polynomial approaches when applied to simulate the current densities variations typical for mobile/automotive applications. The proposed approach can thus be considered as one of the key innovative methods enabling real-time capability of the multi particle electrochemical battery models featuring spatial and temporal resolved particle concentration profiles.
This work focuses on the use of the Artificial Intelligence technique Particle Swarm Optimization (PSO) to optimize the fuel reload of a nuclear reactor. This is a combinatorial problem, in which the search for the best feasible solution is done by minimizing a specific objective function. However, at this first stage it is possible to compare the fuel reload problem with the Traveling Salesman Problem (TSP), since both of them are combinatorial, with one advantage: the evaluation of the TSP objective function is much simpler. Thus, the proposed methods have been applied to two TSPs: Oliver 30 and Rykel 48. In 1995, KENNEDY and EBERHART presented the PSO technique to optimize non-linear continuous functions. Recently, some PSO models for discrete search spaces have been developed for combinatorial optimization, although all of them have formulations different from the ones presented here. In this paper, we use the PSO theory associated with the Random Keys (RK) model, used in some optimizations with Genetic Algorithms. Particle Swarm Optimization with Random Keys (PSORK) results from this association, which combines PSO and RK. The adaptations and changes in PSO aim to allow its use for the nuclear fuel reload problem. This work shows PSORK being applied to the proposed combinatorial problem and the results obtained. (author)
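The Random Keys model mentioned above maps a particle's continuous position vector to a permutation by sorting the keys, so that a standard continuous PSO can search a combinatorial space such as a TSP tour. A minimal sketch of the decoding step, with a toy distance matrix rather than the Oliver 30 or Rykel 48 instances:

```python
def decode_random_keys(keys):
    """Decode a particle's continuous position (the 'random keys')
    into a city visiting order: the argsort of the keys is always
    a valid permutation, so no repair step is needed."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

def tour_length(tour, dist):
    """Closed-tour length for a symmetric distance matrix `dist`."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

# A position vector such as [0.41, 0.07, 0.93, 0.55] decodes to the
# tour [1, 0, 3, 2]; PSO then moves particles in the continuous
# key space while fitness is evaluated on the decoded tours.
```

The same decoding applies to the fuel reload problem, with fuel assembly positions taking the place of cities.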
Goffin, Mark A., E-mail: mark.a.goffin@gmail.com [Applied Modelling and Computation Group, Department of Earth Science and Engineering, Imperial College London, London, SW7 2AZ (United Kingdom); Buchan, Andrew G.; Dargaville, Steven; Pain, Christopher C. [Applied Modelling and Computation Group, Department of Earth Science and Engineering, Imperial College London, London, SW7 2AZ (United Kingdom); Smith, Paul N. [ANSWERS Software Service, AMEC, Kimmeridge House, Dorset Green Technology Park, Winfrith Newburgh, Dorchester, Dorset, DT2 8ZB (United Kingdom); Smedley-Stevenson, Richard P. [AWE, Aldermaston, Reading, RG7 4PR (United Kingdom)
2015-01-15
A method for applying goal-based adaptive methods to the angular resolution of the neutral particle transport equation is presented. The methods are applied to an octahedral wavelet discretisation of the spherical angular domain which allows for anisotropic resolution. The angular resolution is adapted across both the spatial and energy dimensions. The spatial domain is discretised using an inner-element sub-grid scale finite element method. The goal-based adaptive methods optimise the angular discretisation to minimise the error in a specific functional of the solution. The goal-based error estimators require the solution of an adjoint system to determine the importance to the specified functional. The error estimators and the novel methods to calculate them are described. Several examples are presented to demonstrate the effectiveness of the methods. It is shown that the methods can significantly reduce the number of unknowns and computational time required to obtain a given error. The novelty of the work is the use of goal-based adaptive methods to obtain anisotropic resolution in the angular domain for solving the transport equation. -- Highlights: •Wavelet angular discretisation used to solve transport equation. •Adaptive method developed for the wavelet discretisation. •Anisotropic angular resolution demonstrated through the adaptive method. •Adaptive method provides improvements in computational efficiency.
Riipinen, I; Manninen, H. E.; Yli-Juuti, T.; M. Boy; Sipilä, M.; M. Ehn; Junninen, H.; T. Petäjä; M. Kulmala
2009-01-01
Measurements on the composition of nanometer-sized atmospheric particles are the key to understand which vapors participate in the secondary aerosol formation processes. Knowledge on these processes is crucial in assessing the climatic effects of secondary aerosol formation. We present data of >2 nm particle concentrations and their water-affinity measured with the Condensation Particle Counter Battery (CPCB) at a boreal forest site in Hyytiälä, Finland, during spring 2006. The data re...
Ingvorsen, Kristian Mark; Buchmann, Nicolas A.; Soria, Julio
2012-01-01
Particle-fluid interactions in supersonic flows are relevant in many different applications, e.g. the cold gas-dynamic spray process. The optimal application of the process is hindered by a lack of understanding of the particle-fluid interactions. To obtain detailed information on the particle-flu...
PIV (particle image velocimetry) is a measurement technique with growing application to the study of complex flows with relevance to industry. This work is focused on the assessment of some significant PIV measurement errors. In particular, procedures are proposed for estimating, and sometimes correcting, errors coming from the sensor geometry and performance, namely peak-locking and contemporary CCD camera read-out errors. Although the procedures are of general application to PIV, they are applied here to a particular real case, giving an example of the methodology steps and the improvement in results that can be obtained. This real case corresponds to an ensemble of hot high-speed coaxial jets, representative of the civil transport aircraft propulsion system using turbofan engines. Displacement errors of ∼0.1 pixels have been assessed, which amounts to 10% of the measured magnitude at many points. These results allow the uncertainty interval associated with the measurement to be provided and, under some circumstances, some of the bias components of the errors to be corrected. The detection of conditions where the peak-locking error has a period of 2 pixels instead of the classical 1 pixel has been made possible using these procedures. In addition to the added value of the measurement, the uncertainty assessment is of interest for the validation of CFD codes.
A two-dimensional axisymmetric electromagnetic particle-in-cell code with Monte Carlo collision conditions has been developed for an applied-field magnetoplasmadynamic thruster simulation. This theoretical approach establishes a particle acceleration model to investigate the microscopic and macroscopic characteristics of particles. This new simulation code was used to study the physical processes associated with applied magnetic fields. In this paper (I), details of the computational procedure and predictions of local plasma and field properties are presented. The numerical model was applied to the configuration of a NASA Lewis Research Center 100-kW magnetoplasmadynamic thruster which has well documented experimental results. The applied magnetic field strength was varied from 0 to 0.12 T, and the effects on thrust were calculated as a basis for verification of the theoretical approach. With this confirmation, the changes in the distributions of ion density, velocity, and temperature throughout the acceleration region related to the applied magnetic fields were investigated. Using these results, the effects of the applied field on physical processes in the thruster discharge region could be represented in detail, and those results are reported.
Burnup calculations have been performed on a standard HTR fuel pebble with a radius of 3 cm containing 9 g of 8% enriched uranium and burnable poison particles (BPP) made of B4C highly enriched in 10B. The radius of the BPP and the number of particles per fuel pebble have been varied to find the flattest reactivity-to-time curve. It was found that for a k∞ of 1.1, a reactivity swing as low as 2% can be obtained when each fuel pebble contains about 1070 BPP with a radius of 75 μm. For coated BPP that consist of a graphite kernel with a radius of 300 μm covered with a B4C burnable poison layer, a similar value for the reactivity swing can be obtained. Cylindrical particles seem to perform worse. In general, the modification of the geometry of BPP is an effective means to tailor the reactivity curve of HTRs
Costa, Evaldo L.C., E-mail: evaldo@cnen.gov.br [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil); Melo, Paulo F.F., E-mail: frutuoso@nuclear.ufrj.br [Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, RJ (Brazil)
2013-07-01
This work aims to bring to discussion the proposal of a new classification model for facilities that generate ionizing radiation, specifically particle accelerators, considering two parameters: the size of these facilities and the energy level at which they operate, with emphasis on large accelerators, which typically operate at higher energy levels. Also motivated by the fact that the Brazilian rules do not provide an adequate licensing standard for installations of this size, this work will seek to revise the existing classification, in which generators of ionizing radiation (including particle accelerators) are considered up to an energy level of 50 MeV.
Ingvorsen, Kristian Mark; Buchmann, Nicolas A.; Soria, Julio
2012-01-01
for magnified digital in-line holography is created, using an ultra-high-speed camera capable of frame rates of up to 1.0MHz. To test the new technique an axisymmetric supersonic underexpanded particle-laden jet is investigated. The results show that the new technique allows for the acquisition of time resolved...
Kang, Kwan Hyoung; Li, Dongqing
2005-06-15
There is a concentration-polarization (CP) force acting on a particle submerged in an electrolyte solution with a concentration (conductivity) gradient under an externally applied DC electric field. This force originates from two mechanisms: (i) the gradient of the electrohydrodynamic pressure around the particle, developed by the Coulombic force acting on the free charges induced by the concentration polarization, and (ii) the dielectric force due to the nonuniform electric field induced by the conductivity gradient. A perturbation analysis is performed for the electric field, the concentration field, and the hydrodynamic field, under the assumptions of creeping flow and a small concentration gradient. The leading order component of this force acting on a dielectric spherical particle is obtained by integrating the Maxwell and hydrodynamic stress tensors. The analytical results are validated by comparing the surface pressure and the skin friction to those of a numerical analysis. The CP force is proportional to the square of the applied electric field, is effective even for electrically neutral particles, and is always directed towards the region of higher ionic concentration. The magnitude of the CP force is compared to that of the electrophoretic and conventional dielectrophoretic forces. PMID:15897097
The present Note concerns the dynamics of organic matter in soils under forest (C3-type vegetation) and under 12- and 50-year-old sugar-cane (C4-type vegetation) cultivation. The decomposition rate of the "forest organic matter" and the accumulation rate of the "sugar-cane organic matter" are estimated through 13C measurements of the total soil and of different organic fractions (particle-size fractionation).
Bi, Lei; Yang, Ping; Kattawar, George W.; Mishchenko, Michael I.
2012-01-01
Three terms, "Waterman's T-matrix method", "extended boundary condition method (EBCM)", and "null field method", have been used interchangeably in the literature to indicate a method based on surface integral equations to calculate the T-matrix. Unlike the aforementioned method, the invariant imbedding method (IIM) calculates the T-matrix by the use of a volume integral equation. In addition, the standard separation of variables method (SOV) can be applied to compute the T-matrix of a sphere centered at the origin of the coordinate system and having a maximal radius such that the sphere remains inscribed within a nonspherical particle. This study explores the feasibility of a numerical combination of the IIM and the SOV, hereafter referred to as the IIM+SOV method, for computing the single-scattering properties of nonspherical dielectric particles, which are, in general, inhomogeneous. The IIM+SOV method is shown to be capable of solving light-scattering problems for large nonspherical particles where the standard EBCM fails to converge. The IIM+SOV method is flexible and applicable to inhomogeneous particles and aggregated nonspherical particles (overlapped circumscribed spheres) representing a challenge to the standard superposition T-matrix method. The IIM+SOV computational program, developed in this study, is validated against EBCM-simulated spheroid and cylinder cases with excellent numerical agreement (up to four decimal places). In addition, solutions for cylinders with large aspect ratios, inhomogeneous particles, and two-particle systems are compared with results from discrete dipole approximation (DDA) computations, and comparisons with the improved geometric-optics method (IGOM) are found to be quite encouraging.
In order to exploit the recently introduced 1 nm gold colloids in routine electron microscopic labeling experiments, an efficient enhancement step for a better visualization of this small marker is a prerequisite. Efficiency and reproducibility of enhancement as well as growth homogeneity of gold particles were evaluated for three different silver intensifying solutions: silver lactate/hydroquinone/gum arabic, and the commercially available IntenSE M silver enhancement kit. The best results were obtained by using the silver lactate/hydroquinone/gum arabic mixture. The quality of enhancement of the IntenSE M kit was considerably increased by the addition of the protective colloid gum arabic
Polok, G
1999-01-01
The Cherenkov radiation is fully described by two variables, Θ and φ, the polar and azimuthal angles, respectively. In all published methods the azimuthal angle φ is completely neglected. We suggest that one can benefit from using the φ angle as an additional aid in the particle identification procedure. For the first time, two-dimensional analysis results are presented, taking into account not only both angles but also their errors. The two-dimensional method, based on the Lagrange technique, couples the constraint equation and the minimization function together and leads to a correct probability estimation. The principles and advantages of the proposed method are presented. (10 refs).
Zhang, Wenjing; Sano, Natsuha; Kataoka, Michiyo; Ami, Yasushi; Suzaki, Yuriko; Wakita, Takaji; Ikeda, Hidetoshi; Li, Tian-Cheng
2016-06-01
Porcine bocaviruses (PBoVs), new members of the Bocavirus genus, have been identified in swine worldwide. However, the antigenicity and epidemiology of PBoVs are still unclear. Here we used a recombinant baculovirus expression system to express the main capsid protein VP2 of the Japan strain JY31b in insect Tn5 cells, and successfully produced the virus-like particles of PBoV (PBoV-LPs). The diameter and density of the PBoV-LPs were estimated to be 30 nm and 1.300 g/cm³, respectively, which were similar to the values for the native virion of PBoV. Antigenic analysis demonstrated that the PBoV-LPs were not cross-reactive with porcine circovirus 2, but were cross-reactive with human bocaviruses 1, 2, 3 and 4. An ELISA for the detection of anti-PBoV IgG antibodies was established using PBoV-LPs as antigen, which proved to be useful for monitoring PBoV infection in both swine and wild boars. The preliminary epidemiological research showed that 90.7% of pigs and 59.5% of wild boars were positive for anti-PBoV IgG, suggesting that both species are widely infected with PBoV. The seven PBoV strains detected in wild boars separated into four subgroups, demonstrating the genetic diversity of PBoV. PMID:26959654
Anchishkin, D
2014-01-01
A generalized mean-field approach for the thermodynamic description of relativistic single- and multi-component gas in the grand canonical ensemble is formulated. In the framework of the proposed approach, different phenomenological excluded-volume procedures are presented and compared to the existing ones. The mean-field approach is then used to effectively include hard-core repulsion in the hadron-resonance gas model for the description of chemical freeze-out in heavy-ion collisions. We calculate the collision energy dependence of several quantities for different values of the hard-core hadron radius and for different excluded-volume procedures such as the van der Waals and Carnahan-Starling models. It is shown that the choice of excluded-volume model becomes important for large particle densities, and for large enough values of hadron radii (r ≳ 0.9 fm) there can be a sizable difference between the different excluded-volume procedures used to describe the chemical freeze-out in heavy-ion collisions.
Anchishkin, D.; Vovchenko, V.
2015-10-01
A generalized mean-field approach for the thermodynamic description of relativistic single- and multi-component gas in the grand canonical ensemble is formulated. In the framework of the proposed approach, different phenomenological excluded-volume procedures are presented and compared to the existing ones. The mean-field approach is then used to effectively include hard-core repulsion in hadron-resonance gas model for description of chemical freeze-out in heavy-ion collisions. We calculate the collision energy dependence of several quantities for different values of hard-core hadron radius and for different excluded-volume procedures such as the van der Waals and Carnahan-Starling models. It is shown that a choice of the excluded-volume model becomes important for large particle densities. For large enough values of hadron radii (r≳ 0.9 fm) there can be a sizable difference between different excluded-volume procedures used to describe the chemical freeze-out in heavy-ion collisions. At the same time, for the smaller and more commonly used values of hard-core hadron radii (r≲ 0.5 fm), the precision of the van der Waals excluded-volume procedure is shown to be sufficient.
Melody, Kevin; McBeth, Sarah; Kline, Christopher; Kashuba, Angela D M; Mellors, John W; Ambrose, Zandrea
2015-12-01
Preexposure prophylaxis (PrEP) using antiretroviral drugs is effective in reducing the risk of human immunodeficiency virus type 1 (HIV-1) infection, but adherence to the PrEP regimen is needed. To improve adherence, a long-acting injectable formulation of the nonnucleoside reverse transcriptase (RT) inhibitor rilpivirine (RPV LA) has been developed. However, there are concerns that PrEP may select for drug-resistant mutations during preexisting or breakthrough infections, which could promote the spread of drug resistance and limit options for antiretroviral therapy. To address this concern, we administered RPV LA to macaques infected with simian immunodeficiency virus containing HIV-1 RT (RT-SHIV). Peak plasma RPV levels were equivalent to those reported in human trials and waned over time after dosing. RPV LA resulted in a 2-log decrease in plasma viremia, and the therapeutic effect was maintained for 15 weeks, until plasma drug concentrations dropped below 25 ng/ml. RT mutations E138G and E138Q were detected in single clones from plasma virus in separate animals only at one time point, and no resistance mutations were detected in viral RNA isolated from tissues. Wild-type and E138Q RT-SHIV displayed similar RPV susceptibilities in vitro, whereas E138G conferred 2-fold resistance to RPV. Overall, selection of RPV-resistant variants was rare in an RT-SHIV macaque model despite prolonged exposure to slowly decreasing RPV concentrations following injection of RPV LA. PMID:26438501
Fisher-Hoch, S P; McCormick, J B; Auperin, D; Brown, B G; Castor, M; Perez, G; Ruo, S; Conaty, A; Brammer, L; Bauer, S
1989-01-01
Lassa fever is an acute febrile disease of West Africa, where there are as many as 300,000 infections a year and an estimated 3000 deaths. As control of the rodent host is impracticable at present, the best immediate prospect is vaccination. We tested as potential vaccines in rhesus monkeys a closely related virus, Mopeia virus (two monkeys), and a recombinant vaccinia virus containing the Lassa virus glycoprotein gene, V-LSGPC (four monkeys). Two monkeys vaccinated with the New York Board of Health strain of vaccinia virus as controls died after challenge with Lassa virus. The two monkeys vaccinated with Mopeia virus developed antibodies measurable by radioimmunoprecipitation prior to challenge, and they survived challenge by Lassa virus with minimal physical or physiologic disturbances. However, both showed a transient, low-titer Lassa viremia. Two of the four animals vaccinated with V-LSGPC had antibodies to both Lassa glycoproteins, as determined by radioimmunoprecipitation. All four animals survived a challenge of Lassa virus but experienced a transient febrile illness and moderate physiologic changes following challenge. Virus was recoverable from each of these animals, but at low titer and only during a brief period, as observed for the Mopeia-protected animals. We conclude that V-LSGPC can protect rhesus monkeys against death from Lassa fever. PMID:2911575
The Myrrha project (1) at SCK-CEN, the Belgian nuclear research centre, intends to design and develop a prototype accelerator driven system (ADS). Such a system will enable, next to other application fields (technological demonstration, integral experiment validation, ...), the benchmarking of the codes applied to assess the performance of the ADS. In the present situation, at SCK-CEN, we coupled the high energy Monte Carlo code HETC to the DORT/TORT S-N neutron transport codes to perform the neutronic calculations of the Myrrha project. The HETC code is used to compute the space and energy distribution of the primary spallation neutron source, also including all other particles involved. The high energy cascade is calculated down to 20 MeV neutrons. The neutrons below this energy limit are stored as primary particles (without any interaction in the spallation medium) in a multigroup energy structure and are treated as a fixed neutron source in the S-N transport code. The neutron interaction cross-section library used in this step is based on ENDF/B-IV nuclear data. It has 27 energy groups, with 7 groups below the thermal cut-off, and allows for up-scattering and anisotropic scattering up to P3. The neutron transport calculations of the sub-critical assembly are performed using the DORT code, either in Keff mode or in fixed-source mode with multiplication. Quadrature sets of S8 and S16 were used during these calculations. This calculational scheme was validated on the basis of Monte Carlo calculational results and experimental data. In this paper we present the global calculational scheme as applied to Myrrha and the corresponding results. (Author) 14 refs
Neri, Margherita; Turillazzi, Emanuela; Riezzo, Irene; Fineschi, Vittorio
2007-07-01
In this study, we applied a microscopic quantitative method based on the use of sodium rhodizonate to verify the presence of residues and their distribution on the cutis of gunshot wounds. A total of 250 skin samples were selected from cases in which the manner of death (accidental, suicide, or homicide) and the shooting distance could be reliably determined. The samples were examined under a light microscope, in transmitted bright field illumination and phase contrast mode, and with confocal laser scanning microscopy. In all skin specimens the area of each histological section was directly measured by an image analysis system. Both the number and the size of powder particles were measured. The distribution of gunshot residues (GSR) in the epidermal and subepidermal layers was also analyzed. The evaluation of the microscopic entrance wounds demonstrated different findings related to the range of fire. The data derived from the evaluation of both macroscopic and microscopic features demonstrated that the amount and the spatial distribution of GSR deposits in the skin surrounding entrance wounds correlate closely with shooting distance. PMID:16862444
Cláudia Flores Braga; Elba Calesso Teixeira; Daniela Migliavacca; Fabiana G. Carvalho; Jandyra Fachel
2002-01-01
The objective of the present paper is to apply a sequential Cluster Analysis to atmospheric particles: Hierarchical Cluster Analysis followed by Nonhierarchical Cluster Analysis. The hierarchical cluster analysis results were used as the starting point for the nonhierarchical cluster analysis as an agglomerative technique. These particles were sampled in two areas of the metropolitan region of Porto Alegre, Charqueadas and Sapucaia do Sul, from May/97 to May/98, using a High Volume Sampler (Hi-Vo...
Riipinen, I; Manninen, H. E.; Yli-Juuti, T.; M. Boy; Sipilä, M.; M. Ehn; Junninen, H.; T. Petäjä; M. Kulmala
2008-01-01
Measurements on the composition of nanometer-sized atmospheric particles are the key to understand which vapors participate in the secondary aerosol formation processes. Knowledge on these processes is crucial in assessing the climatic effects of secondary aerosol formation. We present data of >2 nm particle concentrations and their hygroscopicity measured with the Condensation Particle Counter Battery (CPCB) at a boreal forest site in Hyytiälä, Finland, during spring 2006. This is the...
Gisler, G.
1990-01-01
A first-principles approach to the physics of particle energization is presented. The general physics of particle acceleration is then applied to a number of the classical astrophysical mechanisms for accelerating particles, with references to recent literature where these are used in specific circumstances. The solar flare is recommended as a microcosm for studying particle acceleration because many different processes seem to be occurring in close proximity, and there is abundant high time resolution data for diagnosing those processes. Finally, a list of possible sites and mechanisms for particle acceleration in spiral galaxies is presented. 66 refs., 6 figs., 3 tabs.
Stamhuis, EJ; Videler, JJ; van Duren, LA; Muller, UK
2002-01-01
Digital particle image velocimetry (DPIV) has been applied to animal-generated flows since 1993 to map the flow patterns and vortex wakes produced by a range of feeding and swimming aquatic animals, covering a Re range of 10^(-2) to 10^(5). In this paper, the special circumstances, problems and some solu
We investigated the influence of an external magnetic field on microstructures in a colloidal dispersion composed of rod-like ferromagnetic particles using the cluster-moving Monte Carlo method. The internal microstructures obtained by simulations have been analysed in terms of the orientational distribution and pair correlation functions. The results obtained are summarized as follows. As the magnetic field increases, the particles align in the direction of the magnetic field. In the case of a relatively strong magnetic interaction between particles, chain-like clusters are formed along the magnetic field direction. However, the aspect ratio of the particles and the magnetic interaction between them do not affect their orientational distribution. Two types of structures are observed in the chain-like clusters: a straight linear structure and a step-like structure. The chain-like clusters become shorter when the area fraction of the particles decreases, and the number of step-like structures increases when the area fraction of the particles increases. The step-like structure formation can be explained by the dependence of the potential energy curves on the shape of the spherocylinder particles.
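The qualitative trend reported above (a stronger field produces stronger alignment along the field direction) can be reproduced with a stripped-down Metropolis Monte Carlo sketch. Cluster moves, particle shape, and inter-particle magnetic interactions are deliberately omitted, so this illustrates only the single-particle orientational statistics, not the study's cluster-moving method:

```python
# Metropolis sampling of independent 2-D dipoles in an external field.
# Energy of one dipole: U = -field * cos(theta), with kT = 1.
# Stronger field -> larger mean alignment <cos(theta)>.
import math
import random

def mean_alignment(field, n=100, sweeps=300, seed=1):
    """Equilibrium <cos(theta)> for n dipoles after Metropolis sweeps."""
    random.seed(seed)
    theta = [random.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(sweeps):
        for i in range(n):
            new = theta[i] + random.gauss(0.0, 0.5)
            dE = -field * (math.cos(new) - math.cos(theta[i]))
            if dE <= 0.0 or random.random() < math.exp(-dE):
                theta[i] = new
    return sum(math.cos(t) for t in theta) / n

weak, strong = mean_alignment(0.5), mean_alignment(5.0)
```

With these hypothetical field values, `strong` comes out well above `weak`, mirroring the field-dependent alignment the abstract describes.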
Ingvorsen, Kristian Mark; Buchmann, Nicolas A.; Soria, Julio
2012-01-01
for magnified digital in-line holography is created, using an ultra-high-speed camera capable of frame rates of up to 1.0MHz. To test the new technique an axisymmetric supersonic underexpanded particle-laden jet is investigated. The results show that the new technique allows for the acquisition of time resolved...
This volume is based on the proceedings of the CERN Accelerator School's course on Applied Geodesy for Particle Accelerators held in April 1986. The purpose was to record and disseminate the knowledge gained in recent years on the geodesy of accelerators and other large systems. The latest methods for positioning equipment to sub-millimetric accuracy in deep underground tunnels several tens of kilometers long are described, as well as such sophisticated techniques as the Navstar Global Positioning System and the Terrameter. Automation of better known instruments such as the gyroscope and Distinvar is also treated along with the highly evolved treatment of components in a modern accelerator. Use of the methods described can be of great benefit in many areas of research and industrial geodesy such as surveying, nautical and aeronautical engineering, astronomical radio-interferometry, metrology of large components, deformation studies, etc
Prevost, C. [CEA Saclay, Departement de Prevention et d'Etude des Accidents, 91 - Gif-sur-Yvette (France); Conservatoire National des Arts et Metiers (CNAM), 75 - Paris (France)]
1996-06-01
Studies of aerosol transfer carried out in the fields of staff protection and nuclear plant safety are becoming more and more important, so techniques that simulate pollutants with specific tracers having the same aeraulic behaviour are an interesting tool for characterizing their transfers. Aerosols tagged with a fluorescent dye allow various studies in the ventilation and filtration field. The feasibility of real-time detection of a particulate tracer is the main aim of this work. The need for such a technique is clear, since it can reveal the specific aerosol behaviour. Furthermore, direct real-time measurements are required for model validation in calculation codes: they give the most realistic information on the interaction between contaminant and ventilation air flows. Up to now, fluorescent aerosol concentration measurement has allowed only an integral, delayed response, by means of sampling on filters and fluorimetric analysis after specific conditioning of those filters. In order to detect the specific tracer in real time, we have developed a new monitor able to count these particles on the following basis: fluorescent particles pass through a sampling nozzle into a specially designed measurement chamber; the sheath flow rate is defined to confine the test aerosol in the sample flow at the nozzle outlet; the interception of this stream by a highly focused laser beam allows aerosol detection and characterization particle by particle; the signature of a passing aerosol is the burst of photons that occurs when the fluorophore contained in the glycerol particle is excited by light of adapted wavelength; these signals are transmitted to a photodetector by a patented optical arrangement. An acquisition interface board connected to a computer then converts them into frequency histograms. In the end, two kinds of results could be provided simultaneously: the
K. Kalyani; T. Chakravarthi
2015-01-01
The perceived applicability of Honey Bee Mating Optimization (HBMO) and Particle Swarm Optimization (PSO) among research scholars in Tamil Nadu is understudied. The purpose of the present study is to address this gap in the literature in three ways: (i) providing descriptive data on the applicability of these algorithms in the scholars' areas of study; (ii) applying Three-Factor theory to assess the perceived range of applicability of the two algorithms and to develop a theoretically-ba...
Characteristics necessary to specify an ISO 6980 Series 1 reference radiation field were determined for a commercially available 85Kr beta-particle source, using the BEAM EGS4 Monte Carlo code. The characteristics include the residual maximum beta energy, Eres, and the uniformity of the dose rate over the calibration area. Eres and the uniformity were also determined experimentally, using an extrapolation ionization chamber (EC) and a 0.2 cm3 parallel-plate ionization chamber, respectively. The depth-dose curve measured with the EC gave a value of 0.62 MeV for Eres. Series 2 90Sr + 90Y and Series 1 85Kr beta-particle sources calibrated for Hp(0.07) at the secondary standard dosimetry laboratory (SSDL) of STUK were used to determine the energy and angular responses of TMDIS-1 direct ion storage dosemeters. The averaged zero-angle Hp(0.07) responses to the 90Sr + 90Y and 85Kr reference radiations were 135 and 80%, respectively. The responses were normalized to the Hp(0.07) response to 137Cs photon radiation, taken as 100%. (authors)
Soares, João; Valle, Zita; Morais, Hugo
2013-01-01
This paper presents a decision-support tool methodology to help virtual power players (VPPs) in the Smart Grid (SG) context solve the day-ahead energy resource scheduling, considering the intensive use of Distributed Generation (DG) and Vehicle-to-Grid (V2G). The main focus is the application of a new hybrid method combining a particle swarm approach and a deterministic technique based on mixed-integer linear programming (MILP) to solve the day-ahead scheduling, minimizing total operation costs from the aggregator point of view. A realistic mathematical formulation, considering the electric network constraints and V2G charging and discharging efficiencies, is presented. Full AC power flow calculation is included in the hybrid method to allow taking the network constraints into account. A case study with a 33-bus distribution network and 1800 V2G resources is used to illustrate the performance...
Watts, R R; Wallingford, K M; Williams, R W; House, D E; Lewtas, J
1998-01-01
Personal exposure monitoring was conducted for road paving workers in three states. A research objective was to characterize and compare occupational exposures to fine respirable particles during paving with conventional asphalt and with asphalt containing crumb rubber from shredded tires. Workers not exposed to asphalt fume were also included for comparison (to support the biomarker component of this study). The rubber content of the crumb rubber modified (CRM) asphalt at the three study sites was 12, 15, and 20%. A comparison of some specific job categories from two sites indicates greater potential carcinogenic PAH exposures during CRM asphalt work; however, the site with the greatest overall exposures did not indicate any differences for specific jobs. A statistical analysis of means for fine particle, pyrene, and total carcinogenic PAH personal exposure shows that, with two exceptions, there were no differences in exposures for these three measurement variables. One site shows significantly elevated pyrene exposure for CRM asphalt workers, and another site similarly shows greater carcinogenic PAH exposure for CRM asphalt workers. Conventional and CRM asphalt workers' airborne exposures to the PAH carcinogen marker, BaP, were very low, with concentrations comparable to ambient air in many cities. However, this study demonstrates that asphalt road paving workers are exposed to elevated airborne concentrations of a group of unknown compounds that likely consist of the carcinogenic PAHs benz(a)anthracene, chrysene, and methylated derivatives of both. The research described in this article has been reviewed in accordance with U.S. Environmental Protection Agency policy and approved for publication. Mention of trade names or commercial products does not constitute endorsement or recommendation for use. PMID:9577752
Göpfert, A; Bax, H
2000-01-01
The setup and operating conditions of a gridded twin ionization chamber with sample change facility to study light charged particle properties in the 1 MeV region is described. Detailed studies of different grid geometries in connection with the choice of an eligible counting gas mixture and the applied high voltage have been performed. Due to the high overall amplification of the small electrical chamber signals obtained from such low-energy particles, special filters have been developed in order to increase the signal-to-noise ratio. Timing properties of the chamber signals are discussed in detail. Information available from chamber signals and encoding methods are elucidated by spectra of alpha particles created by 234,235U spontaneous alpha decay. The detector permits the independent and simultaneous measurement of energy and angular distribution of particles in both sides of the chamber. Finally, preliminary results and related analysis methods will be presented for the investigation of the 10B(n, α0)/10B(n, α1γ) branching ratios.
Hilke, Hans Jürgen; CERN. Geneva
1991-01-01
Lecture 5: Detector characteristics: ALEPH experiment cut through the devices and events - Discuss the principles of the main techniques applied to particle detection (including front-end electronics), the construction and performance of some of the devices presently in operation, and a few ideas on future performance. Lecture 4-pt. b: Following the Scintillators. Lecture 4-pt. a: Scintillators - Used for: - Timing (TOF, Trigger) - Energy Measurement (Calorimeters) - Tracking (Fibres) - Basic scintillation processes - Inorganic Scintillators - Organic Scintil... - Discuss the principles of the main techniques applied to particle detection (including front-end electronics), the construction and performance of some of the devices presently in operation and a few ideas on future development. Lecture 3-pt. b: Following Calorimeters. Lecture 3-pt. a: Calorimeters - determine energy E by total absorption of charged or neutral particles - a fraction of E is transformed into measurable quantities - try to achieve sig...
Mittal, K L
2015-01-01
The book provides a comprehensive and easily accessible reference source covering all important aspects of particle adhesion and removal. The core objective is to cover both fundamental and applied aspects of particle adhesion and removal, with emphasis on recent developments. Topics covered include: 1. Fundamentals of surface forces in particle adhesion and removal. 2. Mechanisms of particle adhesion and removal. 3. Experimental methods (e.g. AFM, SFA, SFM, IFM, etc.) to understand particle-particle and particle-substrate interactions. 4. Mechanics of adhesion of micro- and n
A number of novel features of QCD are reviewed, including the consequences of formation zone and color transparency phenomena in hadronic collisions, the use of automatic scale setting for perturbative predictions, null-zone phenomena as a fundamental test of gauge theory, and the relationship of intrinsic heavy colored particle Fock state components to new particle production. We conclude with a review of the applications of QCD to nuclear multiquark systems. 74 references
Joram, Christian
1998-01-01
The lecture series will present an overview of the basic techniques and underlying physical principles of particle detectors, applied to current and future high energy physics experiments. Illustrative examples, mainly from the field of collider experiments, will demonstrate the performance and limitations of the various techniques. After an introduction we shall concentrate on particle tracking: wire chambers, drift chambers, micro gaseous tracking devices and solid state trackers will be discussed. There follows an overview of scintillators, photon detection, fiber tracking and nuclear emulsions. One lecture will deal with the various techniques of calorimetry. Finally we shall focus on methods developed for particle identification; these comprise specific energy loss, time of flight, Cherenkov and transition radiation detectors.
These proceedings contain papers relating to the 3rd Japanese-Bulgarian-Macedonian Joint Seminar on Applied Electromagnetics. Included are the following groups: Numerical Methods I; Electrical and Mechanical System Analysis and Simulations; Inverse Problems and Optimizations; Software Methodology; Numerical Methods II; Applied Electromagnetics
Particle pollution ... see them in the air. Where does particle pollution come from? Particle pollution can come from two ...
Newhouse, Vernon L
1975-01-01
Applied Superconductivity, Volume II, is part of a two-volume series on applied superconductivity. The first volume dealt with electronic applications and radiation detection, and contains a chapter on liquid helium refrigeration. The present volume discusses magnets, electromechanical applications, accelerators, and microwave and rf devices. The book opens with a chapter on high-field superconducting magnets, covering applications and magnet design. Subsequent chapters discuss superconductive machinery such as superconductive bearings and motors; rf superconducting devices; and future prospec
Lucas, Spencer G.
Stratigraphy is a cornerstone of the Earth sciences. The study of layered rocks, especially their age determination and correlation, which are integral parts of stratigraphy, are key to fields as diverse as geoarchaeology and tectonics. In the Anglophile history of geology, in the early 1800s, the untutored English surveyor William Smith was the first practical stratigrapher, constructing a geological map of England based on his own applied stratigraphy. Smith has, thus, been seen as the first “industrial stratigrapher,” and practical applications of stratigraphy have since been essential to most of the extractive industries from mining to petroleum. Indeed, gasoline is in your automobile because of a tremendous use of applied stratigraphy in oil exploration, especially during the latter half of the twentieth century. Applied stratigraphy, thus, is a subject of broad interest to Earth scientists.
Logan, J David
2013-01-01
Praise for the Third Edition"Future mathematicians, scientists, and engineers should find the book to be an excellent introductory text for coursework or self-study as well as worth its shelf space for reference." -MAA Reviews Applied Mathematics, Fourth Edition is a thoroughly updated and revised edition on the applications of modeling and analyzing natural, social, and technological processes. The book covers a wide range of key topics in mathematical methods and modeling and highlights the connections between mathematics and the applied and nat
Park, W.C.; Hausen, D.M.; Hagni, R.D. (eds.)
1985-01-01
A conference on applied mineralogy was held, with papers presented under the following headings: methodology (including image analysis); ore genesis; exploration; beneficiation (including precious metals); process mineralogy at low and high temperatures; and medical science applications. Two papers have been abstracted separately.
Schiehlen, Werner
2014-01-01
Applied Dynamics is an important branch of engineering mechanics widely applied to mechanical and automotive engineering, aerospace and biomechanics as well as control engineering and mechatronics. The computational methods presented are based on common fundamentals. For this purpose analytical mechanics turns out to be very useful where D’Alembert’s principle in the Lagrangian formulation proves to be most efficient. The method of multibody systems, finite element systems and continuous systems are treated consistently. Thus, students get a much better understanding of dynamical phenomena, and engineers in design and development departments using computer codes may check the results more easily by choosing models of different complexity for vibration and stress analysis.
The 1988 progress report of the Applied Optics laboratory of the Polytechnic School, France, is presented. The optical fiber activities are focused on the development of an optical gyrometer containing a resonance cavity. The research program includes the following domains: infrared laser physics, laser sources, semiconductor physics, multiple-photon ionization and nonlinear optics. Investigations in the biomedical, biological and biophysical domains are carried out. The published papers and the congress communications are listed.
Diaz-Pache, F.; Alonso, F.J.; Esbert, R.M. [Departamento de Geologia, Universidad de Oviedo (Spain)
1996-06-01
Solid pollution particles play an important role in the decay of monumental stone. Scanning electron microscopy (SEM) in conjunction with microanalysis (EDX) is a very valuable study tool. In the present paper, particular attention is paid to sample collection and preparation. Examples of particles providing information on the source of decay are presented. (Author) 9 refs.
Applications of particle accelerators
Particle accelerators are now widely used in a variety of applications for scientific research, applied physics, medicine, industrial processing, while possible utilisation in power engineering is envisaged. Earlier presentations of this subject, given at previous CERN Accelerator School sessions have been updated with papers contributed to the first European Conference on Accelerators in Applied Research and Technology (ECAART) held in September 1989 in Frankfurt and to the Second European Particle Accelerator Conference in Nice in June 1990. (orig.)
The 1988 progress report of the Applied Mathematics Center of the Polytechnic School, France, is presented. The research fields of the Center are scientific computation, probability and statistics, and video image synthesis. The research topics developed are: the analysis of numerical methods, the mathematical analysis of fundamental models in physics and mechanics, the numerical solution of complex models related to industrial problems, stochastic calculus and Brownian motion, stochastic partial differential equations, the identification of adaptive filtering parameters, discrete element systems, statistics, stochastic control, and the development of image synthesis techniques for education and research programs. The published papers, the congress communications and the theses are listed.
The invention discloses a method and apparatus for applying radiation by producing X-rays of a selected spectrum and intensity and directing them to a desired location. Radiant energy is directed from a laser onto a target to produce such X-rays at the target, which is so positioned adjacent to the desired location as to emit the X-rays toward the desired location; or such X-rays are produced in a region away from the desired location, and are channeled to the desired location. The radiant energy directing means may be shaped (as with bends; adjustable, if desired) to circumvent any obstruction between the laser and the target. Similarly, the X-ray channeling means may be shaped (as with fixed or adjustable bends) to circumvent any obstruction between the region where the X-rays are produced and the desired location. For producing a radiograph in a living organism the X-rays are provided in a short pulse to avoid any blurring of the radiograph from movement of or in the organism. For altering tissue in a living organism the selected spectrum and intensity are such as to affect substantially the tissue in a preselected volume without injuring nearby tissue. Typically, the selected spectrum comprises the range of about 0.1 to 100 keV, and the intensity is selected to provide about 100 to 1000 rads at the desired location. The X-rays may be produced by stimulated emission thereof, typically in a single direction
Venter, Gerhard; Sobieszczanski-Sobieski Jaroslaw
2002-01-01
The purpose of this paper is to show how the search algorithm known as particle swarm optimization performs. Here, particle swarm optimization is applied to structural design problems, but the method has a much wider range of possible applications. The paper's new contributions are improvements to the particle swarm optimization algorithm and conclusions and recommendations as to the utility of the algorithm. Results of numerical experiments for both continuous and discrete applications are presented in the paper. The results indicate that the particle swarm optimization algorithm does locate the constrained minimum design in continuous applications with very good precision, albeit at a much higher computational cost than that of a typical gradient-based optimizer. However, the true potential of particle swarm optimization is primarily in applications with discrete and/or discontinuous functions and variables. Additionally, particle swarm optimization has the potential of efficient computation with very large numbers of concurrently operating processors.
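The basic algorithm evaluated in this record can be sketched as a minimal global-best particle swarm optimizer. This is an illustrative sketch with common textbook coefficients, not the authors' improved variant; the test function and all parameter values are assumptions:

```python
# Global-best PSO on a continuous test problem (sphere function).
# Each particle keeps its personal best; the swarm shares one global best.
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))   # positions
    v = np.zeros_like(x)                              # velocities
    pbest = x.copy()                                  # personal bests
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()]                          # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()]
    return g, pval.min()

# Sphere function: minimum 0 at the origin.
best_x, best_f = pso(lambda z: float(np.sum(z**2)), [(-5, 5)] * 3)
```

Because each particle's update uses only its own history plus one shared global best, the per-particle work is independent, which is the basis of the paper's remark about large numbers of concurrently operating processors.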
Particle-Particle-String Vertex
Ishibashi, Nobuyuki
1996-01-01
We study a theory of particles interacting with strings. Considering such a theory for Type IIA superstring will give some clue about M-theory. As a first step toward such a theory, we construct the particle-particle-string interaction vertex generalizing the D-particle boundary state.
Research on Magnetic Fe3O4 Nano-Particles Applied in Water Treatment
苏洁; 程文; 魏红; 何泽楠; 刘东; 左芬
2012-01-01
Fe3O4 nano-particles and an oleic-acid-coated Fe3O4 magnetic fluid were prepared by chemical co-precipitation and hydrothermal methods. The optimum reaction conditions were determined through experiments. The crystal structure of the Fe3O4 particles was analyzed via X-ray diffraction (XRD); the microstructure of the magnetic fluid sample was observed via transmission electron microscopy (TEM); the effect of the nano-particles on the ultrasonic degradation of a levofloxacin solution simulating wastewater was investigated by high performance liquid chromatography (HPLC). The results show that the product is an inverse-spinel AB2O4-type compound of the cubic crystal system, with an average particle size below 15 nm; the magnetic fluid particles are basically regular spheres, uniform and without agglomeration. The magnetic fluid samples show good mobility and superparamagnetism, and the nano-sized Fe3O4 can degrade levofloxacin to a certain extent.
Electric tweezers: negative dielectrophoretic multiple particle positioning
Electric tweezers are a touchless positioning apparatus that employs dielectrophoresis and electroorientation to arbitrarily position cell-sized particles. In this paper, we develop an algorithm which enables electric tweezers to operate on multiple particles. Furthermore, we probe the limits of this technique in simulation, examining the range of electric field magnitudes and forces that can be applied. We then demonstrate this new functionality on two particles. The device can apply forces on any particle of non-zero polarizability and here this is highlighted by manipulating negatively polarized glass beads. Additionally, we demonstrate that negligibly polarized particles can also be manipulated through mechanical forces applied by other particles. (paper)
Integral equation study of particle confinement effects in a polymer/particle mixture
Henderson, D; Trokhymchuk, A; Kalyuzhnyi, Y; Gee, R; Lacevic, N
2007-05-09
Integral equation theory techniques are applied to evaluate the structuring of the polymer when large solid particles are embedded into a bulk polymer melt. The formalism presented here is applied to obtain an insight into the filler particle aggregation tendency. We find that with the employed polymer-particle interaction model it is very unlikely that the particles will aggregate. We believe that in such a system aggregation and clustering can occur when the filler particles are dressed by tightly bound polymer layers.
Raju, M.R.
1993-09-01
Particle therapy has a long history. The experimentation with particles for their therapeutic application got started soon after they were produced in the laboratory. Physicists played a major role in proposing the potential applications in radiotherapy as well as in the development of particle therapy. A brief review of the current status of particle radiotherapy with some historical perspective is presented and specific contributions made by physicists will be pointed out wherever appropriate. The rationale of using particles in cancer treatment is to reduce the treatment volume to the target volume by using precise dose distributions in three dimensions by using particles such as protons and to improve the differential effects on tumors compared to normal tissues by using high-LET radiations such as neutrons. Pions and heavy ions combine the above two characteristics.
Provides step-by-step derivations. Contains numerous tables and diagrams. Supports learning and teaching with numerous worked examples, questions and problems with answers. Sketches also the historical development of the subject. This textbook teaches particle physics very didactically. It supports learning and teaching with numerous worked examples, questions and problems with answers. Numerous tables and diagrams lead to a better understanding of the explanations. The content of the book covers all important topics of particle physics: Elementary particles are classified from the point of view of the four fundamental interactions. The nomenclature used in particle physics is explained. The discoveries and properties of known elementary particles and resonances are given. The particles considered are positrons, muon, pions, anti-protons, strange particles, neutrino and hadrons. The conservation laws governing the interactions of elementary particles are given. The concepts of parity, spin, charge conjugation, time reversal and gauge invariance are explained. The quark theory is introduced to explain the hadron structure and strong interactions. The solar neutrino problem is considered. Weak interactions are classified into various types, and the selection rules are stated. Non-conservation of parity and the universality of the weak interactions are discussed. Neutral and charged currents, discovery of W and Z bosons and the early universe form important topics of the electroweak interactions. The principles of high energy accelerators including colliders are elaborately explained. Additionally, in the book detectors used in nuclear and particle physics are described. This book is on the upper undergraduate level.
Particle motion in fluidised beds
Gas fluidised beds are important components in many process industries, e.g. coal combustors and granulators, but not much is known about the movement of the solids. Positron Emission Particle Tracking (PEPT) enables the movement of a single, radioactive tracer particle to be followed rapidly and faithfully. Experiments were carried out in columns between 70 and 240 mm in diameter, operating in the bubbling regime at ambient process conditions using particles of groups B and D (Geldart classification). Particle motion was tracked and the data applied to models for particle movement at the gas distributor as well as close to other surfaces, and to models for particle circulation in beds of cohesive particles. In the light of these data, models for particle and bubble interaction, particle circulation, segregation, attrition, erosion, heat transfer and fluidised-bed scale-up rules were reassessed. Particle motion is directly caused by bubble motion, and their velocities were found to be equal for particles travelling in a bubble. PEPT enables particle circulation to be measured, giving a more accurate correlation for future predictions. Particle motion follows the scale-up rules based on similarities of the bubble motion in the bed. A new group of parameters controlling the amount of attrition in fluidised beds was identified, and a new model to predict attrition is proposed. (author)
C. A. Kumar
2011-01-01
Problem statement: Most control engineering problems are characterized by several conflicting objectives which have to be satisfied simultaneously. Two widely used methods for finding the optimal solution to such problems are aggregation into a single criterion and the use of Pareto-optimal solutions. Approach: A Non-dominated Sorting Particle Swarm Optimization (NSPSO) based approach is used in the design of a multiobjective PID controller to find the constant proportional-integral-derivative gains for a chemical neutralization plant. The plant considered in this study is highly non-linear, has a varying time delay, and provides a challenging test bed for nonlinear control problems. Results: Experimental results confirm that a multi-objective, Pareto-based search gives better performance than a single-objective one. Conclusion: Finally, the results for single-objective and multiobjective optimization using NSPSO for the neutralization plant are compared. Gain-scheduled PID controllers designed from the Pareto front obtained with NSPSO exhibit good disturbance rejection capability.
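The Pareto-dominance test at the heart of non-dominated sorting algorithms such as NSPSO can be sketched as follows (a minimal illustration with made-up objective values, not the paper's code):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Illustrative two-objective values, e.g. (overshoot, settling time) per gain set
objs = [(1.0, 5.0), (2.0, 3.0), (3.0, 1.0), (2.5, 3.5)]
front = pareto_front(objs)  # (2.5, 3.5) is dominated by (2.0, 3.0)
```

Gain scheduling then amounts to picking different points along this front for different operating regions.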
Particle decay in inflationary cosmology
Boyanovsky, D.; de Vega, H. J.
2004-01-01
We investigate the relaxation and decay of a particle during inflation by implementing the dynamical renormalization group. This investigation allows us to give a meaningful definition for the decay rate in an expanding universe. As a prelude to a more general scenario, the method is applied here to study the decay of a particle in de Sitter inflation via a trilinear coupling to massless conformally coupled particles, both for wavelengths much larger and much smaller than the Hubble radius. F...
Carlsmith, Duncan
2012-01-01
Particle Physics is the first book to connect theory and experiment in particle physics. Duncan Carlsmith provides the first accessible exposition of the standard model with sufficient mathematical depth to demystify the language of gauge theory and Feynman diagrams used by researchers in the field. Carlsmith also connects theories to past, present, and future experiments.
Particle splitting in smoothed particle hydrodynamics based on Voronoi diagram
Chiaki, Gen
2015-01-01
We present a novel method for particle splitting in smoothed particle hydrodynamics simulations. Our method utilizes the Voronoi diagram for a given particle set to determine the position of fine daughter particles. We perform several test simulations to compare our method with a conventional splitting method in which the daughter particles are placed isotropically over the local smoothing length. We show that, with our method, the density deviation after splitting is reduced by a factor of about two compared with the conventional method. Splitting would smooth out the anisotropic density structure if the daughters are distributed isotropically, but our scheme allows the daughter particles to trace the original density distribution with length scales of the mean separation of their parent. We apply the particle splitting to simulations of the primordial gas cloud collapse. The thermal evolution is accurately followed to the hydrogen number density of 10^12 /cc. With the effective mass resolution of ~10^-4 Msu...
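The geometric primitive such a splitting scheme needs — the centroid of each particle's Voronoi cell — can be estimated without a mesh library by Monte Carlo assignment of sample points to their nearest site. The 2-D toy below is only a stand-in for the paper's 3-D SPH scheme; all numbers are illustrative:

```python
import random

def voronoi_centroids(sites, n_samples=50000, box=1.0, seed=1):
    """Monte Carlo estimate of the centroid of each site's Voronoi cell
    inside a [0, box]^2 domain."""
    rng = random.Random(seed)
    sums = [[0.0, 0.0, 0] for _ in sites]   # x-sum, y-sum, sample count per cell
    for _ in range(n_samples):
        x, y = rng.uniform(0, box), rng.uniform(0, box)
        # assign the sample to the nearest site (this defines the Voronoi cell)
        i = min(range(len(sites)),
                key=lambda k: (x - sites[k][0])**2 + (y - sites[k][1])**2)
        sums[i][0] += x; sums[i][1] += y; sums[i][2] += 1
    return [(sx / n, sy / n) for sx, sy, n in sums]

# Four symmetric sites: each cell is a quadrant, so each centroid sits near its site
sites = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
cents = voronoi_centroids(sites)
```

Placing daughters at such cell centroids (rather than isotropically within the smoothing length) is what lets the split trace the local, possibly anisotropic, density structure.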
Ziegel, Johanna; Nyengaard, Jens Randel; Jensen, Eva B. Vedel
In the present paper, statistical procedures for estimating shape and orientation of arbitrary three-dimensional particles are developed. The focus of this work is on the case where the particles cannot be observed directly, but only via sections. Volume tensors are used for describing particle s...
李欣然; 靳雁霞
2012-01-01
The weapon-target assignment (WTA) problem is vital in modern warfare. A WTA model is built that minimizes the probability of failure when allocating weapons to engage all targets. This paper puts forward a quantum-behaved particle swarm optimisation algorithm with adaptive inertia-weight adjustment to overcome the premature convergence and low optimisation efficiency of existing algorithms on such problems. First, the concept of the focusing-distance changing rate is introduced, and the inertia-weight factor is formulated as a function of this rate, giving the algorithm dynamic adaptability; an effective method of detecting and preventing premature convergence and stagnation is also embedded in the algorithm. An optimisation example shows that this algorithm can effectively solve WTA problems.
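The core update of a quantum-behaved PSO can be sketched on a toy continuous problem. The linearly decreasing contraction-expansion coefficient below is a simple stand-in for the paper's focusing-distance-based adaptation; all parameters are illustrative:

```python
import math
import random

def qpso(f, dim=2, n=20, iters=200, lo=-5.0, hi=5.0, seed=3):
    """Minimal quantum-behaved PSO minimizing f. Each particle jumps around
    a random convex combination of its personal best and the global best,
    with a step scaled by the distance to the swarm's mean best (mbest)."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    P = [x[:] for x in X]                        # personal bests
    g = min(P, key=f)[:]                         # global best
    for t in range(iters):
        beta = 1.0 - 0.5 * t / iters             # contraction-expansion: 1.0 -> 0.5
        mbest = [sum(p[d] for p in P) / n for d in range(dim)]
        for i in range(n):
            for d in range(dim):
                phi = rng.random()
                attractor = phi * P[i][d] + (1 - phi) * g[d]
                step = beta * abs(mbest[d] - X[i][d]) * math.log(1 / rng.random())
                X[i][d] = attractor + step if rng.random() < 0.5 else attractor - step
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g, f(g)

sphere = lambda x: sum(v * v for v in x)
best, val = qpso(sphere)
```

For the discrete WTA problem itself a combinatorial encoding would replace the continuous positions; the swarm bookkeeping stays the same.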
Fritzsch, Harald; Heusch, Karin
Introduction -- Electrons and atomic nuclei -- Quantum properties of atoms and particles -- The knives of Democritus -- Quarks inside atomic nuclei -- Quantum electrodynamics -- Quantum chromodynamics -- Mesons, baryons, and quarks -- Electroweak interactions -- Grand unification -- Conclusion.
Parham, R.
1974-01-01
Presents the text of a speech given to a conference of physics teachers in which the full spectrum of elementary particles is given, along with their classification. Also includes some teaching materials available on this topic. (PEB)
A variety of subjects are addressed within the general context of searching for limitations in the capability of particle identification due to high average rates. Topics receiving attention included Cerenkov ring imaging, transition radiation, synchrotron radiation, time-of-flight, the high-P spectrometer, heavy-quark tagging with leptons, a general-purpose muon and electron detector, and dE/dx. It is concluded that particle identification will probably not represent a primary obstacle at luminosities of 10^33 cm^-2 s^-1.
The theoretical work on models of the electroweak interaction and on simple grand unified models with a nonstandard set of Higgs particles is reviewed. Emphasis is placed on light and even strictly massless Higgs particles: Goldstone and pseudo-Goldstone bosons. It is shown that such bosons arise in a natural way in the theory if the Higgs particles are in fact composite. The low-energy effective Lagrangian of these particles is studied. A detailed study is made of the problem of CP violation in the strong interaction and of a natural solution of this problem through the introduction of a pseudo-Goldstone particle: the axion. The theory of the ''standard'' axion and its experimental status are reviewed. Possible ''invisible'' and ''visible'' axions are discussed, as are certain astrophysical aspects of the existence of an axion. By analogy with the axion, an analysis is made of another hypothetical particle: the strictly massless Goldstone boson, or arion. Model-independent properties of the arion are determined. The similarities between arion fields and magnetic fields, and the differences between them, are shown. Possible methods for detecting an arion field are discussed, and an experiment which has set a limit on the strength of the arion interaction is described. Neutral Goldstone bosons whose emission is accompanied by changes in fermion flavors (''familons'') are discussed, and two versions of the theory with a Goldstone boson (a majoron) arising upon spontaneous breaking of lepton number are described.
Gas-cooled reactors, using packed beds of small diameter coated fuel particles have been proposed for compact, high-power systems. The particulate fuel used in the tests was 800 microns in diameter, consisting of a thoria kernel coated with 200 microns of pyrocarbon. Typically, the bed of fuel particles was contained in a ceramic cylinder with porous metallic frits at each end. A dc voltage was applied to the metallic frits and the resulting electric current heated the bed. Heat was removed by passing coolant (helium or hydrogen) through the bed. Candidate frit materials, rhenium, nickel, zirconium carbide, and zirconium oxide were unaffected, while tungsten and tungsten-rhenium lost weight and strength. Zirconium-carbide particles were tested at 2000 K in H2 for 12 hours with no visible reaction or weight loss
Ergodicity of particle systems
Dzhin Ven Chen
2002-01-01
The ergodicity relative to shifts, mixing, and related problems concerning the invariant measures of interacting particle systems — such as Ising ferromagnetic stochastic models, contact processes, exclusion systems, and voter-type systems with three or more possible stochastically chosen opinions — are studied. The results obtained provide answers to certain questions related to these models. The methods applied are based on duality.
PARTICLE BEAM TRACKING CIRCUIT
Anderson, O.A.
1959-05-01
A particle-beam tracking and correcting circuit is described. Beam induction electrodes are placed on either side of the beam, and the potentials induced by the beam are compared in a voltage comparator or discriminator. This comparison produces an error signal which modifies the fm curve of the voltage applied to the drift tube, thereby returning the orbit to the preferred position. The arrangement also serves to synchronize the accelerating frequency and the magnetic field growth. (T.R.H.)
Fuzzy logic particle tracking velocimetry
Wernet, Mark P.
1993-01-01
Fuzzy logic has proven to be a simple and robust method for process control. Instead of requiring a complex model of the system, a user-defined rule base is used to control the process. In this paper the principles of fuzzy logic control are applied to Particle Tracking Velocimetry (PTV). Two frames of digitally recorded, single-exposure particle imagery are used as input. The fuzzy processor uses the local particle displacement information to determine the correct particle tracks. Fuzzy PTV is an improvement over traditional PTV techniques, which typically require a sequence (greater than 2) of image frames for accurately tracking particles. The fuzzy processor executes in software on a PC without the use of specialized array or fuzzy logic processors. A pair of sample input images with roughly 300 particle images each results in more than 200 velocity vectors in under 8 seconds of processing time.
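The matching idea can be illustrated with a toy two-rule fuzzy scorer over candidate frame-1/frame-2 pairings. The rule base, membership widths, and the assumed bulk-flow direction are all hypothetical, not the paper's actual rules:

```python
def tri(x, center, width):
    """Triangular fuzzy membership: 1 at center, falling to 0 beyond +/- width."""
    return max(0.0, 1.0 - abs(x - center) / width)

def match_tracks(frame1, frame2, max_disp=2.0):
    """Toy fuzzy-PTV matcher: for each frame-1 particle, pick the frame-2
    candidate whose displacement best satisfies a two-rule base (min-AND)."""
    tracks = []
    for x1, y1 in frame1:
        best, best_score = None, 0.0
        for x2, y2 in frame2:
            dx, dy = x2 - x1, y2 - y1
            d = (dx * dx + dy * dy) ** 0.5
            # rule 1: displacement magnitude is near the expected value (~1 unit)
            # rule 2: cross-stream displacement dy is small (flow assumed along x)
            score = min(tri(d, 1.0, max_disp), tri(dy, 0.0, 1.0))
            if score > best_score:
                best, best_score = (x2, y2), score
        tracks.append(((x1, y1), best))
    return tracks

f1 = [(0.0, 0.0), (5.0, 0.0)]
f2 = [(1.0, 0.1), (6.1, 0.0), (0.0, 3.0)]
tracks = match_tracks(f1, f2)   # the (0.0, 3.0) outlier attracts no track
```

With only two frames available, this fuzzy scoring replaces the multi-frame trajectory extrapolation used by traditional PTV.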
Hunt, Arlon J.
1984-01-01
A method and apparatus whereby small carbon particles are made by pyrolysis of a mixture of acetylene carried in argon. The mixture is injected through a nozzle into a heated tube. A small amount of air is added to the mixture. In order to prevent carbon build-up at the nozzle, the nozzle tip is externally cooled. The tube is also elongated sufficiently to assure efficient pyrolysis at the desired flow rates. A key feature of the method is that the acetylene and argon, for example, are premixed in a dilute ratio, and such mixture is injected while cool to minimize the agglomeration of the particles, which produces carbon particles with desired optical properties for use as a solar radiant heat absorber.
Martin, Brian R
2017-01-01
An accessible and carefully structured introduction to Particle Physics, including important coverage of the Higgs Boson and recent progress in neutrino physics. Fourth edition of this successful title in the Manchester Physics series. Includes information on recent key discoveries including : An account of the discovery of exotic hadrons, beyond the simple quark model; Expanded treatments of neutrino physics and CP violation in B-decays; An updated account of ‘physics beyond the standard model’, including the interaction of particle physics with cosmology; Additional problems in all chapters, with solutions to selected problems available on the book’s website; Advanced material appears in optional starred sections.
Dusty-Plasma Particle Accelerator
Foster, John E.
2005-01-01
A dusty-plasma apparatus is being investigated as a means of accelerating nanometer- and micrometer-sized particles. Applications for dusty-plasma particle accelerators fall into two classes: (1) simulation of a variety of rapidly moving dust particles and micrometeoroids in outer-space environments, including micrometeoroid streams, comet tails, planetary rings, and nebulae; and (2) deposition or implantation of nanoparticles on substrates for diverse industrial purposes that could include hardening, increasing thermal insulation, altering optical properties, and/or increasing permittivities of substrate materials. Relative to prior apparatuses used for similar applications, dusty-plasma particle accelerators offer such potential advantages as smaller size, lower cost, less complexity, and increased particle flux densities. A dusty-plasma particle accelerator exploits the fact that an isolated particle immersed in plasma acquires a net electric charge that depends on the relative mobilities of electrons and ions. Typically, immersion in a low-temperature, partially ionized gas, wherein the average kinetic energy of electrons exceeds that of ions, causes the particle to become negatively charged. The particle can then be accelerated by applying an appropriate electric field. A dusty-plasma particle accelerator (see figure) includes a plasma source such as a radio-frequency induction discharge apparatus containing (1) a shallow cup with a biasable electrode to hold the particles to be accelerated and (2) a holder for the substrate on which the particles are to impinge. Depending on the specific design, a pair of electrostatic-acceleration grids between the substrate and the discharge plasma can be used to both collimate and further accelerate particles exiting the particle holder. Once exposed to the discharge plasma, the particles in the cup quickly acquire a negative charge. Application of a negative voltage pulse to the biasable electrode results in the
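The speed such a charged grain reaches follows from energy conservation, v = sqrt(2qV/m). The grain mass and charge state below are illustrative numbers, not values from the article:

```python
import math

E_CHARGE = 1.602e-19  # elementary charge, C

def impact_velocity(n_electrons, volts, mass_kg):
    """Speed gained by a grain carrying n elementary charges when
    accelerated through a potential difference (non-relativistic)."""
    q = n_electrons * E_CHARGE
    return math.sqrt(2.0 * q * volts / mass_kg)

# Illustrative: a ~100 nm grain of ~1e-18 kg charged to 1000 electrons,
# accelerated through 5 kV -> roughly 1.3 km/s
v = impact_velocity(1000, 5000.0, 1e-18)
```

Because v scales as sqrt(q/m), the smallest, most highly charged grains reach the micrometeoroid-like velocities the simulation application needs.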
2005-01-01
While biomedicine and geoscience use grids to bring together many different sub-disciplines, particle physicists use grid computing to increase computing power and storage resources, and to access and analyze vast amounts of data collected from detectors at the world's most powerful accelerators (1 page)
An infinite blender that achieves a homogeneous mixture of fuel microspheres is provided. Blending is accomplished by directing respective groups of desired particles onto the apex of a stationary coaxial cone. The particles progress downward over the cone surface and deposit in a space at the base of the cone that is described by a flexible band provided with a wide portion traversing and in continuous contact with the circumference of the cone base and extending upwardly therefrom. The band, being attached to the cone at a narrow inner end thereof, causes the cone to rotate on its arbor when the band is subsequently pulled onto a take-up spool. As a point at the end of the wide portion of the band passes the point where it is tangent to the cone, the blended particles are released into a delivery tube leading directly into a mold, and a plate mounted on the lower portion of the cone and positioned between the end of the wide portion of the band and the cone assures release of the particles only at the tangent point
Article coated with flash bonded superhydrophobic particles
Simpson, John T [Clinton, TN; Blue, Craig A [Knoxville, TN; Kiggans, Jr., James O [Oak Ridge, TN
2010-07-13
A method of making an article having a superhydrophobic surface includes: providing a solid body defining at least one surface; applying to the surface a plurality of diatomaceous earth particles and/or particles characterized by particle sizes ranging from at least 100 nm to about 10 µm, the particles being further characterized by a plurality of nanopores, wherein at least some of the nanopores provide flow-through porosity, the particles being further characterized by a plurality of spaced-apart nanostructured features that include a contiguous, protrusive material; flash bonding the particles to the surface so that the particles are adherently bonded to the surface; and applying a hydrophobic coating layer to the surface and the particles so that the hydrophobic coating layer conforms to the nanostructured features.
Graeser, M.; Bente, K.; Neumann, A.; Buzug, T. M.
2016-02-01
In magnetic particle imaging, scanners use different spatial sampling techniques to cover the field of view (FOV). As spatial encoding is realized by a selective low field region (a field-free-point, or field-free-line), this region has to be moved through the FOV on specific sampling trajectories. To achieve these trajectories complex time dependent magnetic fields are necessary. Due to the superposition of the selection field and the homogeneous time dependent fields, particles at different spatial positions experience different field sequences. As a result, the dynamic behaviour of those particles can be strongly spatially dependent. So far, simulation studies that determined the trajectory quality have used the Langevin function to model the particle response. This however, neglects the dynamic relaxation of the particles, which is highly affected by magnetic anisotropy. More sophisticated models based on stochastic differential equations that include these effects were only used for one dimensional excitation. In this work, a model based on stochastic differential equations is applied to two-dimensional trajectory field sequences, and the effects of these field sequences on the particle response are investigated. The results show that the signal of anisotropic particles is not based on particle parameters such as size and shape alone, but is also determined by the field sequence that a particle ensemble experiences at its spatial position. It is concluded, that the particle parameters can be optimized in terms of the used trajectory.
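The equilibrium particle response that the criticized simulation studies rely on is the Langevin function, L(x) = coth(x) − 1/x; the dynamic relaxation effects discussed above are precisely what this model omits. A minimal sketch:

```python
import math

def langevin(x):
    """Equilibrium magnetization model L(x) = coth(x) - 1/x.
    Valid only when particle dynamics (relaxation, anisotropy) are neglected."""
    if abs(x) < 1e-6:
        return x / 3.0          # series limit avoids 0/0 at x = 0
    return 1.0 / math.tanh(x) - 1.0 / x

# Linear (x/3) response for weak fields, saturation toward 1 for strong fields
weak, strong = langevin(0.1), langevin(20.0)
```

A stochastic-differential-equation model, as used in this work, replaces this static curve with the field-sequence-dependent dynamics of each particle ensemble.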
OMEC LS800 Laser Particle Sizer
Fugen Zhang
2003-01-01
Laser particle sizers (LPSs) measure the size of small particles from the phenomenon of light scattering, while scattering by large particles is considered to consist of diffraction. Mie's theory applies to small particles down to submicron dimensions. OMEC recognizes that the scattering theory should be used for both large and small particles in order to have a precise description of the scattering phenomena. Therefore, although numerical calculation based on the scattering theory is much more complicated than for the theory of diffraction, especially for particles much larger than 1 micron, OMEC has persisted in using the strict theory of scattering for all its products.
Particle IDentification (PID) is fundamental to particle physics experiments. This paper reviews PID strategies and methods used by the large LHC experiments, which provide outstanding examples of the state-of-the-art. The first part focuses on the general design of these experiments with respect to PID and the technologies used. Three PID techniques are discussed in more detail: ionization measurements, time-of-flight measurements and Cherenkov imaging. Four examples of the implementation of these techniques at the LHC are given, together with selections of relevant examples from other experiments and short overviews on new developments. Finally, the Alpha Magnetic Spectrometer (AMS 02) experiment is briefly described as an impressive example of a space-based experiment using a number of familiar PID techniques.
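Of the techniques listed, time-of-flight is the easiest to reduce to a formula: combining a momentum measurement with the flight time over a known path gives the mass via m = p·sqrt(1/β² − 1). The path length and momentum below are illustrative, not from any specific LHC detector:

```python
import math

C = 299792458.0  # speed of light, m/s

def tof_mass_gev(p_gev, path_m, t_ns):
    """Particle mass in GeV/c^2 from momentum (GeV/c), flight path (m)
    and measured time of flight (ns): m = p * sqrt(1/beta^2 - 1)."""
    beta = path_m / (t_ns * 1e-9 * C)
    return p_gev * math.sqrt(1.0 / beta**2 - 1.0)

# A 1 GeV/c kaon (m ~ 0.494 GeV/c^2) has beta ~ 0.897; over a 3.5 m path
# its flight time is t, and the mass is recovered from (p, L, t)
t = 3.5 / (0.897 * C) * 1e9     # flight time in ns
m = tof_mass_gev(1.0, 3.5, t)
```

The quadratic dependence on 1/β is why time-of-flight separation degrades quickly at high momentum, motivating the complementary dE/dx and Cherenkov techniques.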
Vlahos, L.; Machado, M. E.; Ramaty, R.; Murphy, R. J.; Alissandrakis, C.; Bai, T.; Batchelor, D.; Benz, A. O.; Chupp, E.; Ellison, D.
1986-01-01
Data are compiled from the Solar Maximum Mission and Hinotori satellites, particle detectors on several satellites, ground-based instruments, and balloon flights in order to answer fundamental questions relating to: (1) the requirements for the coronal magnetic field structure in the vicinity of the energization source; (2) the height (above the photosphere) of the energization source; (3) the time of energization; (4) the transition between coronal heating and flares; (5) evidence for purely thermal, purely nonthermal and hybrid type flares; (6) the time characteristics of the energization source; (7) whether every flare accelerates protons; (8) the location of the interaction site of the ions and relativistic electrons; (9) the energy spectra for ions and relativistic electrons; (10) the relationship between particles at the Sun and in interplanetary space; (11) evidence for more than one acceleration mechanism; (12) whether there is a single mechanism that will accelerate particles to all energies and also heat the plasma; and (13) how fast the existing mechanisms accelerate electrons up to several MeV and ions to 1 GeV.
This century has been the century of the atom's constituents and of elementary particles. The electron was discovered at the very end of the last century, and now we are waiting for experimental confirmation of the existence of the Higgs boson. The discovery of the neutron in 1932 revealed the existence of two new forces: the strong interaction, which counterbalances the repulsive Coulomb force between protons inside the nucleus, and the weak interaction, which triggers the decay of the neutron. Another milestone in particle physics was the replacement of the hadrons (more than 100 particles) by their constituents: a mere mix of 3 quarks and their antiparticles. The gauge principle underlying the standard model was introduced in 1919 by H. Weyl, who later adapted it to electromagnetism. This approach was generalized in 1953, and in 1973-1975 it proved fundamental for all the interactions but gravitation. Today theoretical speculations attempting to unify gravitation with the other interactions are being made, based on superstring and supermembrane models. The authors describe the progress of physics through this century. (A.C.)
Composites applied for pistons
Wieczorek J.; Śleziona J.; Dyzia M.; Dolata-Grosz A.
2007-01-01
A historical account is given of particle physics from the discovery of the electron by J.J. Thomson through to the standard model. It is argued only that the quarks and gluons, as well as the photon, W+, W-, Z0, the electron and the other leptons, are more elementary than nucleons and pions, because their fields appear in a theory, the Standard Model, that applies over a much wider range of energies than the effective field theory that describes nucleons and pions at low energy. No final conclusion about the elementarity of the quarks and gluons, or even of the electrons themselves, can be reached. When we have such a final theory we may find that the elementary structures of physics are not particles at all; until then, no definitive answer can be given to the question of which particles are elementary.
The two main themes of this volume are the standard model of the fundamental interactions (and beyond) and astrophysics. The remarkable advances in the theoretical understanding and experimental confirmation of the standard model were reviewed in several lectures where the reader will find a thorough analysis of recent experiments as well as a detailed comparison of the standard model with experiment. On a more theoretical side, supersymmetry, supergravity and strings were discussed as well. The second theme concerns astrophysics where the school was quite successful in bridging the gap between this fascinating subject and more conventional particle physics
Current state of art in the discovery of new elementary particles is reviewed. At present, quarks and mesons are accepted as the basic constituents of matter. The charmonium model (canti-c system), and the 'open charm' are discussed. Explanations are offered for the recent discovery of the heavy lepton tau. Quark states such as the beauty and taste are also dealt with at length. The properties of the tanti-t bound system are speculated. It is concluded that the understanding of canti-c and banti-b families is facilitated by the assumption of the quarkonium model. Implications at the astrophysical level are indicated. (A.K.)
During the last years, particle spectroscopy has evolved into the spectroscopy of leptons and quarks. This era was initiated in 1974 by the discovery of the J/psi mesons, quickly followed by the new lepton tau and finally the Ypsilon mesons. In this report, the talk concentrates first on one specific class of old hadrons, namely exotics. The main part is then devoted to the new quarks, charm and beauty. As for the exotics, baryonium in its broad and narrow states, dibaryons, and exotic quantum numbers are described. There is no firm evidence for exotic quantum numbers so far; consequently, both experiment and theory have to be improved. Next, new quark spectroscopy is described, covering the quark charge, charm (charmonium, charm particles, F mesons) and beauty. The description of beauty is further divided into the Ypsilon parameters, event topology, quark jets, the change of topology at the Ypsilon, other properties of events in the Ypsilon region, an Ypsilon summary, and the Ypsilon prime. As seen above, in addition to the charm quark, there is now ample evidence for the existence of a new heavy quark which is most probably of the 'beauty' type. To the question whether a 6th quark t would constitute perfect symmetry between leptons and quarks, the answer can now only be: PETRA works, and CESR and PEP will follow soon. (Wakatsuki, Y.)
2014-01-01
Advances in Applied Mechanics draws together recent significant advances in various topics in applied mechanics. Published since 1948, Advances in Applied Mechanics aims to provide authoritative review articles on topics in the mechanical sciences, primarily of interest to scientists and engineers working in the various branches of mechanics, but also of interest to the many who use the results of investigations in mechanics in various application areas, such as aerospace, chemical, civil, en...
Perspectives on Applied Ethics
2007-01-01
Applied ethics is a growing, interdisciplinary field dealing with ethical problems in different areas of society. It includes, for instance, social and political ethics, computer ethics, medical ethics, bioethics, environmental ethics and business ethics, and it also relates to different forms of professional ethics. From the perspective of ethics, applied ethics is a specialisation in one area of ethics. From the perspective of social practice, applying ethics is to focus on ethical aspects and ...
Composites applied for pistons
Wieczorek J.
2007-01-01
In this article the possibility of applying composite materials, cast into a metal mould, to form pistons for compressors is presented. In cooperation with the “Zlotecki” company, a test of casting aluminium-alloy matrix composites reinforced with silicon carbide (SiC) particles, and composites reinforced with a mixture of SiC and amorphous glassy carbon particles, was undertaken under production conditions. On the basis of microstructural investigations, a uniform distribution of the reinforcing particles over the cross-section of the studied pistons was confirmed. The technological tests carried out confirmed the possibility of forming composite pistons with one kind of reinforcing phase, as well as with heterophase reinforcement, using the mould-casting technology.
Dust particle charging in sheath
The charging and the screening of spherical dust particles in sheaths near the wall were studied using computer simulation. The three-dimensional PIC/MCC method and molecular dynamics method were applied to describe plasma particles motion and interaction with macroscopic dust grain. Calculations were carried out at different neutral gas pressures and wall potentials. Values of the charge of the dust particles and spatial distributions of plasma parameters are obtained by modelling. The results have shown that the charge of the dust particles in the sheath, as well as the spatial distribution of the ions and electrons near the dust particles, depend strongly on the wall potential. It is shown that for large negative values of the wall potential the negative charge of a dust particle decreases due to the decline of the electron density in its vicinity. In addition, the flow of energy of the ions on the surface of dust particles is increased due to better focusing effect of the dust particle field on ions.
Applied Neuroscience Laboratory Complex
Federal Laboratory Consortium — Located at WPAFB, Ohio, the Applied Neuroscience lab researches and develops technologies to optimize Airmen individual and team performance across all AF domains....
Particle Swarm Optimization Toolbox
Grant, Michael J.
2010-01-01
The Particle Swarm Optimization Toolbox is a library of evolutionary optimization tools developed in the MATLAB environment. The algorithms contained in the library include a genetic algorithm (GA), a single-objective particle swarm optimizer (SOPSO), and a multi-objective particle swarm optimizer (MOPSO). Development focused on both the SOPSO and MOPSO. A GA was included mainly for comparison purposes, and the particle swarm optimizers appeared to perform better for a wide variety of optimization problems. All algorithms are capable of performing unconstrained and constrained optimization. The particle swarm optimizers are capable of performing single- and multi-objective optimization. The SOPSO and MOPSO algorithms are based on swarming theory and bird-flocking patterns to search the trade space for the optimal solution or optimal trade in competing objectives. The MOPSO generates Pareto fronts for objectives that are in competition. A GA, based on Darwinian evolutionary theory, is also included in the library. The GA consists of individuals that form a population in the design space. The population mates to form offspring at new locations in the design space. These offspring contain traits from both of the parents. The algorithm is based on this combination of traits from parents, hopefully providing an improved solution over either of the original parents. As the algorithm progresses, individuals that hold these optimal traits will emerge as the optimal solutions. Due to the generic design of all optimization algorithms, each algorithm interfaces with a user-supplied objective function. This function serves as a "black-box" to the optimizers, in which the only purpose of this function is to evaluate solutions provided by the optimizers. Hence, the user-supplied function can be numerical simulations, analytical functions, etc., since the specific detail of this function is of no concern to the optimizer. These algorithms were originally developed to support entry
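A generic single-objective PSO of the kind SOPSO implements can be sketched in a few lines; this is a minimal textbook version with illustrative parameters, not the toolbox's actual MATLAB code. The objective function is the user-supplied "black box":

```python
import random

def sopso(f, dim=2, n=15, iters=100, lo=-5.0, hi=5.0, seed=7):
    """Minimal single-objective PSO: inertia + cognitive + social velocity terms."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5                    # inertia, cognitive, social weights
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                        # personal bests
    g = min(P, key=f)[:]                         # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            if f(X[i]) < f(P[i]):                # the "black-box" f is only evaluated
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g, f(g)

best, val = sopso(lambda x: sum(v * v for v in x))   # minimize a 2-D sphere
```

A MOPSO variant keeps an archive of non-dominated personal bests instead of a single global best, which is how the Pareto front is accumulated.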
Acceleration of polarized particles
The spin kinetics of polarized beams in circular accelerators is reviewed in the case of spin-1/2 particles (electrons and protons) with emphasis on the depolarization phenomena. The acceleration of polarized proton beams in synchrotrons is described together with the cures applied to reduce depolarization, including the use of 'Siberian Snakes'. The in-situ polarization of electrons in storage rings due to synchrotron radiation is studied as well as depolarization in presence of ring imperfections. The applications of electron polarization to accurately calibrate the rings in energy and to use polarized beams in colliding-beam experiments are reviewed. (author) 76 refs., 19 figs., 1 tab
Biomimetic folding particle chains
Full text: The sequence of the amino acids in proteins dictates their folded 3-D structure. We have recently shown by simulations that this principle can be applied to flexible strings of isotropically interacting particles with at least one attractive patchy interaction, allowing the design of new materials and structures. Our goal is now to realize this directed self-folding on a colloidal size scale, to study the folding in real time and in real space. We discuss our use of polymer brushes, depletion interactions and liquid-interface scaffold chemistry to realize this goal. (author)
Applied Linguistics: Brazilian Perspectives
Cavalcanti, Marilda C.
2004-01-01
The aim of this paper is to present perspectives in Applied Linguistics (AL) against the background of a historical overview of the field in Brazil. I take the stance of looking at AL as a field of knowledge and as a professional area of research. This point of view directs my reflections towards research-based Applied Linguistics carried out from…
Experimental Investigation of Particle Deagglomeration using Turbulence
The effect of turbulence on powder aerosol deagglomeration was investigated. Two impinging jets were used to generate turbulence. Lactose particles, whose fully dispersed fine particle fraction (FPF) - the number percentage of particles with diameter smaller than 5 μm - is above 90%, were used as the aerosol powder. The particle size distribution after the dispersion unit was measured with a phase Doppler anemometer (PDA), and the turbulence level was quantified at the impingement point of the two jets with a laser Doppler anemometer. As the turbulence level increases, the turbulent time and length scales decrease, and the fine particle fraction (FPF) increases from 36% to 86%.
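The FPF defined above is a simple number-based fraction. A minimal sketch (the function name and the sample diameters are illustrative, not the study's data):

```python
def fine_particle_fraction(diameters_um, cutoff_um=5.0):
    """Number-based fine particle fraction: percentage of particles whose
    diameter is below the cutoff (5 um in the definition used above)."""
    fine = sum(1 for d in diameters_um if d < cutoff_um)
    return 100.0 * fine / len(diameters_um)

sizes = [1.2, 3.4, 4.9, 6.0, 8.5, 2.2, 0.8, 5.5, 4.1, 3.3]
fpf = fine_particle_fraction(sizes)  # 7 of 10 diameters below 5 um -> 70.0
```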
Particle scattering applications in solar panels
Seneviratne, Jehan; Berg, Matthew
2014-03-01
The focus of this work is to apply the scattering characteristics of particles to model particle-assisted solar concentrators. The scattering patterns of particles of different shapes, sizes, and refractive indices are computationally studied using the Discrete Dipole Approximation (DDA). The study investigates the optical behavior of different particle ensembles, and the simulated results are used to explain the observed characteristic behavior. The computational methodology can be used to determine the ideal ensemble of particles to produce the most efficient energy yield in a scattering-based photovoltaic concentrator.
Burnout of pulverized biomass particles in large scale boiler – Single particle model approach
Saastamoinen, Jaakko; Aho, Martti; Moilanen, Antero;
2010-01-01
the particle combustion model is coupled with a one-dimensional equation of motion of the particle, is applied for the calculation of the burnout in the boiler. Owing to its lower density and greater reactivity, biomass can reach complete burnout at a much larger particle size than coal. The...... burner location and the trajectories of the particles might be optimised to maximise the residence time and burnout....
Particle deposition in ventilation ducts
Sippola, Mark R.
2002-09-01
the experimental measurements was applied to evaluate particle losses in supply and return duct runs. Model results suggest that duct losses are negligible for particle sizes less than 1 {micro}m and complete for particle sizes greater than 50 {micro}m. Deposition to insulated ducts, horizontal duct floors and bends is predicted to control losses in duct systems. When combined with models for HVAC filtration and deposition to indoor surfaces to predict the ultimate fates of particles within buildings, these results suggest that ventilation ducts play only a small role in determining indoor particle concentrations, especially when HVAC filtration is present. However, the measured and modeled particle deposition rates are expected to be important for ventilation system contamination.
Development of Particle Flow Calorimetry
Repond, Jose
2011-01-01
This talk reviews the development of imaging calorimeters for the purpose of applying Particle Flow Algorithms (PFAs) to the measurement of hadronic jets at a future lepton collider. After a short introduction, the current status of PFA developments is presented, followed by a review of the major developments in electromagnetic and hadronic calorimetry.
Nucleation of atmospheric particles
Curtius J
2009-01-01
Two types of particles exist in the atmosphere, primary and secondary particles. While primary particles such as soot, mineral dust, sea salt particles or pollen are introduced directly as particles into the atmosphere, secondary particles are formed in the atmosphere by condensation of gases. The formation of such new aerosol particles takes place frequently and at a broad variety of atmospheric conditions and geographic locations. A considerable fraction of the atmospheric particles is form...
Microfabricated particle focusing device
Ravula, Surendra K.; Arrington, Christian L.; Sigman, Jennifer K.; Branch, Darren W.; Brener, Igal; Clem, Paul G.; James, Conrad D.; Hill, Martyn; Boltryk, Rosemary June
2013-04-23
A microfabricated particle focusing device comprises an acoustic portion to preconcentrate particles over large spatial dimensions into particle streams and a dielectrophoretic portion for finer particle focusing into single-file columns. The device can be used for high throughput assays for which it is necessary to isolate and investigate small bundles of particles and single particles.
Experimental entanglement of four particles
Sackett; Kielpinski; King; Langer; Meyer; Myatt; Rowe; Turchette; Itano; Wineland; Monroe
2000-03-16
Quantum mechanics allows for many-particle wavefunctions that cannot be factorized into a product of single-particle wavefunctions, even when the constituent particles are entirely distinct. Such 'entangled' states explicitly demonstrate the non-local character of quantum theory, having potential applications in high-precision spectroscopy, quantum communication, cryptography and computation. In general, the more particles that can be entangled, the more clearly nonclassical effects are exhibited--and the more useful the states are for quantum applications. Here we implement a recently proposed entanglement technique to generate entangled states of two and four trapped ions. Coupling between the ions is provided through their collective motional degrees of freedom, but actual motional excitation is minimized. Entanglement is achieved using a single laser pulse, and the method can in principle be applied to any number of ions. PMID:10749201
Particle identification by silicon detectors
A method is developed for the evaluation of the energy loss, dE/dx, of a charged particle traversing a silicon strip detector. The method is applied to the DELPHI microvertex detector leading to diagrams of dE/dx versus momentum for different particles. The specific case of pions and protons is treated and the most probable value of dE/dx and the width of the dE/dx distribution for those particles in the momentum range of 0.2 GeV/c to 1.5 GeV/c, are obtained. The resolution found is 13.4 % for particles with momentum higher than 2 GeV/c and the separation power is 2.9 for 1.0 GeV/c pions and protons. (author)
Antonella Del Rosso
2014-01-01
These devices are designed to provide a current pulse of 5000 Amps which will in turn generate a fast magnetic pulse that steers the incoming beam into the LHC. Today, the comprehensive upgrade of the LHC injection kicker system is entering its final stages. The upgraded system will ensure the LHC can be refilled without needing to wait for the kicker magnets to cool, thus enhancing the performance of the whole accelerator. An upgraded kicker magnet in its vacuum tank, with an upgraded beam screen. The LHC is equipped with two kicker systems installed at the injection points (near points 2 and 8, see schematic diagram) where the particle beams coming from the SPS are injected into the accelerator’s orbit. Each system comprises four magnets and four pulse generators in which the field rises to 0.12 Tesla in less than 900 nanoseconds and for a duration of approximately 8 microseconds. Although the injection kickers only pulse 12 times to fill the LHC up with beam, the LHC beam circ...
Mesothelioma Applied Research Foundation
Applied Mathematics Seminar 1982
This report contains the abstracts of the lectures delivered at the 1982 Applied Mathematics Seminar of the DPD/LCC/CNPq and the Colloquy on Applied Mathematics of the LCC/CNPq. The Seminar comprised 36 conferences. Among these, 30 were presented by researchers associated with Brazilian institutions (9 of them with the LCC/CNPq), and the other 6 were given by visiting lecturers with the following distribution: 4 from the USA, 1 from England and 1 from Venezuela. The 1981 Applied Mathematics Seminar was organized by Leon R. Sinay and Nelson do Valle Silva. The Colloquy on Applied Mathematics was held from October 1982 onward, organized by Ricardo S. Kubrusly and Leon R. Sinay. (Author)
Papageorgiou, Nikolaos S
2009-01-01
Offers an examination of important theoretical methods and procedures in applied analysis. This book details the important theoretical trends in nonlinear analysis and applications to different fields. It is suitable for those working on nonlinear analysis.
Applying contemporary statistical techniques
Wilcox, Rand R
2003-01-01
Applying Contemporary Statistical Techniques explains why traditional statistical methods are often inadequate or outdated when applied to modern problems. Wilcox demonstrates how new and more powerful techniques address these problems far more effectively, making these modern robust methods understandable, practical, and easily accessible.
* Assumes no previous training in statistics
* Explains how and why modern statistical methods provide more accurate results than conventional methods
* Covers the latest developments on multiple comparisons
* Includes recent advanc
Single-particle behaviour in plasmas
This paper discusses essentially the motion of charged particles in electromagnetic fields. Difficult methods of averaging are explained and applied to calculation of constants of motion. The breakdown of these constants and its consequences on fusion is analyzed
Radiation in Particle Simulations
More, R; Graziani, F; Glosli, J; Surh, M
2010-11-19
Hot dense radiative (HDR) plasmas common to Inertial Confinement Fusion (ICF) and stellar interiors have high temperature (a few hundred eV to tens of keV), high density (tens to hundreds of g/cc) and high pressure (hundreds of megabars to thousands of gigabars). Typically, such plasmas undergo collisional, radiative, atomic and possibly thermonuclear processes. In order to describe HDR plasmas, computational physicists in ICF and astrophysics use atomic-scale microphysical models implemented in various simulation codes. Experimental validation of the models used to describe HDR plasmas is difficult to perform. Direct Numerical Simulation (DNS) of the many-body interactions of plasmas is a promising approach to model validation, but previous work either relies on the collisionless approximation or ignores radiation. We present four methods that attempt a new numerical simulation technique to address a currently unsolved problem: the extension of molecular dynamics to collisional plasmas including emission and absorption of radiation. The first method applies the Lienard-Wiechert solution of Maxwell's equations for a classical particle whose motion is assumed to be known. The second method expands the electromagnetic field in normal modes (plane waves in a box with periodic boundary conditions) and solves the equation for wave amplitudes coupled to the particle motion. The third method is a hybrid molecular dynamics/Monte Carlo (MD/MC) method which calculates radiation emitted or absorbed by electron-ion pairs during close collisions. The fourth method is a generalization of the third method to include small clusters of particles emitting radiation during close encounters: one electron simultaneously hitting two ions, two electrons simultaneously hitting one ion, etc. This approach is inspired by the virial expansion method of equilibrium statistical mechanics. Using a combination of these methods we believe it is possible to do atomic-scale particle
Bayesian target tracking based on particle filter
[None listed]
2005-01-01
To deal with nonlinear or non-Gaussian problems, particle filters have been studied by many researchers. Building on the particle filter, the extended Kalman filter (EKF) proposal function is applied to Bayesian target tracking, and novel techniques such as the Markov chain Monte Carlo (MCMC) method and the resampling step are also introduced. The simulation results confirm that the improved particle filter with these techniques outperforms the basic one.
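The basic particle filter that such techniques improve on can be sketched as a bootstrap filter: propagate particles through the state model, weight them by the observation likelihood, and resample. This is a generic scalar illustration under assumed model coefficients and noise levels (it uses the transition prior as the proposal, not the EKF proposal discussed above):

```python
import math, random

def particle_filter(observations, n=500, q=0.5, r=0.5):
    """Bootstrap particle filter for the scalar model
       x_t = 0.9 * x_{t-1} + N(0, q^2),  y_t = x_t + N(0, r^2)."""
    particles = [random.gauss(0.0, 1.0) for _ in range(n)]
    estimates = []
    for y in observations:
        # propagate each particle through the transition model (the proposal)
        particles = [0.9 * x + random.gauss(0.0, q) for x in particles]
        # weight by the Gaussian likelihood of the observation
        weights = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # multinomial resampling to combat weight degeneracy
        particles = random.choices(particles, weights=weights, k=n)
    return estimates

# simulate a ground-truth track and noisy observations, then filter
random.seed(0)
truth, ys, x = [], [], 0.0
for _ in range(50):
    x = 0.9 * x + random.gauss(0.0, 0.5)
    truth.append(x)
    ys.append(x + random.gauss(0.0, 0.5))
est = particle_filter(ys)
rmse = math.sqrt(sum((e - t) ** 2 for e, t in zip(est, truth)) / len(truth))
```

The filtered estimate should track the truth more closely than the raw observations, whose noise standard deviation is 0.5 here.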
Vector particles tunneling from BTZ black holes
Chen, Ge-Rui; Huang, Yong-Chang
2014-01-01
In this paper we investigate vector particles' Hawking radiation from a BTZ black hole. By applying the WKB approximation and the Hamilton-Jacobi Ansatz to the Proca equation, we obtain the tunneling spectrum of vector particles. The expected Hawking temperature is recovered.
Vector particles tunneling from BTZ black holes
Chen, Ge-Rui; Zhou, Shiwei; Huang, Yong-Chang
2015-11-01
In this paper we investigate vector particles' Hawking radiation from a Banados-Teitelboim-Zanelli (BTZ) black hole. By applying the Wentzel-Kramers-Brillouin (WKB) approximation and the Hamilton-Jacobi ansatz to the Proca equation, we obtain the tunneling spectrum of vector particles. The expected Hawking temperature is recovered.
Oscillatory regime of avalanche particle detectors
We describe the model of an avalanche high-energy particle detector consisting of two pn-junctions connected through an intrinsic semiconductor, with a reverse-bias voltage applied. We show that this detector is able to generate an oscillatory response to the passage of a single particle through the structure. The possibility of oscillations leading to chaotic behaviour is pointed out. (author). 15 refs, 7 figs
Janka, K. [Dekati Oy, Tampere (Finland)
2006-10-15
The project deals with development of the basic phenomena and mechanisms utilised in aerosol particle measurement techniques. The areas under development are: particle-charging techniques, photoelectric charging, particle concentration using the virtual-impactor technique, and optical characterisation techniques for particles. The results will be applied to bioaerosol detection techniques, particle emission sensors for diesel exhaust gases, and widening the application areas of existing measurement techniques. (orig.)
Giant Negative Mobility of Janus Particles in a Corrugated Channel
Ghosh, Pulak K.; Hanggi, Peter; Marchesoni, Fabio; Nori, Franco
2014-01-01
We numerically simulate the transport of elliptic Janus particles along narrow two-dimensional channels with reflecting walls. The self-propulsion velocity of the particle is oriented along either their major (prolate) or minor axis (oblate). In smooth channels, we observe long diffusion transients: ballistic for prolate particles and zero-diffusion for oblate particles. Placed in a rough channel, prolate particles tend to drift against an applied drive by tumbling over the wall protrusions; ...
Applied Literature for Healing,
Susanna Marie Anderson
2014-11-01
Full Text Available In this qualitative research study, interviews conducted with elite participants serve to reveal the underlying elements that unite the richly diverse emerging field of Applied Literature. The basic interpretative qualitative method included a thematic analysis of data from the interviews, yielding numerous common elements that were then distilled into key themes that elucidated the beneficial effects of engaging consciously with literature. These themes included developing a stronger sense of self in balance with an increasing connection with community; providing a safe container to engage challenging and potentially overwhelming issues from a stance of empowered action; and fostering a healing space for creativity. The findings provide grounds for uniting the work being done in a range of helping professions into a cohesive field of Applied Literature, which offers effective tools for healing, transformation and empowerment. Keywords: Applied Literature, Bibliotherapy, Poetry Therapy, Arts in Corrections, Arts in Medicine
PSYCHOANALYSIS AS APPLIED AESTHETICS.
Richmond, Stephen H
2016-07-01
The question of how to place psychoanalysis in relation to science has been debated since the beginning of psychoanalysis and continues to this day. The author argues that psychoanalysis is best viewed as a form of applied art (also termed applied aesthetics) in parallel to medicine as applied science. This postulate draws on a functional definition of modernity as involving the differentiation of the value spheres of science, art, and religion. The validity criteria for each of the value spheres are discussed. Freud is examined, drawing on Habermas, and seen to have erred by claiming that the psychoanalytic method is a form of science. Implications for clinical and metapsychological issues in psychoanalysis are discussed. PMID:27428582
Applied chemical engineering thermodynamics
Tassios, Dimitrios P
1993-01-01
Applied Chemical Engineering Thermodynamics provides the undergraduate and graduate student of chemical engineering with the basic knowledge, the methodology and the references needed to apply it in industrial practice. Thus, in addition to the classical topics of the laws of thermodynamics, pure-component and mixture thermodynamic properties, and phase and chemical equilibria, the reader will find: history of thermodynamics; energy conservation; intermolecular forces and molecular thermodynamics; cubic equations of state; statistical mechanics. A great number of calculated problems with solutions and an appendix with numerous tables of numbers of practical importance are extremely helpful for applied calculations. The computer programs on the included disk help the student to become familiar with the typical methods used in industry for volumetric and vapor-liquid equilibria calculations.
Introduction to applied thermodynamics
Helsdon, R M; Walker, G E
1965-01-01
Introduction to Applied Thermodynamics is an introductory text on applied thermodynamics and covers topics ranging from energy and temperature to reversibility and entropy, the first and second laws of thermodynamics, and the properties of ideal gases. Standard air cycles and the thermodynamic properties of pure substances are also discussed, together with gas compressors, combustion, and psychrometry. This volume is comprised of 16 chapters and begins with an overview of the concept of energy as well as the macroscopic and molecular approaches to thermodynamics. The following chapters focus o
Huizingh, Eelko K R E
2007-01-01
Accessibly written and easy to use, Applied Statistics Using SPSS is an all-in-one self-study guide to SPSS and do-it-yourself guide to statistics. Based around the needs of undergraduate students embarking on their own research project, the text's self-help style is designed to boost the skills and confidence of those that will need to use SPSS in the course of doing their research project. The book is pedagogically well developed and contains many screen dumps and exercises, glossary terms and worked examples. Divided into two parts, Applied Statistics Using SPSS covers:
Applied mathematics made simple
Murphy, Patrick
1982-01-01
Applied Mathematics: Made Simple provides an elementary study of the three main branches of classical applied mathematics: statics, hydrostatics, and dynamics. The book begins with discussion of the concepts of mechanics, parallel forces and rigid bodies, kinematics, motion with uniform acceleration in a straight line, and Newton's law of motion. Separate chapters cover vector algebra and coplanar motion, relative motion, projectiles, friction, and rigid bodies in equilibrium under the action of coplanar forces. The final chapters deal with machines and hydrostatics. The standard and conte
Retransmission Steganography Applied
Mazurczyk, Wojciech; Szczypiorski, Krzysztof
2010-01-01
This paper presents experimental results of the implementation of network steganography method called RSTEG (Retransmission Steganography). The main idea of RSTEG is to not acknowledge a successfully received packet to intentionally invoke retransmission. The retransmitted packet carries a steganogram instead of user data in the payload field. RSTEG can be applied to many network protocols that utilize retransmissions. We present experimental results for RSTEG applied to TCP (Transmission Control Protocol) as TCP is the most popular network protocol which ensures reliable data transfer. The main aim of the performed experiments was to estimate RSTEG steganographic bandwidth and detectability by observing its influence on the network retransmission level.
Mapping chaos in particle revolutions
The relatively new technique of frequency map analysis has over the last 10 years turned out to be very effective for the analysis of numerical simulations in physical systems ranging even beyond the solar system to galaxies and back again to particle accelerators, particularly for systems with three or more degrees of freedom. More recently, with an eye towards revealing the dynamics of an actual particle beam, it has been applied for the first time to measured rather than simulated electron trajectories in a storage ring, in this case at the Advanced Light Source (ALS) at the Ernest Orlando Lawrence Berkeley National Laboratory (Berkeley Lab)
Applying Mathematical Processes (AMP)
Kathotia, Vinay
2011-01-01
This article provides insights into the "Applying Mathematical Processes" resources, developed by the Nuffield Foundation. It features Nuffield AMP activities--and related ones from Bowland Maths--that were designed to support the teaching and assessment of key processes in mathematics--representing a situation mathematically, analysing,…
Szapacs, Cindy
2006-01-01
Teaching strategies that work for typically developing children often do not work for those diagnosed with an autism spectrum disorder. However, teaching strategies that work for children with autism do work for typically developing children. In this article, the author explains how the principles and concepts of Applied Behavior Analysis can be…
Essays on Applied Microeconomics
Mejia Mantilla, Carolina
2013-01-01
Each chapter of this dissertation studies a different question within the field of Applied Microeconomics. The first chapter examines the mid- and long-term effects of the 1998 Asian Crisis on the educational attainment of Indonesian children ages 6 to 18, at the time of the crisis. The effects are identified as deviations from a linear trend for…
Applied singular integral equations
Mandal, B N
2011-01-01
The book is devoted to varieties of linear singular integral equations, with special emphasis on their methods of solution. It introduces the singular integral equations and their applications to researchers as well as graduate students of this fascinating and growing branch of applied mathematics.
Huizingh, Eelko K. R. E.
2007-01-01
Accessibly written and easy to use, "Applied Statistics Using SPSS" is an all-in-one self-study guide to SPSS and do-it-yourself guide to statistics. What is unique about Eelko Huizingh's approach is that this book is based around the needs of undergraduate students embarking on their own research project, and its self-help style is designed to…
Review of Particle Physics Particle Data Group
Beringer, J.; Arguin, J.-F.; R.M. Barnett; Copic, K.; Dahl, O; Groom, D. E.; Lin, C.-J.; Lys, J.; Murayama, H.; Wohl, C. G.; Yao, W.-M.; Zyla, P. A.; Amsler, C; Antonelli, M.; Asner, D. M.
2012-01-01
This biennial Review summarizes much of particle physics. Using data from previous editions, plus 2658 new measurements from 644 papers, we list, evaluate, and average measured properties of gauge bosons, leptons, quarks, mesons, and baryons. We summarize searches for hypothetical particles such as Higgs bosons, heavy neutrinos, and supersymmetric particles. All the particle properties and search limits are listed in Summary Tables. We also give numerous tables, figures, formulae, and reviews...
Surgical smoke and ultrafine particles
Nowak Dennis
2008-12-01
Full Text Available Abstract. Background: Electrocautery, laser tissue ablation, and ultrasonic scalpel tissue dissection all generate a 'surgical smoke' containing ultrafine particles. Methods: To measure the amount of particulate generated in 'surgical smoke' during different surgical procedures and to quantify the particle number concentration for operating room personnel, a condensation particle counter (CPC, model 3007, TSI Inc.) was applied. Results: Electro-cauterization and argon plasma tissue coagulation induced the production of a very high number concentration (> 100000 cm-3) of particles in the diameter range of 10 nm to 1 μm. The peak concentration was confined to the immediate surroundings of the production site. In the presence of a very efficient air-conditioning system, the rise and fall of ultrafine particle occurrence was a matter of seconds, with lower particle number concentrations accumulating in the operating room for only a few minutes. Conclusion: Our investigation showed a short-term, very high exposure to ultrafine particles for surgeons and close assisting operating personnel, alternating with longer periods of low exposure.
Plasma physics via particle simulation
Plasmas are studied by following the motion of many particles in applied and self-fields: analytically, experimentally and computationally. Plasmas for magnetic fusion energy devices are very hot, nearly collisionless and magnetized, with scale lengths of many ion gyroradii and Debye lengths. The analytic studies of such plasmas are very difficult, as the plasma is nonuniform, anisotropic and nonlinear. The experimental studies have become very expensive in time and money as the size, density and temperature approach fusion reactor values. Computational studies using many particles and/or fluids have complemented both theories and experiments for many years and have progressed to fully three-dimensional electromagnetic models, albeit with hours of running times on the fastest, largest computers. Particle simulation methods are presented in some detail, showing particle advance from acceleration to velocity to position, followed by calculation of the fields from charge and current densities, and then further particle advance, and so on. Limitations due to the time stepping and the use of a spatial grid are given, to avoid inaccuracies and instabilities. Examples are given for a one-dimensional electrostatic program, for an orbit-averaging program, and for a three-dimensional electromagnetic program. Applications of particle simulations of plasmas in magnetic and inertial fusion devices continue to grow, as well as to plasmas and beams in peripheral devices, such as sources, accelerators, and converters. (orig.)
Conical Intersections from Particle-Particle Random Phase and Tamm-Dancoff Approximations.
Yang, Yang; Shen, Lin; Zhang, Du; Yang, Weitao
2016-07-01
The particle-particle random phase approximation (pp-RPA) and the particle-particle Tamm-Dancoff approximation (pp-TDA) are applied to the challenging conical intersection problem. Because they describe the ground and excited states on the same footing and naturally take into account the interstate interaction, these particle-particle methods, especially the pp-TDA, can correctly predict the dimensionality of the conical intersection seam as well as describe the potential energy surface in the vicinity of conical intersections. Though the bond length of conical intersections is slightly underestimated compared with the complete-active-space self-consistent field (CASSCF) theory, the efficient particle-particle methods are promising for conical intersections and nonadiabatic dynamics. PMID:27293013
Estimation of particle magnetic moment distribution for antiferromagnetic ferrihydrite nanoparticles
Magnetization as a function of applied magnetic field at different temperatures for antiferromagnetic nanoparticles of ferrihydrite is measured and analyzed considering a distribution in particle magnetic moment. We find that the magnetization of this nanoparticle system is affected by the presence of a particle magnetic moment distribution, and this distribution is estimated at different temperatures.
Highlights:
• The magnetic behavior of the nanoparticle system is affected by the presence of a particle magnetic moment distribution.
• One cannot obtain correct and physically meaningful fit parameters if the particle magnetic moment distribution is ignored.
• The particle magnetic moment distribution is estimated from the magnetization data for 2 nm antiferromagnetic ferrihydrite particles.
Measurement of Turbulence Modulation by Non-Spherical Particles
Mandø, Matthias; Rosendahl, Lasse
2010-01-01
The change in the turbulence intensity of an air jet resulting from the addition of particles to the flow is measured using Laser Doppler Anemometry. Three distinct shapes are considered: the prolate spheroid, the disk and the sphere. Measurements of the carrier phase and particle phase velocities...... particle size, the particle mass flow and the integral length scale of the flow. The expression developed on the basis of spherical particles only is applied to the data for the non-spherical particles. The results suggest that non-spherical particles attenuate the carrier phase turbulence significantly more...
Laser and Particle Guiding Micro-Elements for Particle Accelerators
Plettner, T.; Gaume, R.; Wisdom, J.; /Stanford U., Phys. Dept.; Spencer, J.; /SLAC
2005-06-07
Laser-driven particle accelerators require sub-micron control of the laser field as well as precise electron-beam guiding, so fabrication techniques that allow integrating both elements into an accelerator-on-chip format become critical for the success of such next-generation machines. Micromachining technology for silicon was shown to be one such feasible technology at PAC2003 [1], but with a variety of complications on the laser side. However, fabrication of transparent ceramics has become an interesting technology that could be applied to laser-particle accelerators in several ways. We discuss the advantages, such as the range of materials available and ways to implement them, followed by some different test examples we have considered. One important goal is an integrated system that avoids having to inject either laser or particle pulses into these structures.
Accelerator system and method of accelerating particles
Wirz, Richard E. (Inventor)
2010-01-01
An accelerator system and method that utilize dust as the primary mass flux for generating thrust are provided. The accelerator system can include an accelerator capable of operating in a self-neutralizing mode and having a discharge chamber and at least one ionizer capable of charging dust particles. The system can also include a dust particle feeder that is capable of introducing the dust particles into the accelerator. By applying a pulsed positive and negative charge voltage to the accelerator, the charged dust particles can be accelerated thereby generating thrust and neutralizing the accelerator system.
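As a back-of-the-envelope sketch of the scaling behind such a dust accelerator (the grain mass, charge, potential drop, and mass flow below are hypothetical placeholders, not values from the patent), the exhaust speed follows from energy conservation across the accelerating potential, and thrust from the momentum flux:

```python
import math

def exhaust_velocity(charge_C, mass_kg, accel_voltage_V):
    """Speed gained by a charged dust grain falling through a potential
    drop: (1/2) m v^2 = q V, so v = sqrt(2 q V / m)."""
    return math.sqrt(2.0 * charge_C * accel_voltage_V / mass_kg)

def thrust(mass_flow_kg_s, v_exhaust):
    """Ideal thrust: mass flux leaving at a uniform exhaust velocity."""
    return mass_flow_kg_s * v_exhaust

# Hypothetical micron-scale grain: ~1.15e-15 kg, carrying ~1e5 elementary
# charges (1.6e-14 C), accelerated through 10 kV.
v = exhaust_velocity(1.6e-14, 1.15e-15, 1.0e4)
F = thrust(1e-6, v)  # thrust at a 1 mg/s dust feed rate
```

The low charge-to-mass ratio of dust (compared with ions) is what keeps exhaust velocities modest and thrust-per-watt comparatively high.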
Particle methods: An introduction with applications
Del Moral, Pierre
2014-01-01
Interacting particle methods are increasingly used to sample from complex high-dimensional distributions. They have found a wide range of applications in applied probability, Bayesian statistics and information engineering. Rigorously understanding these new Monte Carlo simulation tools leads to fascinating mathematics related to Feynman-Kac path integral theory and its interacting particle interpretations. In these lecture notes, we provide a pedagogical introduction to the stochastic modeling and theoretical analysis of these particle algorithms. We also illustrate the methods through several applications, including random walk confinements, particle absorption models, nonlinear filtering, stochastic optimization, combinatorial counting and directed polymer models.
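A minimal sketch of the interacting-particle mechanism these notes study, here one predict-weight-resample cycle of a bootstrap particle filter; the transition and likelihood functions are user-supplied placeholders, not anything prescribed by the lecture notes:

```python
import random

def resample(particles, weights, rng=random):
    """Multinomial resampling: draw len(particles) samples with
    probability proportional to weight (the 'interaction' step)."""
    total = sum(weights)
    cum, acc = [], 0.0
    for w in weights:
        acc += w / total
        cum.append(acc)
    out = []
    for _ in particles:
        u = rng.random()
        idx = len(cum) - 1
        for i, c in enumerate(cum):   # first index whose cumulative weight covers u
            if c >= u:
                idx = i
                break
        out.append(particles[idx])
    return out

def particle_filter_step(particles, transition, likelihood, observation, rng=random):
    """One predict-weight-resample cycle of a bootstrap particle filter."""
    proposed = [transition(x, rng) for x in particles]
    weights = [likelihood(observation, x) for x in proposed]
    return resample(proposed, weights, rng)
```

Resampling kills low-weight particles and duplicates high-weight ones, which is exactly the mean-field interaction that the Feynman-Kac formalism makes rigorous.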
Monique Duval
2004-01-01
Please note that Paul Kunz will be giving his very popular and highly recommended C++ course again on 15 - 19 November. The course costs 200 CHF, and advance registration is required. People with CERN EDH accounts can apply electronically directly from the Web course description page: Team Visitors should ask their Group Leader to send an e-mail to the DTO of PH Department, M. Burri, referring to the 'C++ for Particle Physicists' course and giving their name, CERN ID number, the Team account number to which the course fee should be charged, and VERY IMPORTANTLY an email address to which an invitation to the course can be sent. ENSEIGNEMENT TECHNIQUE TECHNICAL TRAINING Monique Duval 74924 technical.training@cern.ch
Radiation in Particle Simulations
More, R M; Graziani, F R; Glosli, J; Surh, M
2009-06-15
Hot dense radiative (HDR) plasmas common to Inertial Confinement Fusion (ICF) and stellar interiors have high temperature (a few hundred eV to tens of keV), high density (tens to hundreds of g/cc) and high pressure (hundreds of megabars to thousands of gigabars). Typically, such plasmas undergo collisional, radiative, atomic and possibly thermonuclear processes. In order to describe HDR plasmas, computational physicists in ICF and astrophysics use atomic-scale microphysical models implemented in various simulation codes. Experimental validation of the models used to describe HDR plasmas is difficult to perform. Direct Numerical Simulation (DNS) of the many-body interactions of plasmas is a promising approach to model validation, but previous work either relies on the collisionless approximation or ignores radiation. We present four methods that attempt a new numerical simulation technique to address a currently unsolved problem: the extension of molecular dynamics to collisional plasmas including emission and absorption of radiation. The first method applies the Liénard-Wiechert solution of Maxwell's equations for a classical particle whose motion is assumed to be known (section 3). The second method expands the electromagnetic field in normal modes (plane waves in a box with periodic boundary conditions) and solves the equation for wave amplitudes coupled to the particle motion (section 4). The third method is a hybrid MD/MC (molecular dynamics/Monte Carlo) method which calculates radiation emitted or absorbed by electron-ion pairs during close collisions (section 5). The fourth method is a generalization of the third to include small clusters of particles emitting radiation during close encounters: one electron simultaneously hitting two ions, two electrons simultaneously hitting one ion, etc. (section 6). This approach is inspired by the virial expansion method of equilibrium statistical mechanics.
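As an illustrative back-of-the-envelope (not the paper's algorithm), the radiation produced during a close electron-ion collision can be estimated from the nonrelativistic Larmor formula, with the acceleration set by the ion's Coulomb field; the 1 Å separation used in the example is an arbitrary test value:

```python
import math

E0 = 8.8541878128e-12   # vacuum permittivity, F/m
QE = 1.602176634e-19    # elementary charge, C
ME = 9.1093837015e-31   # electron mass, kg
C  = 2.99792458e8       # speed of light, m/s

def larmor_power(accel):
    """Nonrelativistic Larmor formula: power (W) radiated by a charge QE
    undergoing acceleration accel (m/s^2)."""
    return QE**2 * accel**2 / (6.0 * math.pi * E0 * C**3)

def coulomb_accel(Z, r):
    """Electron acceleration (m/s^2) in the field of a charge-Z ion at
    separation r (m)."""
    return Z * QE**2 / (4.0 * math.pi * E0 * r**2 * ME)
```

This is the kind of per-pair emission estimate a hybrid MD/MC step would tabulate during close encounters, scaled here to a single classical electron-proton pair.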
Applied Control Systems Design
Mahmoud, Magdi S
2012-01-01
Applied Control System Design examines several methods for building up systems models based on real experimental data from typical industrial processes, incorporating system identification techniques. The text takes a comparative approach to the models derived in this way, judging their suitability for use in different systems and under different operational circumstances. A broad spectrum of control methods, including various forms of filtering, feedback and feedforward control, is applied to the models, and the guidelines derived from the closed-loop responses are then composed into a concrete, self-tested recipe to serve as a checklist for industrial engineers or control designers. System identification and control design are given equal weight in model derivation and testing to reflect their equal importance in the proper design and optimization of high-performance control systems. Readers' assimilation of the material discussed is assisted by the provision of problems and examples. Most of these e...
Essays in Applied Econometrics
Michèle A. Weynandt
2014-01-01
This thesis includes three essays in applied econometrics. The first and third chapters focus on labor market outcomes of minority group members, while the second focuses on education. Chapter 1 deals with the relationship between sexual orientation, gender, partnership, and labor outcomes. I suggest that if there are compensating differentials and a gender gap in potential wages, an income effect can lead partnered gay men to jobs with lower wages and higher amenities than partnered straight...
朱红萍
2009-01-01
This paper explains some everyday phenomena in teaching and class management from an economic point of view. Some basic economic principles mentioned therein are: everything has an opportunity cost; the marginal utility of consumption of any kind is diminishing; game theory is everywhere. Applying these economic theories to teaching helps teachers understand students' behavior and thus improve teaching effectiveness and efficiency.
Methods of applied mathematics
Hildebrand, Francis B
1992-01-01
This invaluable book offers engineers and physicists working knowledge of a number of mathematical facts and techniques not commonly treated in courses in advanced calculus, but nevertheless extremely useful when applied to typical problems in many different fields. It deals principally with linear algebraic equations, quadratic and Hermitian forms, operations with vectors and matrices, the calculus of variations, and the formulations and theory of linear integral equations. Annotated problems and exercises accompany each chapter.
Spichkova, Maria
2016-01-01
Logic not only helps to solve complicated and safety-critical problems, but also disciplines the mind and helps to develop abstract thinking, which is very important for any area of Engineering. In this technical report, we present an overview of common challenges in teaching of formal methods and discuss our experiences from the course Applied Logic in Engineering. This course was taught at TU Munich, Germany, in Winter Semester 2012/2013.
Fitzmaurice, Garrett M; Ware, James H
2012-01-01
Praise for the First Edition: ". . . [this book] should be on the shelf of everyone interested in . . . longitudinal data analysis." -Journal of the American Statistical Association. Features newly developed topics and applications of the analysis of longitudinal data. Applied Longitudinal Analysis, Second Edition presents modern methods for analyzing data from longitudinal studies and now features the latest state-of-the-art techniques. The book emphasizes practical, rather than theoretical, aspects of methods for the analysis of diverse types of lo
Essays on Applied Microeconomics
Lee, Hoan Soo
2013-01-01
Empirical and theoretical topics in applied microeconomics are discussed in this dissertation. The first essay identifies and measures managerial advantages from access to high-quality deals in venture capital investments. The underlying social network of Harvard Business School MBA venture capitalists and entrepreneurs is used to proxy availability of deal access. Random section assignment of HBS MBA graduates provides a key exogenous variation for identification. Being socially connected to...
Applied statistics for economists
Lewis, Margaret
2012-01-01
This book is an undergraduate text that introduces students to commonly-used statistical methods in economics. Using examples based on contemporary economic issues and readily-available data, it not only explains the mechanics of the various methods, it also guides students to connect statistical results to detailed economic interpretations. Because the goal is for students to be able to apply the statistical methods presented, online sources for economic data and directions for performing each task in Excel are also included.
Applied heterogeneous catalysis
This reference book explains the scientific principles of heterogeneous catalysis while also providing details on the methods used to develop commercially viable catalyst products. A section of the book presents reactor design engineering theory and practices for the profitable application of these catalysts in large-scale industrial processes. A description of the mechanisms and commercial applications of catalysis is followed by a review of catalytic reaction kinetics. There are five chapters on selecting catalyst agents, developing and preparing industrial catalysts, measuring catalyst properties, and analyzing the physico-chemical characteristics of solid catalyst particles. The final chapter reviews the elements of catalytic reactor design, with emphasis on flow regimes vs. reactor types, heat and mass transfer in reactor beds, single- and multi-phase flows, and the effects of thermodynamics and other catalyst properties on the process flow scheme
Holographic interferometry for aerosol particle characterization
Berg, Matthew J.; Subedi, Nava R.
2015-01-01
Using simulations based on Mie theory, this work shows how double-exposure digital holography can be used to measure the change in size of an expanding, or contracting, spherical particle. Here, a single particle is illuminated by a plane wave twice during its expansion: once when the particle is 27 λ in radius, and again when it is 47 λ. A hologram is formed from each illumination stage from the interference of the scattered and unscattered, i.e., incident, light. The two holograms are then superposed to form a double exposure. By applying the Fresnel-Kirchhoff diffraction theory to the double-exposed hologram, a silhouette-like image of the particle is computationally reconstructed that is superposed with interference fringes. These fringes are a direct result of the change in particle size occurring between the two illumination stages. The study finds that expansion on the scale of ~ 6 λ is readily discerned from the reconstructed particle image. This work could be important for improved characterization of single and multiple aerosol particles in situ. For example, by illuminating an aerosol particle with infrared light, it may be possible to measure photothermally induced particle expansion, thus providing insight into a particle's material properties simultaneous with an image of the particle.
Particle relabelling transformations in elastodynamics
Al-Attar, David; Crawford, Ophelia
2016-04-01
The motion of a self-gravitating hyperelastic body is described through a time-dependent mapping from a reference body into physical space, and its material properties are determined by a referential density and strain-energy function defined relative to the reference body. Points within the reference body do not have a direct physical meaning, but instead act as particle labels that could be assigned in different ways. We use Hamilton's principle to determine how the referential density and strain-energy functions transform when the particle labels are changed, and describe an associated `particle relabelling symmetry'. We apply these results to linearized elastic wave propagation and discuss their implications for seismological inverse problems. In particular, we show that the effects of boundary topography on elastic wave propagation can be mapped exactly into volumetric heterogeneity while preserving the form of the equations of motion. Several numerical calculations are presented to illustrate our results.
Burnout of pulverized biomass particles in large scale boiler - Single particle model approach
Saastamoinen, Jaakko; Aho, Martti; Moilanen, Antero [VTT Technical Research Centre of Finland, Box 1603, 40101 Jyvaeskylae (Finland); Soerensen, Lasse Holst [ReaTech/ReAddit, Frederiksborgsveij 399, Niels Bohr, DK-4000 Roskilde (Denmark); Clausen, Soennik [Risoe National Laboratory, DK-4000 Roskilde (Denmark); Berg, Mogens [ENERGI E2 A/S, A.C. Meyers Vaenge 9, DK-2450 Copenhagen SV (Denmark)
2010-05-15
Burning of coal and biomass particles are studied and compared by measurements in an entrained flow reactor and by modelling. The results are applied to study the burning of pulverized biomass in a large scale utility boiler originally planned for coal. A simplified single particle approach, where the particle combustion model is coupled with one-dimensional equation of motion of the particle, is applied for the calculation of the burnout in the boiler. The particle size of biomass can be much larger than that of coal to reach complete burnout due to lower density and greater reactivity. The burner location and the trajectories of the particles might be optimised to maximise the residence time and burnout. (author)
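A minimal sketch of the coupled single-particle approach described above, assuming a constant surface-recession (burnout) rate and Stokes drag toward the gas velocity; all property values and rate constants below are hypothetical placeholders, not the authors' model:

```python
import math

def burnout_trajectory(d0, rho_p, k_s, u_gas=1.0, mu_g=4e-5,
                       g=9.81, dt=1e-4, t_max=5.0):
    """Couple a shrinking-sphere burnout law (surface recedes at k_s, m/s)
    with a 1-D equation of motion (Stokes drag plus gravity).
    Returns (burnout_time_s, distance_travelled_m)."""
    d, v, x, t = d0, 0.0, 0.0, 0.0
    while d > 0.01 * d0 and t < t_max:
        tau = rho_p * d**2 / (18.0 * mu_g)   # Stokes relaxation time
        v_inf = u_gas + g * tau              # local terminal velocity
        # exact relaxation step for dv/dt = (u_gas - v)/tau + g,
        # stable even when tau << dt near burnout
        v = v_inf + (v - v_inf) * math.exp(-dt / tau)
        x += v * dt
        d -= 2.0 * k_s * dt                  # diameter shrinks from both sides
        t += dt
    return t, x
```

Because the burnout time scales with initial diameter while the relaxation time scales with its square, a sketch like this makes visible why larger, lighter biomass particles can still burn out within the available residence time.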
The presentation focuses on some of the time-proven and new technologies being used to accomplish radiological work. These techniques can be applied at nuclear facilities to reduce radiation doses and protect the environment. The last reactor plants and processing facilities were shut down, and Hanford was given a new mission to put the facilities in a safe condition, decontaminate them, and prepare them for decommissioning. The skills necessary to operate these facilities were different from the skills needed today to clean up Hanford. Workers were not familiar with many of the tools, equipment, and materials needed to accomplish the new mission, which includes cleanup of contaminated areas in and around all the facilities, recovery of reactor fuel from spent fuel pools, and the removal of millions of gallons of highly radioactive waste from 177 underground tanks. In addition, this work has to be done with a reduced number of workers and a smaller budget. At Hanford, facilities contain a myriad of radioactive isotopes that are located inside plant systems, underground tanks, and the soil. As cleanup work at Hanford began, it became obvious early on that in order to get workers to apply ALARA and use new tools and equipment to accomplish the radiological work, it was necessary to plan the work in advance and get radiological control and/or ALARA committee personnel involved early in the planning process. Emphasis was placed on applying ALARA techniques to reduce dose, limit contamination spread, and minimize the amount of radioactive waste generated. Progress on the cleanup has been steady, and Hanford workers have learned to use different types of engineered controls and ALARA techniques to perform radiological work. The purpose of this presentation is to share lessons learned on how Hanford is accomplishing radiological work
Motion of a particle and the vacuum
Krasnoholovets, V; Krasnoholovets, Volodymyr; Ivanovsky, Dmytro
1993-01-01
We propose the deterministic dynamics of a free particle in a physical vacuum, which is considered as a discrete (quantum) medium. The motion of the particle is studied taking into account its interactions with the medium. It is assumed that this interaction results in the appearance of special virtual excitations, called "inertons," in the vacuum medium in the surroundings of the canonical particle. The solution of the equation of motion shows that a cloud of inertons oscillates around the particle with amplitude $\Lambda = \lambda v/c$, where $\lambda$ is the de Broglie wavelength, $v$ is the initial velocity of the particle, and $c$ is the initial velocity of the inertons (the velocity of light). This oscillating nature of motion also applies to the particle itself, and the de Broglie wavelength $\lambda$ becomes the amplitude of its spatial oscillations. The oscillation frequency $\
Weisberg, Sanford
2005-01-01
Master linear regression techniques with a new edition of a classic text. Reviews of the Second Edition: "I found it enjoyable reading and so full of interesting material that even the well-informed reader will probably find something new . . . a necessity for all of those who do linear regression." -Technometrics, February 1987. "Overall, I feel that the book is a valuable addition to the now considerable list of texts on applied linear regression. It should be a strong contender as the leading text for a first serious course in regression analysis." -American Scientist, May-June 1987
Applied impulsive mathematical models
Stamova, Ivanka
2016-01-01
Using the theory of impulsive differential equations, this book focuses on mathematical models which reflect current research in biology, population dynamics, neural networks and economics. The authors provide the basic background from the fundamental theory and give a systematic exposition of recent results related to the qualitative analysis of impulsive mathematical models. Consisting of six chapters, the book presents many applicable techniques, making them available in a single source easily accessible to researchers interested in mathematical models and their applications. Serving as a valuable reference, this text is addressed to a wide audience of professionals, including mathematicians, applied researchers and practitioners.
Hosmer, David W; Sturdivant, Rodney X
2013-01-01
A new edition of the definitive guide to logistic regression modeling for health science and other applications This thoroughly expanded Third Edition provides an easily accessible introduction to the logistic regression (LR) model and highlights the power of this model by examining the relationship between a dichotomous outcome and a set of covariables. Applied Logistic Regression, Third Edition emphasizes applications in the health sciences and handpicks topics that best suit the use of modern statistical software. The book provides readers with state-of-
ALMEIDA, J.
2009-12-01
Content-Based Image Retrieval (CBIR) is a challenging task. Common approaches use only low-level features. Notwithstanding, such CBIR solutions fail to capture some local features representing the details and nuances of scenes. Many techniques in image processing and computer vision can capture these scene semantics. Among them, the Scale Invariant Feature Transform (SIFT) has been widely used in many applications. This approach relies on the choice of several parameters which directly impact its effectiveness when applied to retrieve images. In this paper, we discuss the results obtained in several experiments proposed to evaluate the application of SIFT in CBIR tasks.
Weisberg, Sanford
2013-01-01
Praise for the Third Edition: "...this is an excellent book which could easily be used as a course text..." -International Statistical Institute. The Fourth Edition of Applied Linear Regression provides a thorough update of the basic theory and methodology of linear regression modeling. Demonstrating the practical applications of linear regression analysis techniques, the Fourth Edition uses interesting, real-world exercises and examples. Stressing central concepts such as model building, understanding parameters, assessing fit and reliability, and drawing conclusions, the new edition illus
Applied energy an introduction
Abdullah, Mohammad Omar
2012-01-01
Introduction to Applied Energy: General Introduction; Energy and Power Basics; Energy Equation; Energy Generation Systems; Energy Storage and Methods; Energy Efficiencies and Losses. Energy Industry and Energy Applications in Small-Medium Enterprises (SME) Industries: Energy Industry; Energy-Intensive Industry; Energy Applications in SME Energy Industries. Energy Sources and Supply: Energy Sources; Energy Supply and Energy Demand; Energy Flow Visualization and Sankey Diagram. Energy Management and Analysis: Energy Audits; Energy Use and Fuel Consumption Study; Energy Life-Cycle Analysis. Energy and Environment: Energy Pollutants, S
Applied nonparametric statistical methods
Sprent, Peter
2007-01-01
While preserving the clear, accessible style of previous editions, Applied Nonparametric Statistical Methods, Fourth Edition reflects the latest developments in computer-intensive methods that deal with intractable analytical problems and unwieldy data sets. Reorganized and with additional material, this edition begins with a brief summary of some relevant general statistical concepts and an introduction to basic ideas of nonparametric or distribution-free methods. Designed experiments, including those with factorial treatment structures, are now the focus of an entire chapter. The text also e
Applied Semantic Web Technologies
Sugumaran, Vijayan
2011-01-01
The rapid advancement of semantic web technologies, along with the fact that they are at various levels of maturity, has left many practitioners confused about the current state of these technologies. Focusing on the most mature technologies, Applied Semantic Web Technologies integrates theory with case studies to illustrate the history, current state, and future direction of the semantic web. It maintains an emphasis on real-world applications and examines the technical and practical issues related to the use of semantic technologies in intelligent information management. The book starts with
Dettman, John W
1965-01-01
Analytic function theory is a traditional subject going back to Cauchy and Riemann in the 19th century. Once the exclusive province of advanced mathematics students, its applications have proven vital to today's physicists and engineers. In this highly regarded work, Professor John W. Dettman offers a clear, well-organized overview of the subject and various applications - making the often-perplexing study of analytic functions of complex variables more accessible to a wider audience. The first half of Applied Complex Variables, designed for sequential study, is a step-by-step treatment of fun
Particle-particle interactions in the Complex Particle Kinetics Method
Larson, David; Hewett, Dennis
2004-11-01
Unlike traditional particle-in-cell (PIC) simulation particles, the CPK (Complex Particle Kinetics) method [1] allows particles with a Gaussian spatial profile and a Maxwellian velocity distribution to evolve self-consistently. Recent ideas for selective merging allow for adaptive resolution of highly dynamic regions. The combination of the CPK algorithm with a recently developed Coulomb collision algorithm [2] allows simulation of plasmas in the previously very costly partially-collisional regime. Plasmas with finite collisionality play a role in many AGEX applications, e.g. high-temperature hohlraums and fast ignition. A new algorithm for particle interaction allows the extension of the CPK method to the continuum regime. We will present progress towards our goal of simulating the transition from continuum to fully kinetic physics, including results from various 1- and 2-D simulations. [1] D.W. Hewett, J. Comp. Phys. 189 (2003). [2] D. J. Larson, J. Comp. Phys. 188 (2003).
Buurmans, I.L.C.
2011-01-01
In this PhD research, the reactivity and acidity of Fluid Catalytic Cracking (FCC) catalysts are investigated at the level of individual catalyst particles. A range of micro-spectroscopic techniques has been applied to visualize both the active zeolite component within the catalyst particles and the matrix components. The most important techniques applied were UV-Vis micro-spectroscopy, confocal fluorescence microscopy, and integrated laser and electron microscopy (a com...
Lyu, Ke; Wang, Guang-Chuan; He, Ya-Ling; Han, Jian-Feng; Ye, Qing; Qin, Cheng-Feng; Chen, Rong
2015-02-01
Hand-foot-and-mouth disease (HFMD) remains a major health concern in the Asia-Pacific regions, and its major causative agents include human enterovirus 71 (EV71) and coxsackievirus A16. A desirable vaccine against HFMD would be multivalent and able to elicit protective responses against multiple HFMD causative agents. Previously, we have demonstrated that a thermostable recombinant EV71 vaccine candidate can be produced by the insertion of a foreign peptide into the BC loop of VP1 without affecting viral replication. Here we present crystal structures of two different naturally occurring empty particles, one from a clinical C4 strain EV71 and the other from its recombinant virus containing an insertion in the VP1 BC loop. Crystal structure analysis demonstrated that the inserted foreign peptide is well exposed on the particle surface without significant structural changes in the capsid. Importantly, such insertions do not seem to affect the virus uncoating process as illustrated by the conformational similarity between an uncoating intermediate of another recombinant virus and that of EV71. Especially, at least 18 residues from the N terminus of VP1 are transiently externalized. Altogether, our study provides insights into vaccine development against HFMD. PMID:25492868
Holographic interferometry for aerosol particle characterization
Using simulations based on Mie theory, this work shows how double-exposure digital holography can be used to measure the change in size of an expanding, or contracting, spherical particle. Here, a single particle is illuminated by a plane wave twice during its expansion: once when the particle is 27λ in radius, and again when it is 47λ. A hologram is formed from each illumination stage from the interference of the scattered and unscattered, i.e., incident, light. The two holograms are then superposed to form a double exposure. By applying the Fresnel–Kirchhoff diffraction theory to the double-exposed hologram, a silhouette-like image of the particle is computationally reconstructed that is superposed with interference fringes. These fringes are a direct result of the change in particle size occurring between the two illumination stages. The study finds that expansion on the scale of ∼6λ is readily discerned from the reconstructed particle image. This work could be important for improved characterization of single and multiple aerosol particles in situ. For example, by illuminating an aerosol particle with infrared light, it may be possible to measure photothermally induced particle expansion, thus providing insight into a particle's material properties simultaneous with an image of the particle. - Highlights: • A computational model to simulate digital holography is developed. • The model is used to image a multi-wavelength sized, expanding spherical particle. • An interferometry technique is described that can measure the particle expansion. • Implications for laboratory-based aerosol particle characterization are described
Review of particle properties. Particle Data Group
This review of the properties of leptons, mesons, and baryons is an updating of Review of Particle Properties, Particle Data Group [Rev. Mod. Phys. 48 (1976) No. 2, Part II; and Supplement, Phys. Lett. 68B (1977) 1]. Data are evaluated, listed, averaged, and summarized in tables. Numerous tables, figures, and formulae of interest to particle physicists are also included. A data booklet is available
Magnetic particles in medical research - a review
Magnetic (or magnetizable) particles have assumed increasing importance in medical and biological research since 1966, when the effect of a magnetic field on the movement of suspended particles was initially studied. In fields like haematology, cell biology, microbiology, biochemistry and immunoassays, they currently provide the basis for separation techniques which previously relied on gravitational forces. Body cells (e.g., blood cells) can be made magnetic by incubating them in a medium containing Fe3O4 particles, which are adsorbed to the membrane surfaces. Some bacteria (also called magnetotactic bacteria) respond to externally applied magnetic lines of force due to their intracellular magnetic particles. These properties are useful in the isolation of such cells/bacteria. In biochemistry, magnetic particles are used to immobilize enzymes without any loss of enzyme activity. The immobilized enzymes facilitate the separation of end products without extensive instrumentation. In immunoassays, the antibodies are covalently linked to polymer-coated iron oxide particles; an electromagnet is used to sediment these particles after the reaction, which excludes the use of a centrifuge to separate antigen-antibody complexes. In pharmacy and pharmacology, magnetic particles are important in drug transport. In techniques like ferrography, nuclear magnetic resonance imaging (NMRI), spectroscopic studies and magnetic resonance imaging (MRI), magnetic particles serve as contrast agents and give clinically important spatial resolution. Magnetic particles also find extensive applications in cancer therapy, genetic engineering, pneumology, nuclear medicine, radiology and many other fields. This article reviews these applications. (author)
Particle trajectory entanglement in microfluidic channels
Marin, Alvaro; Rossi, Massimiliano; Kähler, Christian
2015-11-01
Suspensions in motion can show very complex and counterintuitive behavior, particularly at high concentrations. In this talk we show an overlooked phenomenon occurring when a dilute particle suspension is forced to travel in a narrow channel (only a few times the particle size). At critical interparticle distances, particles tend to interlace their trajectories, forming a sort of hydrocluster bonded only by hydrodynamic interactions. While classical studies on non-Brownian self-diffusivity report average particle displacements of fractions of the particle diameter, the trajectories observed in our system show displacements of several particle diameters. Indeed, such behavior resembles the deterministic trajectories found by Uspal et al. (Nat. Comm. 4, 2013) with engineered particle doublets. Trajectory statistics are obtained for different shear rates and particle sizes. The results are compared with particle dynamics simulations and analyzed in the light of recent studies on the irreversibility of non-Brownian suspensions (Metzger et al., Phys. Rev. E, 2013) to elucidate the nature of the hydrodynamic interactions at play. The reported phenomenon could be applied to promote advective mixing in micro-channels or particle/droplet self-assembly.
A novel method for size-uniform 200 nm particles: multimetallic particles and in vitro gene delivery
Mair, Lamar; Ford, Kris; Superfine, Richard
2008-10-01
We report on the fabrication of arrays of mono- and multimetallic particles via metal evaporation onto lithographically patterned posts. Metal particles evaporated onto cylindrical structures 0.20 μm in diameter and 0.33 μm tall are released via photoresist dissolution, resulting in freely suspended, shape-defined particles. These Post-Particles have highly tunable composition, as demonstrated by our deposition of five different multimetallic particle blends. We calculate the susceptibility and magnetization of 200 nm Fe particles in an applied 0.081 T magnetic field. To evaluate their usefulness as magnetofection agents, an antisense oligonucleotide designed to correct the aberrant splicing of enhanced green fluorescent protein mRNA was successfully attached to Fe Post-Particles via a polyethyleneimine linker and transfected into a modified HeLa cell line.
Applied Plasma Physics is a major sub-organizational unit of the MFE Program. It includes Fusion Plasma Theory and Experimental Plasma Research. The Fusion Plasma Theory group has the responsibility for developing theoretical-computational models in the general areas of plasma properties, equilibrium, stability, transport, and atomic physics. This group is also responsible for giving guidance to the mirror experimental program. There is a formal division of the group into theory and computation; however, in this report the efforts of the two areas are not separated, since many projects have contributions from members of both. Under the Experimental Plasma Research Program, we are developing the intense, pulsed neutral-beam source (IPINS) for the generation of a reversed-field configuration on 2XIIB. We are also studying the feasibility of utilizing certain neutron-detection techniques as plasma diagnostics in the next generation of thermonuclear experiments.
Contributions to Applied Cartography
Radovan Pavić
2012-12-01
With increasing awareness of the importance, advantages and feasibility of representing and visualizing spatial relations and spatial content through corresponding cartography, maps are becoming increasingly frequent and elaborate whenever some aspect of reality needs to be represented from various standpoints: economic, natural-scientific or politological. Some contents practically impose the need for applied cartography, which is especially true of international-political, military, geopolitical and transport issues. Therefore, mass communication media have increasingly accepted and adopted specific cartography as significant content that successfully competes with the importance of the text itself; this is the case everywhere, including in Croatia. The French geographical-political-cartographic school is the model and an exceptional accomplishment. It also has predecessors in the German/Nazi geopolitical school from the first half of the 20th century.
Applied number theory
Niederreiter, Harald
2015-01-01
This textbook effectively builds a bridge from basic number theory to recent advances in applied number theory. It presents the first unified account of the four major areas of application where number theory plays a fundamental role, namely cryptography, coding theory, quasi-Monte Carlo methods, and pseudorandom number generation, allowing the authors to delineate the manifold links and interrelations between these areas. Number theory, which Carl Friedrich Gauss famously dubbed the queen of mathematics, has always been considered a very beautiful field of mathematics, producing lovely results and elegant proofs. While only very few real-life applications were known in the past, today number theory can be found in everyday life: in supermarket bar code scanners, in our cars’ GPS systems, in online banking, etc. Starting with a brief introductory course on number theory in Chapter 1, which makes the book more accessible for undergraduates, the authors describe the four main application areas in Chapters...
Applied mechanics of solids
Bower, Allan F
2009-01-01
Modern computer simulations make stress analysis easy. As they continue to replace classical mathematical methods of analysis, these software programs require users to have a solid understanding of the fundamental principles on which they are based. Applied Mechanics of Solids is a powerful tool for developing the intuitive ability to identify and avoid physically meaningless predictions, and for understanding how to take advantage of these revolutionary computer advances in the field of solid mechanics. Beginning with a description of the physical and mathematical laws that govern deformation in solids, the text presents modern constitutive equations, as well as analytical and computational methods of stress analysis and fracture mechanics. It also addresses the nonlinear theory of deformable rods, membranes, plates, and shells, and solutions to important boundary and initial value problems in solid mechanics. The author uses the step-by-step manner of a blackboard lecture to explain problem solving methods, often providing...
Applied Plasma Physics is a major sub-organizational unit of the MFE Program. It includes Fusion Plasma Theory and Experimental Plasma Research. Fusion Plasma Theory has the responsibility for developing theoretical-computational models in the general areas of plasma properties, equilibrium, stability, transport, and atomic physics. This group is also responsible for giving guidance to the mirror experimental program. There is a formal division of the group into theory and computation; however, in this report the efforts of the two areas are not separated, since many projects have contributions from members of both. Under Experimental Plasma Research, we are developing the intense, pulsed ion-neutral source (IPINS) for the generation of a reversed-field configuration on 2XIIB. We are also studying the feasibility of utilizing certain neutron-detection techniques as plasma diagnostics in the next generation of thermonuclear experiments.
Applied multivariate statistical analysis
Härdle, Wolfgang Karl
2015-01-01
Focusing on high-dimensional applications, this 4th edition presents the tools and concepts used in multivariate data analysis in a style that is also accessible to non-mathematicians and practitioners. It surveys the basic principles and emphasizes both exploratory and inferential statistics; a new chapter on Variable Selection (Lasso, SCAD and Elastic Net) has also been added. All chapters include practical exercises that highlight applications in different multivariate data analysis fields: in quantitative financial studies, where the joint dynamics of assets are observed; in medicine, where recorded observations of subjects in different locations form the basis for reliable diagnoses and medication; and in quantitative marketing, where consumers’ preferences are collected in order to construct models of consumer behavior. All of these examples involve high to ultra-high dimensions and represent a number of major fields in big data analysis. The fourth edition of this book on Applied Multivariate ...
Applied partial differential equations
Logan, J David
2015-01-01
This text presents the standard material usually covered in a one-semester, undergraduate course on boundary value problems and PDEs. Emphasis is placed on motivation, concepts, methods, and interpretation, rather than on formal theory. The concise treatment of the subject is maintained in this third edition, covering all the major ideas: the wave equation, the diffusion equation, the Laplace equation, and the advection equation on bounded and unbounded domains. Methods include eigenfunction expansions, integral transforms, and characteristics. In this third edition, the text remains intimately tied to applications in heat transfer, wave motion, biological systems, and a variety of other topics in pure and applied science. The text offers flexibility to instructors who, for example, may wish to insert topics from biology or numerical methods at any time in the course. The exposition is presented in a friendly, easy-to-read style, with mathematical ideas motivated by physical problems. Many exercises and worked e...