Regularization modeling for large-eddy simulation
Geurts, Bernardus J.; Holm, D.D.
2003-01-01
A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of…
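As a concrete illustration of regularization modeling, the following sketch applies a Leray-type regularization to the 1D Burgers equation: the advecting velocity is smoothed with an explicit, invertible (Helmholtz) filter before it transports the unfiltered field. This is a minimal stand-in for the idea of the abstract, not the authors' specific model; the filter width `alpha`, viscosity, and time step are illustrative assumptions.

```python
import numpy as np

# Hypothetical 1D illustration of a Leray-type regularization: the advecting
# velocity is smoothed with an explicit Helmholtz filter,
# u_bar = (1 - alpha^2 d2/dx2)^(-1) u, so the nonlinearity becomes
# u_bar * du/dx instead of u * du/dx.
N = 256
L = 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi
alpha = 0.1          # filter width (assumption)
nu = 1e-2            # viscosity (assumption)
dt, steps = 1e-3, 500

u = np.sin(x)        # initial condition

def helmholtz_filter(u):
    """Invertible explicit filter: divide by (1 + alpha^2 k^2) in Fourier space."""
    return np.real(np.fft.ifft(np.fft.fft(u) / (1 + alpha**2 * k**2)))

for _ in range(steps):
    u_bar = helmholtz_filter(u)
    dudx = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
    d2udx2 = np.real(np.fft.ifft(-(k**2) * np.fft.fft(u)))
    u = u + dt * (-u_bar * dudx + nu * d2udx2)  # Leray: filtered velocity advects u

print(np.max(np.abs(u)))  # solution stays bounded
```

Because the filter is explicit and invertible, the subgrid term implied by this regularization can be written out exactly, which is the sense in which such models "resolve" the closure problem.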
The Regular Education Initiative: A Deja Vu Remembered with Sadness and Concern.
Silver, Larry B.
1991-01-01
This article compares the ideals of the regular education initiative to provide services for learning-disabled students within the regular classroom to the ideals and resulting negative effects (e.g., homelessness) of the deinstitutionalization of the mentally ill during the 1960s. Resistance to efforts to decrease or eliminate special education…
Simulation of Initiation in Hexanitrostilbene
Thompson, Aidan; Shan, Tzu-Ray; Yarrington, Cole; Wixom, Ryan
We report on the effect of isolated voids and pairs of nearby voids on hot spot formation, growth and chemical reaction initiation in hexanitrostilbene (HNS) crystals subjected to shock loading. Large-scale, reactive molecular dynamics simulations are performed using the reactive force field (ReaxFF) as implemented in the LAMMPS software. The ReaxFF force field description for HNS has been validated previously by comparing the isothermal equation of state to available diamond anvil cell (DAC) measurements and density functional theory (DFT) calculations. Micron-scale molecular dynamics simulations of a supported shockwave propagating in HNS crystal along the [010] orientation are performed (up = 1.25 km/s, Us = 4.0 km/s, P = 11 GPa). We compare the effect on hot spot formation and growth rate of isolated cylindrical voids up to 0.1 µm in size with that of two 50 nm voids set 100 nm apart. Results from the micron-scale atomistic simulations are compared with hydrodynamics simulations. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE National Nuclear Security Administration under Contract DE-AC04-94AL85000.
Age-related patterns of drug use initiation among polydrug using regular psychostimulant users.
Darke, Shane; Kaye, Sharlene; Torok, Michelle
2012-09-01
To determine age-related patterns of drug use initiation, drug sequencing and treatment entry among regular psychostimulant users. Cross-sectional study of 269 regular psychostimulant users, administered a structured interview examining onset of use for major licit and illicit drugs. The mean age at first intoxication was not associated with age or gender. In contrast, younger age was associated with earlier ages of onset for all of the illicit drug classes. Each additional year of age was associated with a 4-month increase in onset age for methamphetamine, and 3 months for heroin. By the age of 17, those born prior to 1961 had, on average, used only tobacco and alcohol, whereas those born between 1986 and 1990 had used nine different drug classes. The period between initial use and the transition to regular use, however, was stable. Age was also negatively correlated with both age at initial injection and regular injecting. Onset sequences, however, remained stable. Consistent with the age-related patterns of drug use, each additional year of age was associated with a 0.47-year increase in the age at first treatment. While the age at first intoxication appeared stable, the trajectory through illicit drug use was substantially truncated. The data indicate that, at least among those who progress to regular illicit drug use, younger users are likely to be exposed to far broader polydrug use in their teens than has previously been the case. © 2012 Australasian Professional Society on Alcohol and other Drugs.
The "Learning in Regular Classrooms" Initiative for Inclusive Education in China
Xu, Su Qiong; Cooper, Paul; Sin, Kenneth
2018-01-01
The purpose of this article is to understand the Learning in Regular Classrooms (LRC) initiative for inclusive education in China. First, the paper reviews the policy, legislation, and practice in relation to the LRC. It then goes on to explore the specific social-political context of the LRC, and compares the Chinese LRC with the Western…
Discretizing LTI Descriptor (Regular) Differential Input Systems with Consistent Initial Conditions
Directory of Open Access Journals (Sweden)
Athanasios D. Karageorgos
2010-01-01
A technique for efficiently discretizing the solution of a linear descriptor (regular) differential input system with consistent initial conditions and time-invariant coefficients (LTI) is introduced and fully discussed. Additionally, an upper bound for the discretization error ‖x̄(kT) − x̄ₖ‖ is provided. Practically speaking, we are interested in such systems, since they are inherent in many physical, economic and engineering phenomena.
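For the ordinary (non-descriptor) special case E = I, the standard exact discretization under a zero-order hold can be sketched as follows; the paper treats the more general descriptor system E x′ = A x + B u, which reduces to this when E is invertible. The matrices, sampling period, and input below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

# Zero-order-hold discretization of x'(t) = A x(t) + B u(t).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
T = 0.1  # sampling period (assumption)

# Van Loan's augmented-matrix trick gives both ZOH maps in one exponential:
# expm([[A, B], [0, 0]] * T) = [[Ad, Bd], [0, I]].
n, m = A.shape[0], B.shape[1]
M = np.zeros((n + m, n + m))
M[:n, :n], M[:n, n:] = A, B
Md = expm(M * T)
Ad, Bd = Md[:n, :n], Md[:n, n:]

# The recursion x_{k+1} = Ad x_k + Bd u_k reproduces x(kT) exactly for
# piecewise-constant input, so the discretization error vanishes there.
x = np.array([[1.0], [0.0]])
for _ in range(10):
    x = Ad @ x + Bd * 1.0   # constant input u = 1
print(x.ravel())
```

For descriptor systems the error bound ‖x̄(kT) − x̄ₖ‖ quantifies how far such a sampled recursion can drift from the continuous solution when the input is not piecewise constant.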
Homogeneity and EPR metrics for assessment of regular grids used in CW EPR powder simulations.
Crăciun, Cora
2014-08-01
CW EPR powder spectra may be approximated numerically using a spherical grid and a Voronoi tessellation-based cubature. For a given spin system, the quality of simulated EPR spectra depends on the grid type, size, and orientation in the molecular frame. In previous work, the grids used in CW EPR powder simulations have been compared mainly from a geometric perspective. However, some grids with a similar homogeneity degree generate simulated spectra of different quality. This paper evaluates the grids from an EPR perspective, by defining two metrics depending on the spin system characteristics and the grid's Voronoi tessellation. The first metric determines whether the grid points are EPR-centred in their Voronoi cells, based on the resonance magnetic field variations inside these cells. The second metric verifies whether the adjacent Voronoi cells of the tessellation are EPR-overlapping, by computing the common range of their resonance magnetic field intervals. Besides a series of well-known regular grids, the paper investigates a modified ZCW grid and a Fibonacci spherical code, which are new in the context of EPR simulations. For the investigated grids, the EPR metrics bring more information than the homogeneity quantities and are better related to the grids' EPR behaviour, for different spin system symmetries. The metrics' efficiency and limits are finally verified for grids generated from the initial ones, by using the original or magnetic field-constrained variants of the Spherical Centroidal Voronoi Tessellation method.
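A Fibonacci spherical grid of the general kind evaluated above can be generated in a few lines; the exact ZCW/Fibonacci variants used in the paper may differ in detail, so this is a generic quasi-uniform construction.

```python
import numpy as np

def fibonacci_sphere(n):
    """Return n quasi-uniform points on the unit sphere (golden-angle spiral)."""
    golden = (1 + 5**0.5) / 2
    i = np.arange(n)
    z = 1 - (2 * i + 1) / n                 # uniform in z => uniform in area
    theta = 2 * np.pi * i / golden          # golden-angle spacing in longitude
    r = np.sqrt(1 - z**2)
    return np.column_stack([r * np.cos(theta), r * np.sin(theta), z])

pts = fibonacci_sphere(1000)
print(pts.shape)  # (1000, 3)
```

Each point would then seed one Voronoi cell of the tessellation on which the EPR-centredness and EPR-overlap metrics are computed.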
EIT image regularization by a new Multi-Objective Simulated Annealing algorithm.
Castro Martins, Thiago; Sales Guerra Tsuzuki, Marcos
2015-01-01
Multi-Objective Optimization can be used to produce regularized Electrical Impedance Tomography (EIT) images where the weight of the regularization term is not known a priori. This paper proposes a novel Multi-Objective Optimization algorithm based on Simulated Annealing tailored for EIT image reconstruction. Images are reconstructed from experimental data and compared with images from other Multi and Single Objective optimization methods. A significant performance enhancement from traditional techniques can be inferred from the results.
Schnek: A C++ library for the development of parallel simulation codes on regular grids
Schmitz, Holger
2018-05-01
A large number of algorithms across the field of computational physics are formulated on grids with a regular topology. We present Schnek, a library that enables fast development of parallel simulations on regular grids. Schnek contains a number of easy-to-use modules that greatly reduce the amount of administrative code for large-scale simulation codes. The library provides an interface for reading simulation setup files with a hierarchical structure. The structure of the setup file is translated into a hierarchy of simulation modules that the developer can specify. The reader parses and evaluates mathematical expressions and initialises variables or grid data. This enables developers to write modular and flexible simulation codes with minimal effort. Regular grids of arbitrary dimension are defined as well as mechanisms for defining physical domain sizes, grid staggering, and ghost cells on these grids. Ghost cells can be exchanged between neighbouring processes using MPI with a simple interface. The grid data can easily be written into HDF5 files using serial or parallel I/O.
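The ghost-cell mechanism that Schnek automates over MPI can be illustrated without MPI at all: each subdomain stores its interior plus `g` ghost layers copied from the neighbouring subdomain's edge. The 1D periodic decomposition below is a hypothetical illustration of the pattern, not Schnek's API.

```python
import numpy as np

g = 1                                  # number of ghost layers
full = np.arange(10, dtype=float)      # global field on 10 cells
left, right = full[:5], full[5:]       # two "ranks", here just two arrays

def with_ghosts(interior, left_nbr, right_nbr, g):
    """Pad an interior region with ghost cells copied from its neighbours
    (in an MPI code these copies would be send/receive messages)."""
    padded = np.empty(interior.size + 2 * g)
    padded[g:-g] = interior
    padded[:g] = left_nbr[-g:]         # receive from left neighbour
    padded[-g:] = right_nbr[:g]        # receive from right neighbour
    return padded

# Periodic topology: each half neighbours the other on both sides.
left_p = with_ghosts(left, right, right, g)
right_p = with_ghosts(right, left, left, g)
print(left_p)   # [9. 0. 1. 2. 3. 4. 5.]
print(right_p)  # [4. 5. 6. 7. 8. 9. 0.]
```

After the exchange, a stencil update can be applied to every interior cell without any special-casing at subdomain boundaries, which is exactly the administrative code such libraries remove.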
Numerical simulation of the regularized long wave equation by He's homotopy perturbation method
Energy Technology Data Exchange (ETDEWEB)
Inc, Mustafa [Department of Mathematics, Firat University, 23119 Elazig (Turkey)], E-mail: minc@firat.edu.tr; Ugurlu, Yavuz [Department of Mathematics, Firat University, 23119 Elazig (Turkey)
2007-09-17
In this Letter, we present the homotopy perturbation method (HPM for short) for obtaining the numerical solution of the RLW equation. We obtain the exact and numerical solutions of the Regularized Long Wave (RLW) equation for a certain initial condition. The initial approximation can be freely chosen with possible unknown constants, which can be determined by imposing the boundary and initial conditions. Comparison of the results with those of other methods has led us to significant conclusions. The numerical solutions are compared with the known analytical solutions.
Atomistic Simulation of Initiation in Hexanitrostilbene
Shan, Tzu-Ray; Wixom, Ryan; Yarrington, Cole; Thompson, Aidan
2015-06-01
We report on the effect of cylindrical voids on hot spot formation, growth and chemical reaction initiation in hexanitrostilbene (HNS) crystals subjected to shock. Large-scale, reactive molecular dynamics simulations are performed using the reactive force field (ReaxFF) as implemented in the LAMMPS software. The ReaxFF force field description for HNS has been validated previously by comparing the isothermal equation of state to available diamond anvil cell (DAC) measurements and density functional theory (DFT) calculations and by comparing the primary dissociation pathway to ab initio calculations. Micron-scale molecular dynamics simulations of a supported shockwave propagating through the HNS crystal along the [010] orientation are performed with an impact velocity (or particle velocity) of 1.25 km/s, resulting in shockwave propagation at 4.0 km/s in the bulk material and a bulk shock pressure of ~11 GPa. The effect of cylindrical void sizes varying from 0.02 to 0.1 μm on hot spot formation and growth rate has been studied. Interaction between multiple voids in the HNS crystal and its effect on hot spot formation will also be addressed. Results from the micron-scale atomistic simulations are compared with hydrodynamics simulations. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE National Nuclear Security Administration under Contract DE-AC04-94AL85000.
A regularized vortex-particle mesh method for large eddy simulation
Spietz, H. J.; Walther, J. H.; Hejlesen, M. M.
2017-11-01
We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel higher-order FFT based solver for the Poisson equation. Arbitrary high order is achieved through regularization of singular Green's function solutions to the Poisson equation and recently we have derived novel high order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier Stokes equations, hence we use the method for Large Eddy Simulation by including a dynamic subfilter-scale model based on test-filters compatible with the aforementioned regularization functions. Further the subfilter-scale model uses Lagrangian averaging, which is a natural candidate in light of the Lagrangian nature of vortex particle methods. A multiresolution variation of the method is applied to simulate the benchmark problem of the flow past a square cylinder at Re = 22000 and the obtained results are compared to results from the literature.
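The backbone of such a particle-mesh method is a fast Poisson solver; a minimal periodic spectral version can be sketched as follows. The paper's solver goes further (high-order regularized free-space Green's functions, mixed open/periodic domains), so this is only the simplest building block.

```python
import numpy as np

# Minimal periodic spectral Poisson solver: solve -psi'' = omega on [0, 2*pi).
N = 128
L = 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi

omega = np.sin(3 * x)                  # vorticity source (assumption)
omega_hat = np.fft.fft(omega)

psi_hat = np.zeros_like(omega_hat)
nonzero = k != 0
psi_hat[nonzero] = omega_hat[nonzero] / k[nonzero] ** 2  # psi_hat = omega_hat / k^2
psi = np.real(np.fft.ifft(psi_hat))

# Exact solution of -psi'' = sin(3x) is sin(3x)/9.
print(np.max(np.abs(psi - np.sin(3 * x) / 9)))  # ~ machine precision
```

Regularizing the Green's function (rather than using the bare 1/k² kernel) is what lifts such a solver to arbitrary order and makes the computed fields interpretable as filtered quantities for LES.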
Large-eddy simulation of plume dispersion within regular arrays of cubic buildings
Nakayama, H.; Jurcakova, K.; Nagai, H.
2011-04-01
There is a potential problem that hazardous and flammable materials are accidentally or intentionally released within populated urban areas. For the assessment of human health hazard from toxic substances, the existence of high concentration peaks in a plume should be considered. For the safety analysis of flammable gas, certain critical threshold levels should be evaluated. Therefore, in such a situation, not only average levels but also instantaneous magnitudes of concentration should be accurately predicted. In this study, we perform Large-Eddy Simulation (LES) of plume dispersion within regular arrays of cubic buildings with large obstacle densities and investigate the influence of the building arrangement on the characteristics of mean and fluctuating concentrations.
CFD Simulations of Floating Point Absorber Wave Energy Converter Arrays Subjected to Regular Waves
Directory of Open Access Journals (Sweden)
Brecht Devolder
2018-03-01
Full Text Available In this paper we use the Computational Fluid Dynamics (CFD toolbox OpenFOAM to perform numerical simulations of multiple floating point absorber wave energy converters (WECs arranged in a geometrical array configuration inside a numerical wave tank (NWT. The two-phase Navier-Stokes fluid solver is coupled with a motion solver to simulate the hydrodynamic flow field around the WECs and the wave-induced rigid body heave motion of each WEC within the array. In this study, the numerical simulations of a single WEC unit are extended to multiple WECs and the complexity of modelling individual floating objects close to each other in an array layout is tackled. The NWT is validated for fluid-structure interaction (FSI simulations by using experimental measurements for an array of two, five and up to nine heaving WECs subjected to regular waves. The validation is achieved by using mathematical models to include frictional forces observed during the experimental tests. For all the simulations presented, a good agreement is found between the numerical and the experimental results for the WECs’ heave motions, the surge forces on the WECs and the perturbed wave field around the WECs. As a result, our coupled CFD–motion solver proves to be a suitable and accurate toolbox for the study of fluid-structure interaction problems of WEC arrays.
On the regularities of gamma-ray initiated emission of really-secondary electrons
International Nuclear Information System (INIS)
Grudskij, M.Ya.; Roldugin, N.N.; Smirnov, V.V.
1982-01-01
Emission regularities of the really-secondary electrons from metals are discussed on the basis of experimental data on electron emission characteristics under gamma irradiation, obtained for a wide range of incident quantum energies (E_γ = 0.03–2 MeV) and target atomic numbers (Z = 13–79). A comparison with published experimental and calculated data is performed. It is shown that the yield into vacuum of the really-secondary electrons from a target surface bombarded with a normally incident collimated beam of gamma radiation, per unit of energy absorbed in the yield zone of the really-secondary electrons, is determined only by the emissivity of the target material and can be calculated if the spatial-energy distributions and the number of secondary fast electrons emitted from the target are known.
Berg, Carla J; Barr, Dana Boyd; Stratton, Erin; Escoffery, Cam; Kegler, Michelle
2014-10-01
We examined 1) changes in smoking and vaping behavior and associated cotinine levels and health status among regular smokers who were first-time e-cigarette purchasers and 2) attitudes, intentions, and restrictions regarding e-cigarettes. We conducted a pilot longitudinal study with assessments of the aforementioned factors and salivary cotinine at weeks 0, 4, and 8. Eligibility criteria included being ≥18 years old, smoking ≥25 of the last 30 days, smoking ≥5 cigarettes per day (cpd), smoking regularly ≥1 year, and not having started using e-cigarettes. Of 72 individuals screened, 40 consented, 36 completed the baseline survey, and 83.3% and 72.2% were retained at weeks 4 and 8, respectively. Participants reduced cigarette consumption from baseline to weeks 4 and 8 (p's …); most believed that e-cigarettes versus regular cigarettes have fewer health risks (97.2%) and that e-cigarettes have been shown to help smokers quit (80.6%) and reduce cigarette consumption (97.2%). In addition, the majority intended to use e-cigarettes as a complete replacement for regular cigarettes (69.4%) and reported no restriction on e-cigarette use in the home (63.9%) or car (80.6%). Future research is needed to document the long-term impact on smoking behavior and health among cigarette smokers who initiate use of e-cigarettes.
Large-Eddy Simulation on Plume Dispersion within Regular Arrays of Cubic Buildings
Nakayama, H.; Jurcakova, K.; Nagai, H.
2010-09-01
There is a potential problem that hazardous and flammable materials are accidentally or intentionally released into the atmosphere, either within or close to populated urban areas. For the assessment of human health hazard from toxic substances, the existence of high concentration peaks in a plume should be considered. For the safety analysis of flammable gas, certain critical threshold levels should be evaluated. Therefore, in such a situation, not only average levels but also instantaneous magnitudes of concentration should be accurately predicted. However, plume dispersion is an extremely complicated process strongly influenced by the existence of buildings. In complex turbulent flows, such as impinging, separated and circulation flows around buildings, plume behaviors can no longer be accurately predicted using an empirical Gaussian-type plume model. Therefore, we perform Large-Eddy Simulations (LES) of turbulent flows and plume dispersion within and over regular arrays of cubic buildings with various roughness densities and investigate the influence of the building arrangement pattern on the characteristics of mean and fluctuating concentrations. The basic equations for the LES model are composed of the spatially filtered continuity equation, Navier-Stokes equation and transport equation of concentration. The standard Smagorinsky model (Smagorinsky, 1963), which has enough potential for environmental flows, is used and its constant is set to 0.12 for estimating the eddy viscosity. The turbulent Schmidt number is 0.5. In our LES model, two computational regions are set up. One is a driver region for generation of inflow turbulence and the other is a main region for LES of plume dispersion within a regular array of cubic buildings. First, inflow turbulence is generated by using Kataoka's method (2002) in the driver region and then its data are imposed at the inlet of the main computational region at each time step. In this study, the cubic building arrays with λf = 0…
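The Smagorinsky closure named above has a compact algebraic form, ν_t = (C_s Δ)² |S̄| with |S̄| = √(2 S̄ᵢⱼS̄ᵢⱼ); the sketch below evaluates it on a random 2D velocity field using the paper's constant C_s = 0.12. The grid spacing and field are illustrative assumptions.

```python
import numpy as np

# Smagorinsky eddy viscosity nu_t = (Cs * Delta)^2 * |S| on a 2D field.
Cs, dx = 0.12, 0.05     # Cs from the paper; dx is an assumption
N = 64
rng = np.random.default_rng(1)
u = rng.normal(size=(N, N))
v = rng.normal(size=(N, N))

dudy, dudx = np.gradient(u, dx)   # derivatives along axis 0 (y) and axis 1 (x)
dvdy, dvdx = np.gradient(v, dx)

S11, S22 = dudx, dvdy
S12 = 0.5 * (dudy + dvdx)
Smag = np.sqrt(2 * (S11**2 + S22**2 + 2 * S12**2))  # |S| = sqrt(2 S_ij S_ij)
nu_t = (Cs * dx) ** 2 * Smag
print(nu_t.mean())
```

In the scalar transport equation the same ν_t divided by the turbulent Schmidt number (0.5 in the paper) would supply the subgrid diffusivity for concentration.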
Initial conditions for turbulent mixing simulations
Directory of Open Access Journals (Sweden)
T. Kaman
2010-01-01
In the context of the classical Rayleigh-Taylor hydrodynamical instability, we examine the much debated question of models for initial conditions and the possible influence of unrecorded long-wavelength contributions to the instability growth rate α.
International Nuclear Information System (INIS)
Zhong Jian; Huang Si-Xun; Du Hua-Dong; Zhang Liang
2011-01-01
The scatterometer is an instrument that provides all-day, large-scale wind field information, and its application, especially to wind retrieval, has long attracted meteorologists. Because several factors can cause large direction errors, it is important to identify where the error mainly comes from: the background field, the normalized radar cross-section (NRCS), or the wind retrieval method itself. First, based on SDP2.0, the simulated 'true' NRCS is calculated from the simulated 'true' wind through the geophysical model function NSCAT2. The simulated background field is configured by adding noise to the simulated 'true' wind under a non-divergence constraint, and the simulated 'measured' NRCS is formed by adding noise to the simulated 'true' NRCS. Sensitivity experiments are then carried out, and a new regularization method is used to improve the ambiguity removal in simulation experiments. The results show that the accuracy of wind retrieval is more sensitive to noise in the background than in the measured NRCS; compared with the two-dimensional variational (2DVAR) ambiguity removal method, the accuracy of wind retrieval can be improved with the new Tikhonov regularization method by choosing an appropriate regularization parameter, especially when the background error is large. This work provides important information and a new method for wind retrieval with real data.
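The Tikhonov step underlying such a retrieval can be sketched generically: minimize a data misfit plus a penalty tying the solution to the background, ‖Ax − b‖² + λ‖x − x_b‖². Everything below (the linear forward operator, noise levels, λ) is an illustrative assumption; the actual retrieval uses a nonlinear geophysical model function.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear retrieval: observations b = A x + noise, plus a noisy background x_b.
A = rng.normal(size=(30, 10))
x_true = rng.normal(size=10)
b = A @ x_true + 0.1 * rng.normal(size=30)
x_b = x_true + 0.5 * rng.normal(size=10)   # background field (assumption)

# Tikhonov solution of min ||A x - b||^2 + lam * ||x - x_b||^2:
# (A^T A + lam I) x = A^T b + lam x_b
lam = 1.0   # regularization parameter (the paper tunes this choice)
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ b + lam * x_b)

print(np.linalg.norm(x_hat - x_true))
```

The choice of λ controls how strongly the retrieval is pulled toward the background, which is why the paper emphasizes selecting it appropriately when the background error is large.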
Montessori, A; Falcucci, G; Prestininzi, P; La Rocca, M; Succi, S
2014-05-01
We investigate the accuracy and performance of the regularized version of the single-relaxation-time lattice Boltzmann equation for the case of two- and three-dimensional lid-driven cavities. The regularized version is shown to provide a significant gain in stability over the standard single-relaxation-time scheme, at a moderate computational overhead.
B. Chen (Bohan); J. Blanchet; C.H. Rhee (Chang-Han); A.P. Zwart (Bert)
2017-01-01
We propose a class of strongly efficient rare event simulation estimators for random walks and compound Poisson processes with a regularly varying increment/jump-size distribution in a general large deviations regime. Our estimator is based on an importance sampling strategy that hinges…
A Coordinated Initialization Process for the Distributed Space Exploration Simulation
Crues, Edwin Z.; Phillips, Robert G.; Dexter, Dan; Hasan, David
2007-01-01
A viewgraph presentation on the federate initialization process for the Distributed Space Exploration Simulation (DSES) is described. The topics include: 1) Background: DSES; 2) Simulation requirements; 3) Nine Step Initialization; 4) Step 1: Create the Federation; 5) Step 2: Publish and Subscribe; 6) Step 3: Create Object Instances; 7) Step 4: Confirm All Federates Have Joined; 8) Step 5: Achieve initialize Synchronization Point; 9) Step 6: Update Object Instances With Initial Data; 10) Step 7: Wait for Object Reflections; 11) Step 8: Set Up Time Management; 12) Step 9: Achieve startup Synchronization Point; and 13) Conclusions
Should tsunami simulations include a nonzero initial horizontal velocity?
Lotto, Gabriel C.; Nava, Gabriel; Dunham, Eric M.
2017-08-01
Tsunami propagation in the open ocean is most commonly modeled by solving the shallow water wave equations. These equations require initial conditions on sea surface height and depth-averaged horizontal particle velocity or, equivalently, horizontal momentum. While most modelers assume that initial velocity is zero, Y.T. Song and collaborators have argued for nonzero initial velocity, claiming that horizontal displacement of a sloping seafloor imparts significant horizontal momentum to the ocean. They show examples in which this effect increases the resulting tsunami height by a factor of two or more relative to models in which initial velocity is zero. We test this claim with a "full-physics" integrated dynamic rupture and tsunami model that couples the elastic response of the Earth to the linearized acoustic-gravitational response of a compressible ocean with gravity; the model self-consistently accounts for seismic waves in the solid Earth, acoustic waves in the ocean, and tsunamis (with dispersion at short wavelengths). Full-physics simulations of subduction zone megathrust ruptures and tsunamis in geometries with a sloping seafloor confirm that substantial horizontal momentum is imparted to the ocean. However, almost all of that initial momentum is carried away by ocean acoustic waves, with negligible momentum imparted to the tsunami. We also compare tsunami propagation in each simulation to that predicted by an equivalent shallow water wave simulation with varying assumptions regarding initial velocity. We find that the initial horizontal velocity conditions proposed by Song and collaborators consistently overestimate the tsunami amplitude and predict an inconsistent wave profile. Finally, we determine tsunami initial conditions that are rigorously consistent with our full-physics simulations by isolating the tsunami waves from ocean acoustic and seismic waves at some final time, and backpropagating the tsunami waves to their initial state by solving the…
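Why initial velocity matters is visible already in the 1D linear shallow water equations η_t + q_x = 0, q_t + gH η_x = 0 (q = Hu): the right-going wave is (η + q/c)/2 with c = √(gH), so initial momentum directly changes how an initial hump splits into left- and right-going tsunamis. The sketch below is a linear 1D illustration of that partition, not the paper's full-physics model; the depth, hump shape, and colocated-momentum assumption are all illustrative.

```python
import numpy as np

g, H = 9.81, 4000.0
c = np.sqrt(g * H)                     # long-wave speed, ~198 m/s
x = np.linspace(-200e3, 200e3, 2001)
eta0 = np.exp(-(x / 20e3) ** 2)        # initial 1 m sea-surface hump

for u0 in (0.0, 0.1):                  # zero vs nonzero initial velocity (m/s)
    q0 = H * u0 * eta0                 # momentum colocated with the hump (assumption)
    right = 0.5 * (eta0 + q0 / c)      # amplitude carried to the right
    print(u0, right.max())
```

Even a 0.1 m/s initial velocity changes the right-going amplitude substantially in this toy setting, which is why it matters whether that momentum actually stays in the tsunami or, as the paper finds, is carried off by acoustic waves.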
Directory of Open Access Journals (Sweden)
Tushar Kanti Bera
2011-06-01
A Block Matrix based Multiple Regularization (BMMR) technique is proposed for improving conductivity image quality in EIT. The response matrix (JᵀJ) is partitioned into several sub-block matrices, and the highest eigenvalue of each sub-block matrix is chosen as the regularization parameter for the nodes contained in that sub-block. Simulated boundary data are generated for a circular domain with a circular inhomogeneity, and the conductivity images are reconstructed with a Model Based Iterative Image Reconstruction (MoBIIR) algorithm. Conductivity images reconstructed with the BMMR technique are compared with the Single-step Tikhonov Regularization (STR) and modified Levenberg-Marquardt Regularization (LMR) methods. It is observed that the BMMR technique reduces the projection error and solution error and improves the conductivity reconstruction in EIT. Results show that the BMMR method also improves the image contrast and the inhomogeneity conductivity profile, and hence the reconstructed image quality is enhanced. doi:10.5617/jeb.170, J Electr Bioimp, vol. 2, pp. 33-47, 2011.
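The block-wise parameter choice described above can be sketched directly: partition the diagonal of JᵀJ into sub-blocks and regularize each block of unknowns with the largest eigenvalue of its own sub-block. The Jacobian, data, and block size below are toy assumptions standing in for the EIT forward model.

```python
import numpy as np

rng = np.random.default_rng(3)
J = rng.normal(size=(40, 12))          # toy Jacobian (stand-in for EIT sensitivity)
b = rng.normal(size=40)                # toy boundary-data residual
JtJ = J.T @ J

# BMMR-style parameters: one lambda per sub-block of nodes, equal to the
# largest eigenvalue of the corresponding diagonal sub-block of J^T J.
block = 4                              # nodes per sub-block (assumption)
lam = np.empty(12)
for start in range(0, 12, block):
    sub = JtJ[start:start + block, start:start + block]
    lam[start:start + block] = np.linalg.eigvalsh(sub).max()

# Regularized Gauss-Newton-type update: (J^T J + diag(lam)) dx = J^T b
dx = np.linalg.solve(JtJ + np.diag(lam), J.T @ b)
print(dx)
```

Compared with single-parameter Tikhonov (a scalar λ times the identity), the per-block λ adapts the damping to the local sensitivity of each group of nodes.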
Analysing initial attack on wildland fires using stochastic simulation.
Jeremy S. Fried; J. Keith Gilless; James. Spero
2006-01-01
Stochastic simulation models of initial attack on wildland fire can be designed to reflect the complexity of the environmental, administrative, and institutional context in which wildland fire protection agencies operate, but such complexity may come at the cost of a considerable investment in data acquisition and management. This cost may be well justified when it...
Relativistic initial conditions for N-body simulations
Energy Technology Data Exchange (ETDEWEB)
Fidler, Christian [Catholic University of Louvain—Center for Cosmology, Particle Physics and Phenomenology (CP3) 2, Chemin du Cyclotron, B-1348 Louvain-la-Neuve (Belgium); Tram, Thomas; Crittenden, Robert; Koyama, Kazuya; Wands, David [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth PO1 3FX (United Kingdom); Rampf, Cornelius, E-mail: christian.fidler@uclouvain.be, E-mail: thomas.tram@port.ac.uk, E-mail: rampf@thphys.uni-heidelberg.de, E-mail: robert.crittenden@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk, E-mail: david.wands@port.ac.uk [Institut für Theoretische Physik, Universität Heidelberg, Philosophenweg 16, D–69120 Heidelberg (Germany)
2017-06-01
Initial conditions for (Newtonian) cosmological N-body simulations are usually set by re-scaling the present-day power spectrum obtained from linear (relativistic) Boltzmann codes to the desired initial redshift of the simulation. This back-scaling method can account for the effect of inhomogeneous residual thermal radiation at early times, which is absent in the Newtonian simulations. We analyse this procedure from a fully relativistic perspective, employing the recently-proposed Newtonian motion gauge framework. We find that N-body simulations for ΛCDM cosmology starting from back-scaled initial conditions can be self-consistently embedded in a relativistic space-time with first-order metric potentials calculated using a linear Boltzmann code. This space-time coincides with a simple "N-body gauge" for z < 50 for all observable modes. Care must be taken, however, when simulating non-standard cosmologies. As an example, we analyse the back-scaling method in a cosmology with decaying dark matter, and show that metric perturbations become large at early times in the back-scaling approach, indicating a breakdown of the perturbative description. We suggest a suitable "forwards approach" for such cases.
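The back-scaling step itself is simple to state: rescale the present-day linear power spectrum to the starting redshift with the Newtonian growth factor, P(k, z_ini) = P(k, 0) · (D(z_ini)/D(0))². The sketch below uses the matter-dominated limit D(a) ∝ a and a toy spectrum shape; a real pipeline would take P(k, 0) from a Boltzmann code and a proper growth factor.

```python
import numpy as np

k = np.logspace(-3, 1, 50)             # wavenumbers, h/Mpc
P0 = 1e4 * k / (1 + (k / 0.02) ** 3)   # toy present-day spectrum (assumption)

z_ini = 49.0
D_ratio = 1.0 / (1.0 + z_ini)          # D(z_ini)/D(0) in matter domination
P_ini = P0 * D_ratio ** 2              # back-scaled initial spectrum

print(P_ini[0] / P0[0])                # (1/50)^2 = 4e-4
```

The paper's point is precisely about when this scalar rescaling is self-consistent relativistically: for ΛCDM it embeds in an "N-body gauge" space-time, but for non-standard cosmologies (e.g. decaying dark matter) the implied early-time metric perturbations can become large.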
Kauffman, James M.
Proposals for restructuring and integration of special and general education, known as the regular education initiative (REI), represent a revolution in the basic concepts related to the education of handicapped students that have provided the foundation of special education for over a century. Education policy, as presented by Presidents Reagan…
Accurate initial conditions in mixed Dark Matter--Baryon simulations
Valkenburg, Wessel
2017-06-01
We quantify the error in the results of mixed baryon--dark-matter hydrodynamic simulations, stemming from outdated approximations for the generation of initial conditions. The error at redshift 0 in contemporary large simulations is of the order of a few to ten percent in the power spectra of baryons and dark matter, and in their combined total-matter power spectrum. After describing how to properly assign initial displacements and peculiar velocities to multiple species, we review several approximations: (1) using the total-matter power spectrum to compute displacements and peculiar velocities of both fluids, (2) scaling the linear redshift-zero power spectrum back to the initial power spectrum using the Newtonian growth factor, ignoring homogeneous radiation, (3) using longitudinal-gauge velocities with synchronous-gauge densities, and (4) ignoring the phase difference in the Fourier modes for the offset baryon grid, relative to the dark-matter grid. Three of these approximations do not take into account that ...
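Approximation (1) above contrasts with the proper treatment, in which a single white-noise realization is multiplied by each species' own transfer function, so that baryons and dark matter share Fourier phases but keep their distinct amplitudes. A schematic sketch (the function name and flat array layout are assumptions for illustration):

```python
import numpy as np

def species_overdensities(noise_k, t_cdm, t_baryon):
    """Per-species initial overdensities: ONE white-noise realization
    multiplied by each species' own transfer function on the same
    k-grid.  Approximation (1) in the text would instead apply the
    total-matter transfer function to both fluids."""
    delta_c = t_cdm * noise_k       # CDM Fourier modes
    delta_b = t_baryon * noise_k    # baryon modes share the phases
    return delta_c, delta_b
```

Because both fields multiply the same noise, their ratio is real (no phase offset); approximation (4) in the text concerns the additional phase shift induced by spatially offsetting the baryon particle grid.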
Nararidh, Niti
2013-11-01
Choanoflagellates are unicellular organisms whose intriguing morphology includes a set of collars/microvilli emanating from the cell body, surrounding the beating flagellum. We investigated the role of the microvilli in the feeding and swimming behavior of the organism using a three-dimensional model based on the method of regularized Stokeslets. This model allows us to examine the velocity generated around the feeding organism tethered in place, as well as to predict the paths of surrounding free flowing particles. In particular, we can depict the effective capture of nutritional particles and bacteria in the fluid, showing the hydrodynamic cooperation between the cell, flagellum, and microvilli of the organism. Funding Source: Murchison Undergraduate Research Fellowship.
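The method of regularized Stokeslets replaces a singular point force with a force smoothed over a small blob of radius ε, so the induced Stokes velocity stays finite everywhere, including at the force location. A minimal sketch of the 3D regularized Stokeslet for one common blob choice (Cortez's formulation; the blob choice and parameter names are assumptions, not the authors' exact implementation):

```python
import numpy as np

def regularized_stokeslet(x, x0, f, eps, mu=1.0):
    """Velocity at point x due to a regularized point force f at x0,
    with blob parameter eps and fluid viscosity mu.  For one standard
    blob, u = [f (r^2 + 2 eps^2) + (f . r) r] / (8 pi mu (r^2 + eps^2)^{3/2}),
    where r = x - x0."""
    r = x - x0
    r2 = np.dot(r, r)
    denom = 8.0 * np.pi * mu * (r2 + eps**2) ** 1.5
    return (f * (r2 + 2.0 * eps**2) + r * np.dot(f, r)) / denom
```

Summing such contributions over forces distributed along the flagellum and microvilli gives the flow field around the tethered cell, and integrating particle paths in that field predicts which nutritional particles are captured.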
DEFF Research Database (Denmark)
Hejlesen, Mads Mølholm; Spietz, Henrik J.; Walther, Jens Honore
2014-01-01
An unbounded, particle-mesh-based vortex method is used to simulate the instability, transition to turbulence and eventual destruction of a single vortex ring. From the simulation data, a novel method for analyzing the dynamics of the enstrophy is presented, based on the alignment of the vorticity vector with the principal axes of the strain-rate tensor. We find that the dynamics of the enstrophy density is dominated by the local flow deformation and axis of rotation, which is used to infer some concrete tendencies related to the topology of the vorticity field.
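The alignment analysis described above rests on the identity ω·S·ω = Σᵢ λᵢ (ω·eᵢ)², which splits the enstrophy production into contributions from each principal axis of the strain-rate tensor. A sketch of that decomposition for a single velocity-gradient tensor (a generic illustration, not the authors' post-processing code):

```python
import numpy as np

def enstrophy_production(grad_u):
    """Given the 3x3 velocity-gradient tensor (grad_u[i, j] = du_i/dx_j),
    return the enstrophy production omega . S . omega, the principal
    strain rates, and the alignment cosines of the vorticity with the
    principal axes of the strain-rate tensor S."""
    S = 0.5 * (grad_u + grad_u.T)                  # strain-rate tensor
    omega = np.array([grad_u[2, 1] - grad_u[1, 2],
                      grad_u[0, 2] - grad_u[2, 0],
                      grad_u[1, 0] - grad_u[0, 1]])  # vorticity vector
    lam, e = np.linalg.eigh(S)                     # ascending eigenvalues
    w = np.linalg.norm(omega)
    cosines = (e.T @ omega) / w if w > 0 else np.zeros(3)
    production = omega @ S @ omega                 # = sum_i lam_i (omega . e_i)^2
    return production, lam, cosines
```

For pure shear, for example, the vorticity aligns with the zero-strain principal axis and the production vanishes, which is the kind of local statement the alignment statistics quantify over the whole vorticity field.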
Directory of Open Access Journals (Sweden)
Tushar Kanti Bera
2011-03-01
A Projection Error Propagation-based Regularization (PEPR) method is proposed to improve reconstructed image quality in Electrical Impedance Tomography (EIT). A projection error is produced by the misfit between the calculated and measured data in the reconstruction process. The variation of the projection error is integrated with the response matrix in each iteration, and the reconstruction is carried out in EIDORS. The PEPR method is studied with simulated boundary data for different inhomogeneity geometries. Simulated results demonstrate that the PEPR technique improves image reconstruction precision in EIDORS and hence can be successfully implemented to increase reconstruction accuracy in EIT. doi:10.5617/jeb.158, J Electr Bioimp, vol. 2, pp. 2-12, 2011
Kangas, Julie L.; Baldwin, Austin S.; Rosenfield, David; Smits, Jasper A. J.; Rethorst, Chad D.
2016-01-01
Objective People with depressive symptoms typically report lower levels of exercise self-efficacy and are more likely to discontinue regular exercise than others, but it is unclear how depressive symptoms affect people's exercise self-efficacy. Among potential sources of self-efficacy, engaging in the relevant behavior is the strongest (Bandura, 1997). Thus, we sought to clarify how depressive symptoms affect the same-day relation between engaging in exercise and self-efficacy during the initiation of regular exercise. Methods Participants (N=116) were physically inactive adults (35% reported clinically significant depressive symptoms at baseline) who initiated regular exercise and completed daily assessments of exercise minutes and self-efficacy for four weeks. We tested whether (a) self-efficacy differed on days when exercise did and did not occur, and (b) the difference was moderated by depressive symptoms. Mixed linear models were used to examine these relations. Results A significant interaction between exercise occurrence and depressive symptoms emerged: self-efficacy was lower on days when no exercise occurred, but this difference was significantly larger for people with high depressive symptoms. People with high depressive symptoms had lower self-efficacy than those with low depressive symptoms on days when no exercise occurred (p=.03), but self-efficacy did not differ on days when exercise occurred (p=.34). Conclusions During the critical period of initiating regular exercise, daily self-efficacy for people with high depressive symptoms is more sensitive to whether they exercised than for people with low depressive symptoms. This may partially explain why people with depression tend to have difficulty maintaining regular exercise. PMID:25110850
Initial experience with AcQsim CT simulator
International Nuclear Information System (INIS)
Michalski, Jeff M.; Gerber, Russell; Bosch, Walter R.; Harms, William; Matthews, John W.; Purdy, James A.; Perez, Carlos A.
1995-01-01
Purpose: We recently replaced our university-developed CT simulator prototype with a commercial-grade spiral CT simulator (Picker AcQsim) that is networked with three independent virtual simulation workstations and the multiple workstations of our 3D radiation therapy planning (3D-RTP) system. This presentation will report our initial experience with this CT simulation device, define criteria for optimum clinical use, and describe some potential drawbacks of the current system. Methods and Materials: Over a 10 month period, 210 patients underwent CT simulation using the AcQsim. An additional 127 patients had a volumetric CT scan done on the device with their CT data and target and normal tissue contours ultimately transferred to our 3D-RTP system. We currently perform the initial patient localization and immobilization in the CT simulation suite by using CT topograms and a fiducial laser marking system. Immobilization devices, required for all patients undergoing CT simulation, are constructed and registered to a device that defines the treatment table coordinates. Orthogonal anterior and lateral CT topograms document patient alignment and the position of a reference coordinate center. The volumetric CT scan, with appropriate CT contrast materials administered, is obtained while the patient is in the immobilization device. On average, more than 100 CT slices are obtained per study. Contours defining tumor, target, and normal tissues are drawn on a slice-by-slice basis. The isocenter can be automatically defined within the target volume and marked on the patient and immobilization device before leaving the initial CT simulation session. Virtual simulation is then performed on the patient data set with the assistance of predefined target volumes and normal tissue contours displayed on rapidly computed digitally reconstructed radiographs (DRRs) in a manner similar to a conventional fluoroscopic radiotherapy simulator. Lastly, a verification simulation is
Berg, Carla J.; Barr, Dana Boyd; Stratton, Erin; Escoffery, Cam; Kegler, Michelle
2014-01-01
Objectives We examined 1) changes in smoking and vaping behavior and associated cotinine levels and health status among regular smokers who were first-time e-cigarette purchasers and 2) attitudes, intentions, and restrictions regarding e-cigarettes. Methods We conducted a pilot longitudinal study with assessments of the aforementioned factors and salivary cotinine at weeks 0, 4, and 8. Eligibility criteria included being ≥18 years old, smoking on ≥25 of the last 30 days, smoking ≥5 cigarettes pe...
Directory of Open Access Journals (Sweden)
Nabanita Basu
2016-09-01
The dataset developed consists of 108 blood drip stains created with fresh porcine blood and with blood admixed with different dosages of Warfarin and Heparin, respectively. For each blood type (i.e. fresh blood, blood admixed with Warfarin at different dosages, and blood admixed with Heparin at varied dosages), stain patterns were created by passive dripping of blood from a 2.5 cm³ subcutaneous syringe with needle filled to capacity, at 30°, 60° and 90° angles of impact with corresponding fall heights of 20, 40 and 60 cm respectively. In the other dataset of 162 datapoints, 81 regular drip stains were formed from blood that had dripped passively from a subcutaneous syringe without needle at the aforementioned angles of impact and fall heights, while the other stains were formed by dripping of blood from a subcutaneous syringe with needle. To allow comparison, all stains were recorded on the same representative, non-porous, smooth target surface under similar physical conditions. The interpretations relevant to the dataset are available in the article titled '2D Source Area prediction based on physical characteristics of a regular, passive blood drip stain' (Basu and Bandyopadhyay, 2016) [7]. An image pre-processing algorithm for extracting the ROI has also been incorporated in this article. Keywords: Drip stain, Bloodstain Pattern Analysis, Source Dimension prediction
An Initial Examination for Verifying Separation Algorithms by Simulation
White, Allan L.; Neogi, Natasha; Herencia-Zapana, Heber
2012-01-01
An open question in algorithms for aircraft is what can be validated by simulation where the simulation shows that the probability of undesirable events is below some given level at some confidence level. The problem is including enough realism to be convincing while retaining enough efficiency to run the large number of trials needed for high confidence. The paper first proposes a goal based on the number of flights per year in several regions. The paper examines the probabilistic interpretation of this goal and computes the number of trials needed to establish it at an equivalent confidence level. Since any simulation is likely to consider the algorithms for only one type of event and there are several types of events, the paper examines under what conditions this separate consideration is valid. This paper is an initial effort, and as such, it considers separation maneuvers, which are elementary but include numerous aspects of aircraft behavior. The scenario includes decisions under uncertainty since the position of each aircraft is only known to the other by broadcasting where GPS believes each aircraft to be (ADS-B). Each aircraft operates under feedback control with perturbations. It is shown that a scenario three or four orders of magnitude more complex is feasible. The question of what can be validated by simulation remains open, but there is reason to be optimistic.
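The trial-count computation the paper refers to follows from the classical zero-failure binomial bound: observing n independent trials with no failure supports the claim that the per-trial failure probability is below p at confidence 1 − α once (1 − p)ⁿ ≤ α, i.e. n ≥ ln α / ln(1 − p). A sketch of this standard textbook bound (not necessarily the paper's exact formula):

```python
import math

def trials_needed(p_bound, confidence):
    """Smallest number of failure-free Monte Carlo trials needed to
    claim the per-trial failure probability is below p_bound at the
    given confidence level: solve (1 - p)^n <= 1 - confidence."""
    alpha = 1.0 - confidence
    return math.ceil(math.log(alpha) / math.log(1.0 - p_bound))
```

For example, demonstrating a failure probability below 10⁻⁴ at 95% confidence requires about 3 × 10⁴ failure-free simulated encounters, which is why simulation efficiency dominates the feasibility question raised in the abstract.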
Suppression of the initial transient in Monte Carlo criticality simulations
International Nuclear Information System (INIS)
Richet, Y.
2006-12-01
Criticality Monte Carlo calculations aim at estimating the effective multiplication factor (k-effective) for a fissile system through iterations simulating neutron propagation (forming a Markov chain). Arbitrary initialization of the neutron population can deeply bias the k-effective estimation, defined as the mean of the k-effective values computed at each iteration. A simplified model of this cycle k-effective sequence is built, based on characteristics of industrial criticality Monte Carlo calculations. Statistical tests, inspired by Brownian bridge properties, are designed to test for stationarity of the cycle k-effective sequence. The detected initial transient is then suppressed in order to improve the estimation of the system k-effective. The different versions of this methodology are detailed and compared, first on a set of numerical tests fitted to criticality Monte Carlo calculations, and then on real criticality calculations. Eventually, the best methodologies observed in these tests are selected and make it possible to improve industrial Monte Carlo criticality calculations. (author)
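A Brownian-bridge-style stationarity check of the kind described can be sketched as follows: under stationarity, the cumulative sum of the demeaned cycle k-effective sequence behaves approximately like a Brownian bridge, so its scaled maximum can be compared against a Kolmogorov-Smirnov-like threshold, and leading cycles are discarded until the test passes. This is a simplified illustration under an iid assumption, not the author's exact test:

```python
import numpy as np

def bridge_statistic(keff):
    """Brownian-bridge-style statistic for a cycle k-effective sequence:
    max |cumsum of demeaned values| / (sigma * sqrt(n)).  Large values
    suggest a residual initial transient."""
    x = np.asarray(keff, dtype=float)
    n = len(x)
    s = np.cumsum(x - x.mean())        # demeaning makes this a bridge (ends at 0)
    sigma = x.std(ddof=1)
    return np.max(np.abs(s)) / (sigma * np.sqrt(n))

def truncate_transient(keff, threshold=1.36, step=10):
    """Discard leading cycles until the bridge statistic falls below a
    KS-like 5% threshold (1.36, valid only under iid assumptions).
    Returns the number of cycles to discard."""
    start = 0
    while start < len(keff) - step and bridge_statistic(keff[start:]) > threshold:
        start += step
    return start
```

Real cycle k-effective sequences are autocorrelated, so the iid threshold is optimistic; the thesis's contribution is precisely in calibrating such tests against realistic Monte Carlo behavior.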
Multi-Scale Initial Conditions For Cosmological Simulations
Energy Technology Data Exchange (ETDEWEB)
Hahn, Oliver; /KIPAC, Menlo Park; Abel, Tom; /KIPAC, Menlo Park /ZAH, Heidelberg /HITS, Heidelberg
2011-11-04
We discuss a new algorithm to generate multi-scale initial conditions with multiple levels of refinements for cosmological 'zoom-in' simulations. The method uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel together with an adaptive multi-grid Poisson solver to generate displacements and velocities following first- (1LPT) or second-order Lagrangian perturbation theory (2LPT). The new algorithm achieves rms relative errors of the order of 10^-4 for displacements and velocities in the refinement region and thus improves in terms of errors by about two orders of magnitude over previous approaches. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier-space-induced interference ringing. An optional hybrid multi-grid and Fast Fourier Transform (FFT) based scheme is introduced which has identical Fourier-space behaviour as traditional approaches. Using a suite of re-simulations of a galaxy cluster halo our real-space-based approach is found to reproduce correlation functions, density profiles, key halo properties and subhalo abundances with per cent level accuracy. Finally, we generalize our approach for two-component baryon and dark-matter simulations and demonstrate that the power spectrum evolution is in excellent agreement with linear perturbation theory. For initial baryon density fields, it is suggested to use the local Lagrangian approximation in order to generate a density field for mesh-based codes that is consistent with the Lagrangian perturbation theory instead of the current practice of using the Eulerian linearly scaled densities.
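The 1LPT (Zel'dovich) displacement field the method generates satisfies ∇·ψ = −δ at linear order, which in Fourier space gives ψ_k = i k δ_k / k². A single-grid periodic FFT sketch of this step (the paper's multi-grid real-space approach exists precisely to avoid the Fourier-space artifacts of this kind of solver at coarse-fine boundaries):

```python
import numpy as np

def zeldovich_displacements(delta, boxsize):
    """1LPT (Zel'dovich) displacements from a linear overdensity grid
    via FFT: psi_k = i k delta_k / k^2, so that div(psi) = -delta.
    Periodic single-grid sketch; returns [psi_x, psi_y, psi_z]."""
    n = delta.shape[0]
    k1 = 2.0 * np.pi * np.fft.fftfreq(n, d=boxsize / n)   # angular wavenumbers
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                      # avoid division by zero at the DC mode
    dk = np.fft.fftn(delta)
    psi = []
    for ki in (kx, ky, kz):
        pk = 1j * ki / k2 * dk
        pk[0, 0, 0] = 0.0                  # no mean displacement
        psi.append(np.real(np.fft.ifftn(pk)))
    return psi
```

For a single plane wave δ = A cos(k₀x) this returns ψ_x = −(A/k₀) sin(k₀x), the analytic Zel'dovich solution; 2LPT adds a second-order correction sourced by products of second derivatives of the 1LPT potential.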
Annual Report: Carbon Capture Simulation Initiative (CCSI) (30 September 2012)
Energy Technology Data Exchange (ETDEWEB)
Miller, David C. [National Energy Technology Lab. (NETL), Morgantown, WV (United States); Syamlal, Madhava [National Energy Technology Lab. (NETL), Morgantown, WV (United States); Cottrell, Roger [URS Corporation. (URS), San Francisco, CA (United States); National Energy Technology Lab. (NETL), Morgantown, WV (United States); Kress, Joel D. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sun, Xin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sundaresan, S. [Princeton Univ., NJ (United States); Sahinidis, Nikolaos V. [Carnegie Mellon Univ., Pittsburgh, PA (United States); National Energy Technology Lab. (NETL), Morgantown, WV (United States); Zitney, Stephen E. [NETL; Bhattacharyya, D. [West Virginia Univ., Morgantown, WV (United States); National Energy Technology Lab. (NETL), Morgantown, WV (United States); Agarwal, Deb [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Tong, Charles [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Lin, Guang [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Dale, Crystal [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Engel, Dave [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Calafiura, Paolo [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Beattie, Keith [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Shinn, John [SynPatEco. Pleasant Hill, CA (United States)
2012-09-30
The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and academic institutions that is developing and deploying state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technologies from discovery to development, demonstration, and ultimately the widespread deployment to hundreds of power plants. The CCSI Toolset will provide end users in industry with a comprehensive, integrated suite of scientifically validated models, with uncertainty quantification (UQ), optimization, risk analysis and decision making capabilities. The CCSI Toolset incorporates commercial and open-source software currently in use by industry and is also developing new software tools as necessary to fill technology gaps identified during execution of the project. Ultimately, the CCSI Toolset will (1) enable promising concepts to be more quickly identified through rapid computational screening of devices and processes; (2) reduce the time to design and troubleshoot new devices and processes; (3) quantify the technical risk in taking technology from laboratory-scale to commercial-scale; and (4) stabilize deployment costs more quickly by replacing some of the physical operational tests with virtual power plant simulations. CCSI is organized into 8 technical elements that fall under two focus areas. The first focus area (Physicochemical Models and Data) addresses the steps necessary to model and simulate the various technologies and processes needed to bring a new Carbon Capture and Storage (CCS) technology into production. The second focus area (Analysis & Software) is developing the software infrastructure to integrate the various components and implement the tools that are needed to make quantifiable decisions regarding the viability of new CCS technologies. CCSI also has an Industry Advisory Board (IAB). By working closely with industry from the inception of the project to identify
Annual Report: Carbon Capture Simulation Initiative (CCSI) (30 September 2013)
Energy Technology Data Exchange (ETDEWEB)
Miller, David C. [National Energy Technology Lab. (NETL), Pittsburgh, PA, (United States); Syamlal, Madhava [National Energy Technology Lab. (NETL), Pittsburgh, PA, (United States); Cottrell, Roger [URS Corporation. (URS), San Francisco, CA (United States); National Energy Technology Lab. (NETL), Morgantown, WV (United States); Kress, Joel D. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sundaresan, S. [Princeton Univ., NJ (United States); Sun, Xin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Storlie, C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bhattacharyya, D. [West Virginia Univ., Morgantown, WV (United States); National Energy Technology Lab. (NETL), Morgantown, WV (United States); Tong, Charles [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Zitney, Stephen E [National Energy Technology Lab. (NETL), Pittsburgh, PA, (United States); Dale, Crystal [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Engel, Dave [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Agarwal, Deb [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Calafiura, Paolo [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Shinn, John [SynPatEco, Pleasant Hill, CA (United States)
2013-09-30
The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and academic institutions that is developing and deploying state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technologies from discovery to development, demonstration, and ultimately the widespread deployment to hundreds of power plants. The CCSI Toolset will provide end users in industry with a comprehensive, integrated suite of scientifically validated models, with uncertainty quantification (UQ), optimization, risk analysis and decision making capabilities. The CCSI Toolset incorporates commercial and open-source software currently in use by industry and is also developing new software tools as necessary to fill technology gaps identified during execution of the project. Ultimately, the CCSI Toolset will (1) enable promising concepts to be more quickly identified through rapid computational screening of devices and processes; (2) reduce the time to design and troubleshoot new devices and processes; (3) quantify the technical risk in taking technology from laboratory-scale to commercial-scale; and (4) stabilize deployment costs more quickly by replacing some of the physical operational tests with virtual power plant simulations. CCSI is led by the National Energy Technology Laboratory (NETL) and leverages the Department of Energy (DOE) national laboratories’ core strengths in modeling and simulation, bringing together the best capabilities at NETL, Los Alamos National Laboratory (LANL), Lawrence Berkeley National Laboratory (LBNL), Lawrence Livermore National Laboratory (LLNL), and Pacific Northwest National Laboratory (PNNL). The CCSI’s industrial partners provide representation from the power generation industry, equipment manufacturers, technology providers and engineering and construction firms. The CCSI’s academic participants (Carnegie Mellon University, Princeton University, West
Generation of initial geometries for the simulation of the physical system in the DualPHYsics code
International Nuclear Information System (INIS)
Segura Q, E.
2013-01-01
Among the many science and technology research activities at the Instituto Nacional de Investigaciones Nucleares (ININ), one of great interest is the study and treatment of the collection and storage of radioactive waste. The ININ project on simulating the diffusion of pollutants through a porous soil medium (third stage) therefore requires, as a first step, generating the initial geometry of the physical system. The simulation is carried out with the smoothed particle hydrodynamics (SPH) method as implemented in the DualSPHysics code, which is highly versatile and able to simulate phenomena in any physical system where hydrodynamic aspects are involved. To simulate a physical system with DualSPHysics, the initial geometry of the system of interest must first be defined and included in the code's input file. The initial geometry is set up from regular geometric bodies positioned at different points in space, generated through a programming language (Fortran, C++, Java, etc.). This methodology will provide the basis for simulating more complex geometries and configurations in the future. (Author)
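Seeding an initial geometry from regular bodies reduces, in the simplest case, to placing particles on a uniform lattice inside each body. A generic sketch of filling an axis-aligned box at inter-particle spacing dp (illustrative only; DualSPHysics ships its own pre-processing tools, and the function name here is an assumption):

```python
import numpy as np

def fill_box(origin, size, dp):
    """Return SPH particle positions filling an axis-aligned box with
    regular spacing dp, cell-centered so particles sit dp/2 from the
    box faces.  origin and size are 3-vectors (xyz)."""
    counts = [max(1, int(round(s / dp))) for s in size]
    axes = [origin[i] + dp * (np.arange(counts[i]) + 0.5) for i in range(3)]
    x, y, z = np.meshgrid(*axes, indexing="ij")
    return np.column_stack([x.ravel(), y.ravel(), z.ravel()])
```

More complex initial geometries are then built by composing such primitives (boxes, cylinders, spheres) and concatenating their particle arrays before writing the solver's input file.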
Accuracy of MHD simulations: Effects of simulation initialization in GUMICS-4
Lakka, Antti; Pulkkinen, Tuija; Dimmock, Andrew; Osmane, Adnane; Palmroth, Minna; Honkonen, Ilja
2016-04-01
We conducted a study aimed at revealing how different global magnetohydrodynamic (MHD) simulation initialization methods affect the dynamics in different parts of the Earth's magnetosphere-ionosphere system. While such magnetosphere-ionosphere coupling codes have been used for more than two decades, their testing still requires significant work to identify the optimal numerical representation of the physical processes. We used the Grand Unified Magnetosphere-Ionosphere Coupling Simulation (GUMICS-4), the only European global MHD simulation, developed by the Finnish Meteorological Institute. GUMICS-4 was put to a test that included two stages: (1) a 10-day OMNI data interval was simulated and the results were validated by comparing both the bow shock and the magnetopause spatial positions predicted by the simulation to actual measurements, and (2) the validated 10-day simulation run was used as a reference in a comparison of five 3 + 12 hour (3-hour synthetic initialization + 12-hour actual simulation) runs. The 12-hour input was not only identical in each simulation case but also represented a subset of the 10-day input, thus making it possible to quantify the effects of different synthetic initializations on the magnetosphere-ionosphere system. The synthetic initialization data sets were created using stepwise, linear and sinusoidal functions. Switching the input from the synthetic to the real OMNI data was immediate. The results show that the magnetosphere forms in each case within an hour after the switch to real data. However, local dissimilarities are found in the magnetospheric dynamics after formation, depending on the initialization method used. This is evident especially in the inner parts of the lobe.
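The three synthetic initialization strategies compared (stepwise, linear, sinusoidal) can be sketched as simple ramps toward the first real solar-wind value. The exact functional forms used in the study are not given in the abstract, so the shapes below are illustrative assumptions:

```python
import numpy as np

def synthetic_init(target, hours=3.0, n=180, mode="linear"):
    """Illustrative synthetic solar-wind input ramp for spinning up a
    global MHD run before switching to real OMNI data: a stepwise,
    linear, or sinusoidal approach to the first real value `target`."""
    t = np.linspace(0.0, hours, n)
    if mode == "stepwise":
        return np.where(t < hours / 2, 0.5 * target, target)
    if mode == "linear":
        return target * t / hours
    if mode == "sinusoidal":
        return target * np.sin(0.5 * np.pi * t / hours)
    raise ValueError(mode)
```

All three ramps end exactly at the first real value, so the switch to OMNI data is continuous; they differ in how abruptly energy and momentum are loaded into the simulation domain during spin-up, which is what drives the post-formation dissimilarities the study reports.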
Modeling initial contact dynamics during ambulation with dynamic simulation.
Meyer, Andrew R; Wang, Mei; Smith, Peter A; Harris, Gerald F
2007-04-01
Ankle-foot orthoses are frequently used interventions to correct pathological gait. Their effects on the kinematics and kinetics of the proximal joints are of great interest when prescribing ankle-foot orthoses to specific patient groups. The Mathematical Dynamic Model (MADYMO) was developed to simulate motor vehicle crash situations and analyze occupant tissue injuries based on multibody dynamics theories. Joint kinetics output from an inverse model were perturbed and input to the forward model to examine the effects of changes in the internal sagittal ankle moment on knee and hip kinematics following heel strike. Increasing the internal ankle moment (augmentation, equivalent to gastroc-soleus contraction) produced less pronounced changes in kinematic results at the hip, knee and ankle than decreasing the moment (attenuation, equivalent to gastroc-soleus relaxation). Altering the internal ankle moment produced two distinctly different kinematic curve morphologies at the hip. Decreased internal ankle moments increased hip flexion, peaking at roughly 8% of the gait cycle. Increasing internal ankle moments decreased hip flexion to a lesser degree, and approached normal at the same point in the gait cycle. Increasing the internal ankle moment produced relatively small, well-behaved extension-biased kinematic results at the knee. Decreasing the internal ankle moment produced more substantial changes in knee kinematics towards flexion that increased with perturbation magnitude. Curve morphologies were similar to those at the hip. Immediately following heel strike, kinematic results at the ankle showed movement in the direction of the internal moment perturbation. Increased internal moments resulted in kinematic patterns that rapidly approached normal after initial differences. When the internal ankle moment was decreased, differences from normal were much greater and did not rapidly decrease. This study shows that MADYMO can be successfully applied to accomplish forward
Kangas, Julie L; Baldwin, Austin S; Rosenfield, David; Smits, Jasper A J; Rethorst, Chad D
2015-05-01
People with depressive symptoms report lower levels of exercise self-efficacy and are more likely to discontinue regular exercise than others, but it is unclear how depressive symptoms affect the relation between exercise and self-efficacy. We sought to clarify whether depressive symptoms moderate the relations between exercise and same-day self-efficacy, and between self-efficacy and next-day exercise. Participants (n = 116) were physically inactive adults (35% reported clinically significant depressive symptoms) who initiated regular exercise and completed daily assessments for 4 weeks. Mixed linear models were used to test whether (a) self-efficacy differed on days when exercise did and did not occur, (b) self-efficacy predicted next-day exercise, and (c) these relations were moderated by depressive symptoms. First, self-efficacy was lower on days when no exercise occurred, and this difference was larger for people with high depressive symptoms: they had lower self-efficacy than people with low depressive symptoms on days when no exercise occurred (p = .03), but self-efficacy did not differ on days when exercise occurred (p = .34). Second, self-efficacy predicted greater odds of next-day exercise, OR = 1.12, 95% CI [1.04, 1.21], but depressive symptoms did not moderate this relation, OR = 1.00, 95% CI [.99, 1.01]. During exercise initiation, daily self-efficacy is more strongly related to exercise occurrence for people with high depressive symptoms than for those with low depressive symptoms, but self-efficacy predicts next-day exercise regardless of depressive symptoms. The findings specify how depressive symptoms affect the relations between exercise and self-efficacy and underscore the importance of targeting self-efficacy in exercise interventions, particularly among people with depressive symptoms. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Initial Development of a Quadcopter Simulation Environment for Auralization
Christian, Andrew; Lawrence, Joseph
2016-01-01
This paper describes a recently created computer simulation of quadcopter flight dynamics for the NASA DELIVER project. The goal of this effort is to produce a simulation that includes a number of physical effects that are not usually found in other dynamics simulations (e.g., those used for flight controller development). These effects will be shown to have a significant impact on the fidelity of auralizations - entirely synthetic time-domain predictions of sound - based on this simulation when compared to a recording. High-fidelity auralizations are an important precursor to human subject tests that seek to understand the impact of vehicle configurations on noise and annoyance.
Bartolo, Ramón; Merchant, Hugo
2015-03-18
β oscillations in the basal ganglia have been associated with interval timing. We recorded the putaminal local field potentials (LFPs) from monkeys performing a synchronization-continuation task (SCT) and a serial reaction-time task (RTT), where the animals produced regularly and irregularly paced tapping sequences, respectively. We compared the activation profile of β oscillations between tasks and found transient bursts of β activity in both the RTT and SCT. During the RTT, β power was higher at the beginning of the task, especially when LFPs were aligned to the stimuli. During the SCT, β was higher during the internally driven continuation phase, especially for tap-aligned LFPs. Interestingly, a set of LFPs showed an initial burst of β at the beginning of the SCT, similar to the RTT, followed by a decrease in β oscillations during the synchronization phase, to finally rebound during the continuation phase. The rebound during the continuation phase of the SCT suggests that the corticostriatal circuit is involved in the control of internally driven motor sequences. In turn, the transient bursts of β activity at the beginning of both tasks suggest that the basal ganglia produce a general initiation signal that engages the motor system in different sequential behaviors. Copyright © 2015 the authors.
Effects of the initial conditions on cosmological $N$-body simulations
L'Huillier, Benjamin; Park, Changbom; Kim, Juhan
2014-01-01
Cosmology is entering an era of percent level precision due to current large observational surveys. This precision in observation is now demanding more accuracy from numerical methods and cosmological simulations. In this paper, we study the accuracy of $N$-body numerical simulations and their dependence on changes in the initial conditions and in the simulation algorithms. For this purpose, we use a series of cosmological $N$-body simulations with varying initial conditions. We test the infl...
Persistence of Initial Conditions in Continental Scale Air Quality Simulations
U.S. Environmental Protection Agency — This dataset contains the data used in Figures 1 – 6 and Table 2 of the technical note "Persistence of Initial Conditions in Continental Scale Air Quality...
Sensitivity of a Simulated Derecho Event to Model Initial Conditions
Wang, Wei
2014-05-01
Since 2003, the MMM division at NCAR has been experimenting with cloud-permitting-scale weather forecasting using the Weather Research and Forecasting (WRF) model. Over the years, we have tested different model physics and tried different initial and boundary conditions. Not surprisingly, we found that the model's forecasts are more sensitive to the initial conditions than to the model physics. In the 2012 real-time experiment, WRF-DART (Data Assimilation Research Testbed) at 15 km was employed to produce initial conditions for twice-a-day forecasts at 3 km. On June 29, this forecast system captured one of the most destructive derecho events on record. In this presentation, we will examine forecast sensitivity to different model initial conditions and try to understand the important features that may contribute to the success of the forecast.
Primary Connections: Simulating the Classroom in Initial Teacher Education
Hume, Anne Christine
2012-01-01
The challenge of preparing novice primary teachers for teaching in an educational environment, where science education has low status and many teachers have limited science content knowledge and lack the confidence to teach science, is great. This paper reports on an innovation involving a sustained simulation in an undergraduate science education…
DEFF Research Database (Denmark)
Hansen, Lars Kai; Rasmussen, Carl Edward; Svarer, C.
1994-01-01
Regularization, e.g., in the form of weight decay, is important for training and optimization of neural network architectures. In this work the authors provide a tool based on asymptotic sampling theory, for iterative estimation of weight decay parameters. The basic idea is to do a gradient desce...
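The weight decay this abstract refers to adds a penalty term λ‖w‖² to the training loss, so each gradient step also shrinks the weights toward zero. A minimal sketch on a linear model (the data, learning rate, and λ are illustrative; this is the weight-decay update itself, not the paper's sampling-theory estimation of λ):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + 0.1 * rng.normal(size=50)

w = np.zeros(5)
lr, lam = 0.05, 0.1                      # learning rate and weight-decay parameter
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(y)    # gradient of mean squared error
    w -= lr * (grad + lam * w)           # weight decay shrinks w every step

# lam > 0 biases the estimate toward zero: close to, but smaller than, true_w
print(np.round(w, 2))
```

Choosing λ well is exactly the problem the abstract addresses; here it is simply fixed by hand.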
Initial porosity of random packing : Computer simulation of grain rearrangement
Alberts, L.J.H.
2005-01-01
The initial porosity of clastic sediments is poorly defined. In spite of this, it is an important parameter in many models that describe the diagenetic processes taking place during the burial of sediments and which are responsible for the transition from sand to sandstone. Diagenetic models are of
A Coordinated Initialization Process for the Distributed Space Exploration Simulation (DSES)
Phillips, Robert; Dexter, Dan; Hasan, David; Crues, Edwin Z.
2007-01-01
This document describes the federate initialization process that was developed at the NASA Johnson Space Center with the H-II Transfer Vehicle Flight Controller Trainer (HTV FCT) simulations and refined in the Distributed Space Exploration Simulation (DSES). These simulations use the High Level Architecture (HLA) IEEE 1516 standard to provide the communication and coordination between the distributed parts of the simulation. The purpose of the paper is to describe a generic initialization sequence that can be used to create a federate that can: (1) properly initialize all HLA objects, object instances, interactions, and time management; (2) check for the presence of all federates; (3) coordinate startup with other federates; and (4) robustly initialize and share initial object instance data with other federates.
Conservative Initial Mapping For Multidimensional Simulations of Stellar Explosions
International Nuclear Information System (INIS)
Chen, Ke-Jung; Heger, Alexander; Almgren, Ann
2012-01-01
Mapping one-dimensional stellar profiles onto multidimensional grids as initial conditions for hydrodynamics calculations can lead to numerical artifacts, one of the most severe of which is the violation of conservation laws for physical quantities such as energy and mass. Here we introduce a numerical scheme for mapping one-dimensional spherically-symmetric data onto multidimensional meshes so that these physical quantities are conserved. We validate our scheme by porting a realistic 1D Lagrangian stellar profile to the new multidimensional Eulerian hydro code CASTRO. Our results show that all important features in the profiles are reproduced on the new grid and that conservation laws are enforced at all resolutions after mapping.
Simulations of roughness initiation and growth on railway rails
Sheng, X.; Thompson, D. J.; Jones, C. J. C.; Xie, G.; Iwnicki, S. D.; Allen, P.; Hsu, S. S.
2006-06-01
A model for the prediction of the initiation and growth of roughness on the rail is presented. The vertical interaction between a train and the track is calculated as a time history for single or multiple wheels moving on periodically supported rails, using a wavenumber-based approach. This vertical dynamic wheel/rail force arises from the varying stiffness due to discrete supports (i.e. parametric excitation) and the roughness excitation on the railhead. The tangential contact problem between the wheel and rail is modelled using an unsteady two-dimensional approach and also using the three-dimensional contact model, FASTSIM. This enables the slip and stick regions in the contact patch to be identified from the input geometry and creepage between the wheel and rail. The long-term wear growth is then predicted by applying repeated passages of the vehicle wheelsets, as part of an iterative solution.
Ozone-initiated chemistry in an occupied simulated aircraft cabin.
Weschler, Charles J; Wisthaler, Armin; Cowlin, Shannon; Tamás, Gyöngyi; Strøm-Tejsen, Peter; Hodgson, Alfred T; Destaillats, Hugo; Herrington, Jason; Zhang, Junfeng; Nazaroff, William W
2007-09-01
We have used multiple analytical methods to characterize the gas-phase products formed when ozone was added to cabin air during simulated 4-hour flights that were conducted in a reconstructed section of a B-767 aircraft containing human occupants. Two separate groups of 16 females were each exposed to four conditions: low air exchange (4.4 h(-1)) with and without added ozone (61-64 ppb), and high air exchange (8.8 h(-1)) with and without added ozone (73-77 ppb). The addition of ozone to the cabin air increased the levels of identified byproducts from approximately 70 to 130 ppb at the lower air exchange rate and from approximately 30 to 70 ppb at the higher air exchange rate. Most of the increase was attributable to acetone, nonanal, decanal, 4-oxopentanal (4-OPA), 6-methyl-5-hepten-2-one (6-MHO), formic acid, and acetic acid, with 0.25-0.30 mol of quantified product volatilized per mol of ozone consumed. Several of these compounds reached levels above their reported odor thresholds. Most byproducts were derived from surface reactions with occupants and their clothing, consistent with the inference that occupants were responsible for the removal of >55% of the ozone in the cabin. The observations made in this study have implications for other indoor settings. Whenever human beings and ozone are simultaneously present, one anticipates production of acetone, nonanal, decanal, 6-MHO, geranyl acetone, and 4-OPA.
Initial Operation of the Nuclear Thermal Rocket Element Environmental Simulator
Emrich, William J., Jr.; Pearson, J. Boise; Schoenfeld, Michael P.
2015-01-01
The Nuclear Thermal Rocket Element Environmental Simulator (NTREES) facility is designed to perform realistic non-nuclear testing of nuclear thermal rocket (NTR) fuel elements and fuel materials. Although the NTREES facility cannot mimic the neutron and gamma environment of an operating NTR, it can simulate the thermal hydraulic environment within an NTR fuel element to provide critical information on material performance and compatibility. The NTREES facility has recently been upgraded such that the power capabilities of the facility have been increased significantly. At its present 1.2 MW power level, more prototypical fuel element temperatures may now be reached. The new 1.2 MW induction heater consists of three physical units: a transformer, a rectifier, and an inverter. This multiunit arrangement increased the flexibility of the induction heater by more easily allowing variable frequency operation. Frequency ranges between 20 and 60 kHz can be accommodated in the new induction heater, allowing more representative power distributions to be generated within the test elements. The water cooling system was also upgraded so as to be capable of removing 100% of the heat generated during testing. In this new higher power configuration, NTREES will be capable of testing fuel elements and fuel materials at near-prototypic power densities. As checkout testing progressed and as higher power levels were achieved, several design deficiencies were discovered and fixed. Most of these design deficiencies were related to stray RF energy causing various components to encounter unexpected heating. Copper shielding around these components largely eliminated these problems. Other problems encountered involved unexpected movement in the coil due to electromagnetic forces and electrical arcing between the coil and a dummy test article. The coil movement and arcing which were encountered during the checkout testing effectively destroyed the induction coil in use at
GLISSANDO: GLauber Initial-State Simulation AND mOre…
Broniowski, Wojciech; Rybczyński, Maciej; Bożek, Piotr
2009-01-01
We present a Monte Carlo generator for a variety of Glauber-like models (the wounded-nucleon model, binary collisions model, mixed model, model with hot spots). These models describe the early stages of relativistic heavy-ion collisions, in particular the spatial distribution of the transverse energy deposition which ultimately leads to production of particles from the interaction region. The original geometric distribution of sources in the transverse plane can be superimposed with a statistical distribution simulating the dispersion in the generated transverse energy in each individual collision. The program generates inter alia the fixed-axes (standard) and variable-axes (participant) two-dimensional profiles of the density of sources in the transverse plane and their azimuthal Fourier components. These profiles can be used in further analysis of physical phenomena, such as the jet quenching, event-by-event hydrodynamics, or analysis of the elliptic flow and its fluctuations. Characteristics of the event (multiplicities, eccentricities, Fourier coefficients, etc.) are stored in a ROOT file and can be analyzed off-line. In particular, event-by-event studies can be carried out in a simple way. A number of ROOT scripts are provided for that purpose. Supplied variants of the code can also be used for the proton-nucleus and deuteron-nucleus collisions.
Program summary
Program title: GLISSANDO
Catalogue identifier: AEBS_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBS_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 4452
No. of bytes in distributed program, including test data, etc.: 34 766
Distribution format: tar.gz
Programming language: C++
Computer: any computer with a C++ compiler and the ROOT environment [R. Brun, et al., Root Users Guide 5.16, CERN
Manifold Regularized Reinforcement Learning.
Li, Hongliang; Liu, Derong; Wang, Ding
2018-04-01
This paper introduces a novel manifold regularized reinforcement learning scheme for continuous Markov decision processes. Smooth feature representations for value function approximation can be automatically learned using the unsupervised manifold regularization method. The learned features are data-driven, and can be adapted to the geometry of the state space. Furthermore, the scheme provides a direct basis representation extension for novel samples during policy learning and control. The performance of the proposed scheme is evaluated on two benchmark control tasks, i.e., the inverted pendulum and the energy storage problem. Simulation results illustrate the concepts of the proposed scheme and show that it can obtain excellent performance.
Initialization of high resolution surface wind simulations using NWS gridded data
J. Forthofer; K. Shannon; Bret Butler
2010-01-01
WindNinja is a standalone computer model designed to provide the user with simulations of surface wind flow. It is deterministic and steady state. It is currently being modified to allow the user to initialize the flow calculation using the National Digital Forecast Database. It essentially allows the user to downscale the coarse-scale simulations from meso-scale models to...
Kangas, J.L.; Baldwin, A.S.; Rosenfield, D.; Smits, J.A.J.; Rethorst, C.D.
2015-01-01
Objective: People with depressive symptoms report lower levels of exercise self-efficacy and are more likely to discontinue regular exercise than others, but it is unclear how depressive symptoms affect the relation between exercise and self-efficacy. We sought to clarify whether depressive symptoms
Least squares approach for initial data recovery in dynamic data-driven applications simulations
Douglas, C.; Efendiev, Y.; Ewing, R.; Ginting, V.; Lazarov, R.; Cole, M.; Jones, G.
2010-12-01
In this paper, we consider the initial data recovery and the solution update based on the local measured data that are acquired during simulations. Each time new data is obtained, the initial condition, which is a representation of the solution at a previous time step, is updated. The update is performed using the least squares approach. The objective function is set up based on both a measurement error as well as a penalization term that depends on the prior knowledge about the solution at previous time steps (or initial data). Various numerical examples are considered, where the penalization term is varied during the simulations. Numerical examples demonstrate that the predictions are more accurate if the initial data are updated during the simulations. © Springer-Verlag 2011.
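The update described here minimizes a measurement misfit plus a penalization toward prior knowledge of the initial data. A toy linear version of that objective, ||A u0 - data||² + λ||u0 - prior||², solved via the normal equations (the diffusion-like forward model, λ, and all names are illustrative assumptions, not the paper's application):

```python
import numpy as np

# Forward model: u(t) = A u0, ten explicit diffusion steps on a 1D grid
n = 20
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # discrete Laplacian
A = np.linalg.matrix_power(np.eye(n) - 0.1 * L, 10)

u0_true = np.sin(np.linspace(0, np.pi, n))
data = A @ u0_true            # "measurements" taken at the later time
prior = np.zeros(n)           # prior guess for the initial data
lam = 1e-3                    # penalization weight

# Minimize ||A u0 - data||^2 + lam * ||u0 - prior||^2 via the normal equations
u0 = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ data + lam * prior)
print(np.linalg.norm(u0 - u0_true))   # small: initial data recovered
```

Varying λ during the simulation, as the abstract describes, trades fidelity to new measurements against trust in the prior.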
Energy Technology Data Exchange (ETDEWEB)
Kumagai, Tomo' omi; Mudd, Ryan; Miyazawa, Yoshiyuki; Liu, Wen; Giambelluca, Thomas; Kobayashi, N.; Lim, Tiva Khan; Jomura, Mayuko; Matsumoto, Kazuho; Huang, Maoyi; Chen, Qi; Ziegler, Alan; Yin, Song
2013-09-10
We developed a soil-vegetation-atmosphere transfer (SVAT) model applicable to simulating CO2 and H2O fluxes from the canopies of rubber plantations, which are characterized by distinct canopy clumping produced by regular spacing of plantation trees. Rubber (Hevea brasiliensis Müll. Arg.) plantations, which are rapidly expanding into both climatically optimal and sub-optimal environments throughout mainland Southeast Asia, potentially change the partitioning of water, energy, and carbon at multiple scales, compared with the traditional land covers they are replacing. Describing the biosphere-atmosphere exchange in rubber plantations via SVAT modeling is therefore essential to understanding the impacts on environmental processes. The regular spacing of plantation trees creates a peculiar canopy structure that is not well represented in most SVAT models, which generally assume a non-uniform spacing of vegetation. Herein we develop a SVAT model applicable to rubber plantations and an evaluation method for its canopy structure, and examine how the peculiar canopy structure of rubber plantations affects canopy CO2 and H2O exchanges. Model results are compared with measurements collected at a field site in central Cambodia. Our findings suggest that it is crucial to account for intensive canopy clumping in order to reproduce observed rubber plantation fluxes. These results suggest a potentially optimal spacing of rubber trees to produce high productivity and water use efficiency.
Influence of changes in initial conditions for the simulation of dynamic systems
Energy Technology Data Exchange (ETDEWEB)
Kotyrba, Martin [Department of Informatics and Computers, University of Ostrava, 30 dubna 22, Ostrava (Czech Republic)
2015-03-10
Chaos theory is a field of study in mathematics, with applications in several disciplines including meteorology, sociology, physics, engineering, economics, biology, and philosophy. Chaos theory studies the behavior of dynamical systems that are highly sensitive to initial conditions, a paradigm popularly referred to as the butterfly effect. Small differences in initial conditions yield widely diverging outcomes for such dynamical systems, rendering long-term prediction impossible in general. This happens even though these systems are deterministic, meaning that their future behavior is fully determined by their initial conditions, with no random elements involved. In this paper, the influence of changes in initial conditions will be presented for the simulation of the Lorenz system.
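This sensitivity is easy to demonstrate numerically. A minimal sketch integrating the Lorenz system (standard chaotic parameters σ = 10, ρ = 28, β = 8/3 assumed) from two initial states separated by only 10⁻⁸:

```python
# Demonstrate sensitivity to initial conditions in the Lorenz system.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One 4th-order Runge-Kutta step of the Lorenz equations."""
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
    def add(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = f(state)
    k2 = f(add(state, k1, dt / 2))
    k3 = f(add(state, k2, dt / 2))
    k4 = f(add(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def separation(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)     # perturbed by 1e-8 in x only
d0 = separation(a, b)
for _ in range(2500):           # integrate both trajectories to t = 25
    a, b = lorenz_step(a), lorenz_step(b)
print(separation(a, b) / d0)    # initial error amplified by many orders of magnitude
```

The amplification factor grows roughly exponentially until the two trajectories decorrelate and the separation saturates at the size of the attractor.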
Niroula, Sundar; Halder, Subhadeep; Ghosh, Subimal
2018-06-01
Real time hydrologic forecasting requires near accurate initial condition of soil moisture; however, continuous monitoring of soil moisture is not operational in many regions, such as, in Ganga basin, extended in Nepal, India and Bangladesh. Here, we examine the impacts of perturbation/error in the initial soil moisture conditions on simulated soil moisture and streamflow in Ganga basin and its propagation, during the summer monsoon season (June to September). This provides information regarding the required minimum duration of model simulation for attaining the model stability. We use the Variable Infiltration Capacity model for hydrological simulations after validation. Multiple hydrologic simulations are performed, each of 21 days, initialized on every 5th day of the monsoon season for deficit, surplus and normal monsoon years. Each of these simulations is performed with the initial soil moisture condition obtained from long term runs along with positive and negative perturbations. The time required for the convergence of initial errors is obtained for all the cases. We find a quick convergence for the year with high rainfall as well as for the wet spells within a season. We further find high spatial variations in the time required for convergence; the region with high precipitation such as Lower Ganga basin attains convergence at a faster rate. Furthermore, deeper soil layers need more time for convergence. Our analysis is the first attempt on understanding the sensitivity of hydrological simulations of Ganga basin on initial soil moisture conditions. The results obtained here may be useful in understanding the spin-up requirements for operational hydrologic forecasts.
Initial particle loadings for a nonuniform simulation plasma in a magnetic field
International Nuclear Information System (INIS)
Naitou, Hiroshi; Kamimura, Tetsuo; Tokuda, Sinji.
1978-09-01
Improved methods for initially loading particles in a magnetized simulation plasma with nonuniform density and temperature distributions are proposed. In the usual guiding center loading (GCL), a charge separation coming from finite Larmor radius effects remains due to the difference between the guiding center density and the actual density. The modified guiding center loading (MGCL) presented here eliminates the electric field so generated and can be used for arbitrary density and temperature profiles. Some applications of these methods to actual simulations are given for comparison. The significance of these methods of initial particle loadings is also discussed. (author)
Directory of Open Access Journals (Sweden)
Dong-Hoon Jeong
2017-07-01
Naval ships are assigned many and varied missions. Their performance is critical for mission success, and depends on the specifications of the components. This is why performance analyses of naval ships are required at the initial design stage. Since the design and construction of naval ships take a very long time and incur a huge cost, Modeling and Simulation (M&S) is an effective method for performance analyses. Thus in this study, a simulation core is proposed to analyze the performance of naval ships considering their specifications. This simulation core can perform the engineering level of simulations, considering the mathematical models for naval ships, such as maneuvering equations and passive sonar equations. Also, the simulation models of the simulation core follow Discrete EVent system Specification (DEVS) and Discrete Time System Specification (DTSS) formalisms, so that simulations can progress over discrete events and discrete times. In addition, applying DEVS and DTSS formalisms makes the structure of simulation models flexible and reusable. To verify its applicability, the simulation core was applied to simulations for the performance analyses of a submarine in an Anti-SUrface Warfare (ASUW) mission. These simulations comprised two scenarios. The first scenario, submarine diving, analyzed maneuvering performance through the pitch angle and depth variation of the submarine over time. The second scenario, submarine detection, analyzed detection performance through how well the sonar of the submarine resolves adjacent targets. The results of these simulations confirm that the simulation core of this study can be applied to the performance analyses of naval ships considering their specifications.
The Matter Bispectrum in N-body Simulations with non-Gaussian Initial Conditions
Sefusatti, Emiliano; Crocce, Martin; Desjacques, Vincent
2010-01-01
We present measurements of the dark matter bispectrum in N-body simulations with non-Gaussian initial conditions of the local kind for a large variety of triangular configurations and compare them with predictions from Eulerian perturbation theory up to one-loop corrections. We find that the effects of primordial non-Gaussianity at large scales, when compared to perturbation theory, are well described by the initial component of the matter bispectrum, linearly extrapolated at the redshift of ...
UNFOLDED REGULAR AND SEMI-REGULAR POLYHEDRA
Directory of Open Access Journals (Sweden)
IONIŢĂ Elena
2015-06-01
This paper proposes a presentation of the unfolding of regular and semi-regular polyhedra. Regular polyhedra are convex polyhedra whose faces are regular and equal polygons, with the same number of sides, and whose polyhedral angles are also regular and equal. Semi-regular polyhedra are convex polyhedra whose faces are regular polygons of several types, and whose solid angles of the same type are equal. A net of a polyhedron is a collection of edges in the plane which are the unfolded edges of the solid. Platonic and Archimedean polyhedra will be modeled and unfolded using the 3dsMAX program. This paper is intended as an example of descriptive geometry applications.
Energy Technology Data Exchange (ETDEWEB)
Richet, Y
2006-12-15
Criticality Monte Carlo calculations aim at estimating the effective multiplication factor (k-effective) for a fissile system through iterations simulating neutron propagation (forming a Markov chain). Arbitrary initialization of the neutron population can deeply bias the k-effective estimate, defined as the mean of the k-effective values computed at each iteration. A simplified model of this cycle k-effective sequence is built, based on characteristics of industrial criticality Monte Carlo calculations. Statistical tests, inspired by Brownian bridge properties, are designed to detect stationarity of the cycle k-effective sequence. The detected initial transient is then suppressed in order to improve the estimation of the system k-effective. The different versions of this methodology are detailed and compared, first on a set of numerical tests fitted to criticality Monte Carlo calculations, and second on real criticality calculations. Finally, the best-performing methodologies in these tests are selected, allowing industrial Monte Carlo criticality calculations to be improved. (author)
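The core idea, discarding a detected initial transient before averaging the cycle k-effective values, can be illustrated on a synthetic sequence. The sliding-mean criterion below is a deliberately simple stand-in for the paper's Brownian-bridge tests, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic cycle k-effective: exponential transient decaying onto k = 1.0
cycles = np.arange(500)
keff = 1.0 + 0.05 * np.exp(-cycles / 30) + 0.002 * rng.normal(size=500)

def discard_transient(seq, window=50, tol=1e-3):
    """Return the first cycle where the sliding mean stops drifting."""
    means = np.convolve(seq, np.ones(window) / window, mode="valid")
    for i in range(len(means) - window):
        if abs(means[i + window] - means[i]) < tol:
            return i
    return 0

start = discard_transient(keff)
estimate_naive = keff.mean()       # biased upward by the transient
estimate = keff[start:].mean()     # transient discarded before averaging
print(start, round(estimate_naive, 4), round(estimate, 4))
```

The estimate with the transient discarded lands much closer to the true stationary value than the naive mean over all cycles.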
Canhoto, Ana Isabel; Murphy, Jamie
2016-01-01
Simulations offer engaging learning experiences, via the provision of feedback or the opportunities for experimentation. However, they lack important attributes valued by marketing educators and employers. This article proposes a "back to basics" look at what constitutes an effective experiential learning initiative. Drawing on the…
International Nuclear Information System (INIS)
Garcia-Vela, A.
2002-01-01
A new quantum-type phase-space distribution is proposed in order to sample initial conditions for classical trajectory simulations. The phase-space distribution is obtained as the modulus of a quantum phase-space state of the system, defined as the direct product of the coordinate and momentum representations of the quantum initial state. The distribution is tested by sampling initial conditions which reproduce the initial state of the Ar-HCl cluster prepared by ultraviolet excitation, and by simulating the photodissociation dynamics by classical trajectories. The results are compared with those of a wave packet calculation, and with a classical simulation using an initial phase-space distribution recently suggested. A better agreement is found between the classical and the quantum predictions with the present phase-space distribution, as compared with the previous one. This improvement is attributed to the fact that the phase-space distribution propagated classically in this work resembles more closely the shape of the wave packet propagated quantum mechanically
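For a Gaussian wave packet, the proposed product of the coordinate and momentum representations reduces to sampling x and p independently from two Gaussians whose widths satisfy the minimum uncertainty product. A minimal sketch of that special case (units with ħ = 1 assumed; this is not the Ar-HCl application):

```python
import numpy as np

rng = np.random.default_rng(5)

# Gaussian wave packet: |psi(x)|^2 is N(0, sigma_x^2); its momentum
# representation |phi(p)|^2 is N(0, sigma_p^2) with sigma_p = hbar/(2 sigma_x).
hbar, sigma_x = 1.0, 0.5
sigma_p = hbar / (2 * sigma_x)

# Phase-space distribution = |psi(x)|^2 * |phi(p)|^2: sample x, p independently
x = rng.normal(0.0, sigma_x, 100_000)
p = rng.normal(0.0, sigma_p, 100_000)

# The sampled ensemble reproduces the minimum uncertainty product hbar/2
print(round(x.std() * p.std(), 3))
```

Each sampled (x, p) pair then serves as the initial condition of one classical trajectory.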
Simulation study of effects of initial particle size distribution on dissolution
International Nuclear Information System (INIS)
Wang, G.; Xu, D.S.; Ma, N.; Zhou, N.; Payton, E.J.; Yang, R.; Mills, M.J.; Wang, Y.
2009-01-01
Dissolution kinetics of γ' particles in binary Ni-Al alloys with different initial particle size distributions (PSD) is studied using a three-dimensional (3D) quantitative phase field model. By linking model inputs directly to thermodynamic and atomic mobility databases, microstructural evolution during dissolution is simulated in real time and length scales. The model is first validated against analytical solution for dissolution of a single γ' particle in 1D and numerical solution in 3D before it is applied to investigate the effects of initial PSD on dissolution kinetics. Four different types of PSD, uniform, normal, log-normal and bimodal, are considered. The simulation results show that the volume fraction of γ' particles decreases exponentially with time, while the temporal evolution of average particle size depends strongly on the initial PSD
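The qualitative effect of the initial PSD can be illustrated with a toy shrinking-particle model, dr/dt = -k/r (diffusion-controlled dissolution), standing in for the paper's phase field model; the four PSD types are sampled with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

def dissolve(radii, k=1.0, dt=1e-3, steps=300):
    """Toy diffusion-controlled shrinkage dr/dt = -k/r, so r^2 decays linearly.

    Returns the total particle volume (sum of r^3) after each step."""
    r = radii.astype(float)
    history = []
    for _ in range(steps):
        r = np.sqrt(np.maximum(r**2 - 2.0 * k * dt, 0.0))  # exact per-step update
        history.append(np.sum(r**3))
    return np.array(history)

n = 1000
psds = {
    "uniform":    rng.uniform(0.5, 1.5, n),
    "normal":     np.clip(rng.normal(1.0, 0.2, n), 0.1, None),
    "log-normal": rng.lognormal(0.0, 0.2, n),
    "bimodal":    np.concatenate([rng.normal(0.7, 0.1, n // 2),
                                  rng.normal(1.3, 0.1, n // 2)]),
}
for name, radii in psds.items():
    v = dissolve(radii)
    print(name, round(v[-1] / v[0], 3))  # remaining volume fraction depends on PSD
```

Even in this crude model, distributions with more small particles lose volume fraction faster, mirroring the PSD dependence the simulations report.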
Quantification of discreteness effects in cosmological N-body simulations: Initial conditions
International Nuclear Information System (INIS)
Joyce, M.; Marcos, B.
2007-01-01
The relation between the results of cosmological N-body simulations, and the continuum theoretical models they simulate, is currently not understood in a way which allows a quantification of N dependent effects. In this first of a series of papers on this issue, we consider the quantification of such effects in the initial conditions of such simulations. A general formalism developed in [A. Gabrielli, Phys. Rev. E 70, 066131 (2004).] allows us to write down an exact expression for the power spectrum of the point distributions generated by the standard algorithm for generating such initial conditions. Expanded perturbatively in the amplitude of the input (i.e. theoretical, continuum) power spectrum, we obtain at linear order the input power spectrum, plus two terms which arise from discreteness and contribute at large wave numbers. For cosmological type power spectra, one obtains as expected, the input spectrum for wave numbers k smaller than that characteristic of the discreteness. The comparison of real space correlation properties is more subtle because the discreteness corrections are not as strongly localized in real space. For cosmological type spectra the theoretical mass variance in spheres and two-point correlation function are well approximated above a finite distance. For typical initial amplitudes this distance is a few times the interparticle distance, but it diverges as this amplitude (or, equivalently, the initial redshift of the cosmological simulation) goes to zero, at fixed particle density. We discuss briefly the physical significance of these discreteness terms in the initial conditions, in particular, with respect to the definition of the continuum limit of N-body simulations
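The discreteness contribution at large wave numbers is essentially shot noise: a Poisson point distribution exhibits it in full, while the perturbed lattice used for N-body initial conditions strongly suppresses it on scales much larger than the particle spacing. A one-dimensional counts-in-cells sketch of that contrast (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n_particles, n_cells = 100_000, 100

# Poisson (uncorrelated) positions: count fluctuations are pure shot noise
x_poisson = rng.random(n_particles)
counts = np.histogram(x_poisson, bins=n_cells, range=(0.0, 1.0))[0]
ratio_poisson = counts.var() / counts.mean()   # ~1 for a Poisson process

# Perturbed lattice, the standard pre-initial configuration: discreteness
# noise is suppressed on scales far larger than the interparticle spacing
x_lattice = (np.arange(n_particles) + 0.5) / n_particles
x_perturbed = (x_lattice + 1e-4 * rng.normal(size=n_particles)) % 1.0
counts = np.histogram(x_perturbed, bins=n_cells, range=(0.0, 1.0))[0]
ratio_lattice = counts.var() / counts.mean()   # far below 1

print(round(ratio_poisson, 2), round(ratio_lattice, 4))
```

The variance-to-mean ratio plays the role of the power spectrum at the cell scale: unity is the shot-noise level, and the lattice sits far below it until the displacement amplitude approaches the particle spacing.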
Migration kinetics of four photo-initiators from paper food packaging to solid food simulants.
Cai, Huimei; Ji, Shuilin; Zhang, Juzhou; Tao, Gushuai; Peng, Chuanyi; Hou, Ruyan; Zhang, Liang; Sun, Yue; Wan, Xiaochun
2017-09-01
The migration behaviour of four photo-initiators (BP, EHA, MBP and Irgacure 907) was studied by 'printing' onto four different food-packaging materials (Kraft paper, white cardboard, Polyethylene (PE)-coated paper and composite paper) and tracking movement into the food simulant: Tenax-TA (porous polymer 2,6-diphenyl furan resin). The results indicated that the migration of the photo-initiators was related to the molecular weight and log K o/w of each photo-initiator. At different temperatures, the migration rates of the photo-initiators were different in papers with different thicknesses. The amount of each photo-initiator found in the food was closely related to the food matrix. The Weibull model was used to predict the migration load into the food simulants by calculating the parameters τ and β and determining the relationship of the two parameters with temperature and paper thickness. The established Weibull model was then used to predict the migration of each photo-initiator with respect to different foods. A two-parameter Weibull model fitted the actual situation, with some deviation from the actual migration amount.
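The two-parameter Weibull model referred to here has the form M(t) = M∞(1 - exp(-(t/τ)^β)), which linearizes under a double logarithm so that τ and β can be recovered by a least-squares line fit. A sketch with synthetic, illustrative values (not the paper's measurements):

```python
import numpy as np

def weibull_migration(t, m_inf, tau, beta):
    """Weibull migration model: migrated amount approaches m_inf as t grows."""
    return m_inf * (1.0 - np.exp(-(t / tau) ** beta))

# Synthetic migration data (illustrative values only)
t = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])        # days
m_inf, tau_true, beta_true = 100.0, 10.0, 1.4          # plateau, tau (days), beta
m = weibull_migration(t, m_inf, tau_true, beta_true)

# Linearize: ln(-ln(1 - m/m_inf)) = beta*ln(t) - beta*ln(tau)
y = np.log(-np.log(1.0 - m / m_inf))
beta_fit, intercept = np.polyfit(np.log(t), y, 1)
tau_fit = np.exp(-intercept / beta_fit)
print(round(beta_fit, 2), round(tau_fit, 2))   # recovers 1.4 and 10.0
```

With fitted τ and β expressed as functions of temperature and paper thickness, the same formula predicts migration into different foods, as the abstract describes.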
Jackson, Thomas; Jost, A. M.; Zhang, Ju; Sridharan, P.; Amadio, G.
2017-06-01
In this work we present three-dimensional mesoscale simulations of detonation initiation in energetic materials. We solve the reactive Euler equations, with the energy equation augmented by a power deposition term. The reaction rate at the mesoscale is modelled using a density-based kinetics scheme, adapted from standard Ignition and Growth models. The deposition term is based on previous results of simulations of pore collapse at the microscale, modelled at the mesoscale as hot-spots. We carry out three-dimensional mesoscale simulations of random packs of HMX crystals in a binder, and show that the transition between no-detonation and detonation depends on the number density of the hot-spots, the initial radius of the hot-spot, the post-shock pressure of an imposed shock, and the amplitude of the power deposition term. The trend observed in experiments, of transition at lower imposed-shock pressures for larger pore number densities, is reproduced. Initial attempts to improve the agreement between the simulations and experiments through calibration of various parameters are also presented.
Coordinate-invariant regularization
International Nuclear Information System (INIS)
Halpern, M.B.
1987-01-01
A general phase-space framework for coordinate-invariant regularization is given. The development is geometric, with all regularization contained in regularized DeWitt Superstructures on field deformations. Parallel development of invariant coordinate-space regularization is obtained by regularized functional integration of the momenta. As representative examples of the general formulation, the regularized general non-linear sigma model and regularized quantum gravity are discussed.
3-D simulations to investigate initial condition effects on the growth of Rayleigh-Taylor mixing
Energy Technology Data Exchange (ETDEWEB)
Andrews, Malcolm J [Los Alamos National Laboratory
2008-01-01
The effect of initial conditions on the growth rate of turbulent Rayleigh-Taylor (RT) mixing has been studied using carefully formulated numerical simulations. An implicit large-eddy simulation (ILES) that uses a finite-volume technique was employed to solve the three-dimensional incompressible Euler equations with numerical dissipation. The initial conditions were chosen to test the dependence of the RT growth parameters (α_b, α_s) on variations in (a) the spectral bandwidth, (b) the spectral shape, and (c) discrete banded spectra. Our findings support the notion that the overall growth of RT mixing is strongly dependent on initial conditions. Variations in spectral shape and bandwidth are found to have a complex effect on the late-time development of the RT mixing layer, and raise the question of whether we can design RT transition and turbulence based on our choice of initial conditions. In addition, our results provide a useful database for the initialization and development of closures describing RT transition and turbulence.
Simulation of the Initial 3-D Instability of an Impacting Drop Vortex Ring
DEFF Research Database (Denmark)
Sigurdson, Lorenz; Wiwchar, Justin; Walther, Jens Honore
2013-01-01
Computational vortex particle method simulations of a perturbed vortex ring are performed to recreate and understand the instability seen in impacting water drop experiments. Three fundamentally different initial vorticity distributions are used to attempt to trigger a Widnall instability, a Rayleigh centrifugal instability, or a vortex breakdown-type instability. Simulations which simply have a perturbed solitary ring result in an instability similar to that seen experimentally. Waviness of the core which would be expected from a Widnall instability is not visible. Adding an opposite-signed secondary vortex ring or an image vortex ring to the initial conditions, to trigger a Rayleigh or breakdown instability respectively, does not appear to significantly change the instability from what is seen with a solitary ring. This suggests that a Rayleigh or vortex breakdown-type instability is not likely at work.
On geodesics in low regularity
Sämann, Clemens; Steinbauer, Roland
2018-02-01
We consider geodesics in both Riemannian and Lorentzian manifolds with metrics of low regularity. We discuss existence of extremal curves for continuous metrics and present several old and new examples that highlight their subtle interrelation with solutions of the geodesic equations. Then we turn to the initial value problem for geodesics for locally Lipschitz continuous metrics and generalize recent results on existence, regularity and uniqueness of solutions in the sense of Filippov.
Beam simulations with initial bunch noise in superconducting RF proton linacs
Tückmantel, J
2010-01-01
Circular machines are plagued by coupled-bunch instabilities (CBI), driven by impedance peaks, where all cavity higher order modes (HOMs) are possible drivers. Limiting the CBI growth rate is the fundamental reason that all superconducting rf cavities in circular machines are equipped with HOM dampers. The question arises whether, for similar reasons, HOM damping is not also imperative in high-current superconducting rf proton linacs. Therefore we have simulated the longitudinal bunched-beam dynamics in such machines, including charge and position noise on the injected bunches. Simulations were executed for a generic linac with properties close to the planned SPL at CERN, SNS, or Project X at FNAL. It was found that with strong bunch noise and monopole HOMs with high Qext, large beam scatter, possibly exceeding the admittance of a receiving machine, cannot be excluded. A transverse simulation shows similar requirements. Therefore initial bunch noise should be included in any beam dynamics study on superconducting rf proton linacs.
Out-of-pile test of zirconium cladding simulating reactivity initiated accident
Energy Technology Data Exchange (ETDEWEB)
Kim, J. H.; Lee, M. H.; Choi, B. K.; Bang, J. K.; Jung, Y. H. [KAERI, Taejon (Korea, Republic of)
2004-07-01
Mechanical properties of zirconium claddings, such as Zircaloy-4 and an advanced cladding, were evaluated by ring tension tests simulating a reactivity-initiated accident (RIA) as an out-of-pile test. The cladding was hydrided by charging hydrogen up to 1000 ppm to simulate the high-burnup condition, and was then fabricated into circumferential tensile specimens. Ring tension tests were carried out at strain rates from 0.01 to 1/s to keep pace with an actual RIA event. The results showed that the mechanical strength of the zirconium cladding increased by 7.8% while ductility decreased by 34% as the applied strain rate and absorbed hydrogen increased. Further out-of-pile testing plans for simulated high-burnup cladding are also discussed in this paper.
3D Simulation of Multiple Simultaneous Hydraulic Fractures with Different Initial Lengths in Rock
Tang, X.; Rayudu, N. M.; Singh, G.
2017-12-01
Hydraulic fracturing is a widely used technique for extracting shale gas. During this process, fractures with various initial lengths are induced in the rock mass by hydraulic pressure. Understanding the mechanisms of propagation and interaction between these induced hydraulic cracks is critical for optimizing the fracking process. In this work, numerical results are presented for investigating the effect of in-situ parameters and fluid properties on the growth and interaction of multiple simultaneous hydraulic fractures. A fully coupled 3D fracture simulator, TOUGH-GFEM, is used to simulate the effect of several key parameters, including in-situ stress, initial fracture length, fracture spacing, fluid viscosity and flow rate, on induced hydraulic fracture growth. The TOUGH-GFEM simulator is based on the 3D finite volume method (FVM) and the partition of unity element method (PUM). The displacement correlation method (DCM) is used for calculating multi-mode (mode I, II, III) stress intensity factors. The maximum principal stress criterion is used for crack propagation. Key words: hydraulic fracturing, TOUGH, partition of unity element method, displacement correlation method, 3D fracturing simulator
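As a rough illustration of the displacement correlation method mentioned above (a generic sketch, not the TOUGH-GFEM implementation): the mode-I stress intensity factor can be recovered from the crack-opening displacement sampled a small distance r behind the tip, using the standard plane-strain near-tip asymptotic. All numbers below are illustrative.

```python
import math

def k1_dcm(delta_uy, r, E, nu):
    """Mode-I stress intensity factor by displacement correlation
    (plane strain): K_I = (E'/8) * sqrt(2*pi/r) * delta_uy,
    where E' = E/(1 - nu**2) and delta_uy is the crack-opening
    displacement measured a distance r behind the crack tip."""
    e_prime = E / (1.0 - nu**2)
    return 0.125 * e_prime * math.sqrt(2.0 * math.pi / r) * delta_uy

# Round-trip check: displacements synthesized from a known K_I
# should map back to that K_I.
E, nu, r = 30e9, 0.25, 1e-3            # Pa, -, m (hypothetical rock)
k_target = 1.5e6                        # Pa*sqrt(m)
e_prime = E / (1.0 - nu**2)
delta_uy = 8.0 * k_target / e_prime * math.sqrt(r / (2.0 * math.pi))
k_recovered = k1_dcm(delta_uy, r, E, nu)
```

In a finite-element setting, delta_uy would come from the nodal displacements of the crack-face nodes nearest the tip; modes II and III use the sliding and tearing displacement components with the same structure.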
Shan, Tzu-Ray; Wixom, Ryan R; Mattsson, Ann E; Thompson, Aidan P
2013-01-24
The dependence of the reaction initiation mechanism of pentaerythritol tetranitrate (PETN) on shock orientation and shock strength is investigated with molecular dynamics simulations using a reactive force field and the multiscale shock technique. In the simulations, a single crystal of PETN is shocked along the [110], [001], and [100] orientations with shock velocities in the range 3-10 km/s. Reactions occur with shock velocities of 6 km/s or stronger, and reactions initiate through the dissociation of nitro and nitrate groups from the PETN molecules. The most sensitive orientation is [110], while [100] is the most insensitive. For the [001] orientation, PETN decomposition via nitro group dissociation is the dominant reaction initiation mechanism, while for the [110] and [100] orientations the decomposition is via mixed nitro and nitrate group dissociation. For shock along the [001] orientation, we find that CO-NO2 bonds initially acquire more kinetic energy, facilitating nitro dissociation. For the other two orientations, C-ONO2 bonds acquire more kinetic energy, facilitating nitrate group dissociation.
Initial condition effects on large scale structure in numerical simulations of plane mixing layers
McMullan, W. A.; Garrett, S. J.
2016-01-01
In this paper, Large Eddy Simulations are performed on the spatially developing plane turbulent mixing layer. The simulated mixing layers originate from initially laminar conditions. The focus of this research is on the effect of the nature of the imposed fluctuations on the large-scale spanwise and streamwise structures in the flow. Two simulations are performed; one with low-level three-dimensional inflow fluctuations obtained from pseudo-random numbers, the other with physically correlated fluctuations of the same magnitude obtained from an inflow generation technique. Where white-noise fluctuations provide the inflow disturbances, no spatially stationary streamwise vortex structure is observed, and the large-scale spanwise turbulent vortical structures grow continuously and linearly. These structures are observed to have a three-dimensional internal geometry with branches and dislocations. Where physically correlated fluctuations provide the inflow disturbances, a "streaky" streamwise structure that is spatially stationary is observed, with the large-scale turbulent vortical structures growing with the square root of time. These large-scale structures are quasi-two-dimensional, on top of which the secondary structure rides. The simulation results are discussed in the context of the varying interpretations of mixing layer growth that have been postulated. Recommendations are made concerning the data required from experiments in order to produce accurate numerical simulation recreations of real flows.
Magnetohydrodynamic simulation of solid-deuterium-initiated Z-pinch experiments
International Nuclear Information System (INIS)
Sheehey, P.T.
1994-02-01
Solid-deuterium-initiated Z-pinch experiments are numerically simulated using a two-dimensional resistive magnetohydrodynamic model, which includes many important experimental details, such as ''cold-start'' initial conditions, thermal conduction, radiative energy loss, actual discharge current vs. time, and grids of sufficient size and resolution to allow realistic development of the plasma. The alternating-direction-implicit numerical technique used meets the substantial demands presented by such a computational task. Simulations of fiber-initiated experiments show that when the fiber becomes fully ionized, rapidly developing m=0 instabilities, which originate in the coronal plasma generated from the ablating fiber, drive intense non-uniform heating and rapid expansion of the plasma column. The possibility that inclusion of additional physical effects would improve stability is explored. Finite-Larmor-radius-ordered Hall and diamagnetic pressure terms in the magnetic field evolution equation, corresponding energy equation terms, and separate ion and electron energy equations are included; these do not change the basic results. Model diagnostics, such as shadowgrams and interferograms, generated from simulation results, are in good agreement with experiment. Two alternative experimental approaches are explored: high-current magnetic implosion of hollow cylindrical deuterium shells, and ''plasma-on-wire'' (POW) implosion of low-density plasma onto a central deuterium fiber. By minimizing instability problems, these techniques may allow attainment of higher temperatures and densities than possible with bare fiber-initiated Z-pinches. Conditions for significant D-D or D-T fusion neutron production may be realizable with these implosion-based approaches.
Lowe, Graham
2011-01-01
This thesis reports on a mixed methods research project into the emerging area of computer simulation in Initial Teacher Education (ITE). Some areas where simulation has become a staple of initial or ongoing education and training, i.e. in health care and military applications, are examined to provide a context. The research explores the attitudes of a group of ITE students towards the use of a recently developed simulation tool and in particular considers the question of whether they view co...
Agent-based model of angiogenesis simulates capillary sprout initiation in multicellular networks.
Walpole, J; Chappell, J C; Cluceru, J G; Mac Gabhann, F; Bautch, V L; Peirce, S M
2015-09-01
Many biological processes are controlled by both deterministic and stochastic influences. However, efforts to model these systems often rely on either purely stochastic or purely rule-based methods. To better understand the balance between stochasticity and determinism in biological processes, a computational approach that incorporates both influences may afford additional insight into underlying biological mechanisms that give rise to emergent system properties. We apply a combined approach to the simulation and study of angiogenesis, the growth of new blood vessels from existing networks. This complex multicellular process begins with selection of an initiating endothelial cell, or tip cell, which sprouts from the parent vessels in response to stimulation by exogenous cues. We have constructed an agent-based model (ABM) of sprouting angiogenesis to evaluate endothelial cell sprout initiation frequency and location, and we have experimentally validated it using high-resolution time-lapse confocal microscopy. ABM simulations were then compared to a Monte Carlo model, revealing that purely stochastic simulations could not generate sprout locations as accurately as the rule-informed agent-based model. These findings support the use of rule-based approaches for modeling the complex mechanisms underlying sprouting angiogenesis over purely stochastic methods.
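The contrast drawn above between rule-informed and purely stochastic sprout selection can be sketched with a toy example (an illustration of the idea, not the authors' model): the rule-based step picks the eligible endothelial cell with the strongest local stimulus, while the Monte Carlo step ignores the stimulus field entirely.

```python
import random

def select_tip_rule_based(vegf, inhibited):
    """Rule-informed tip-cell selection: among cells not laterally
    inhibited (e.g. by Notch signalling), pick the one with the
    highest local VEGF stimulus."""
    eligible = [i for i in range(len(vegf)) if not inhibited[i]]
    return max(eligible, key=lambda i: vegf[i])

def select_tip_monte_carlo(n_cells, rng):
    """Purely stochastic selection: any cell, uniformly at random."""
    return rng.randrange(n_cells)

vegf = [0.2, 0.9, 0.4, 0.7]             # hypothetical stimulus per cell
inhibited = [False, False, True, False]  # cell 2 laterally inhibited
rule_tip = select_tip_rule_based(vegf, inhibited)
mc_tip = select_tip_monte_carlo(len(vegf), random.Random(0))
```

The rule-based selector deterministically reproduces the stimulus-driven location, whereas repeated Monte Carlo draws scatter sprout locations uniformly, which is the qualitative behaviour the comparison above rules out.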
Johnson, B. T.; Olson, W. S.; Skofronick-Jackson, G.
2016-01-01
A simplified approach is presented for assessing the microwave response to the initial melting of realistically shaped ice particles. This paper is divided into two parts: (1) a description of the Single Particle Melting Model (SPMM), a heuristic melting simulation for ice-phase precipitation particles of any shape or size (SPMM is applied to two simulated aggregate snow particles, simulating melting up to 0.15 melt fraction by mass), and (2) the computation of the single-particle microwave scattering and extinction properties of these hydrometeors, using the discrete dipole approximation (via DDSCAT), at the following selected frequencies: 13.4, 35.6, and 94.0 GHz for radar applications and 89, 165.0, and 183.31 GHz for radiometer applications. These selected frequencies are consistent with current microwave remote-sensing platforms, such as CloudSat and the Global Precipitation Measurement (GPM) mission. Comparisons with calculations using variable-density spheres indicate significant deviations in scattering and extinction properties throughout the initial range of melting (liquid volume fractions less than 0.15). Integration of the single-particle properties over an exponential particle size distribution provides additional insight into idealized radar reflectivity and passive microwave brightness temperature sensitivity to variations in size/mass, shape, melt fraction, and particle orientation.
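For context on why small melt fractions matter so strongly: a common first-order estimate of the permittivity of a partially melted particle (the kind of approximation underlying variable-density-sphere comparisons, as opposed to the paper's own DDSCAT calculations) is a Maxwell Garnett mixing rule. The permittivity values below are rough placeholders, not values from this study.

```python
def maxwell_garnett(eps_m, eps_i, f):
    """Maxwell Garnett effective permittivity for spherical inclusions
    of permittivity eps_i at volume fraction f in a matrix of
    permittivity eps_m."""
    num = eps_i + 2.0 * eps_m + 2.0 * f * (eps_i - eps_m)
    den = eps_i + 2.0 * eps_m - f * (eps_i - eps_m)
    return eps_m * num / den

# Rough placeholder permittivities near 35 GHz: ice matrix, water inclusions.
eps_ice = complex(3.15, 0.003)
eps_water = complex(20.0, 30.0)
eps_mix = maxwell_garnett(eps_ice, eps_water, 0.15)   # 15% melt by volume
```

Even 15% liquid water sharply increases the imaginary (absorbing) part of the effective permittivity relative to dry ice, which is consistent with the strong sensitivity to initial melting reported above.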
Simulating mixed-phase Arctic stratus clouds: sensitivity to ice initiation mechanisms
Sednev, I.; Menon, S.; McFarquhar, G.
2009-07-01
The importance of Arctic mixed-phase clouds for radiation and the Arctic climate is well known. However, the development of mixed-phase cloud parameterizations for use in large-scale models is limited by the lack of both related observations and numerical studies using multidimensional models with advanced microphysics that provide the basis for understanding the relative importance of the different microphysical processes that take place in mixed-phase clouds. To improve the representation of mixed-phase cloud processes in the GISS GCM we use the GISS single-column model coupled to a bin-resolved microphysics (BRM) scheme that was specially designed to simulate mixed-phase clouds and aerosol-cloud interactions. Using this model with the microphysical measurements obtained from the DOE ARM Mixed-Phase Arctic Cloud Experiment (MPACE) campaign in October 2004 at the North Slope of Alaska, we investigate the effect of ice initiation processes and the Bergeron-Findeisen process (BFP) on the glaciation time and longevity of single-layer stratiform mixed-phase clouds. We focus on observations taken during 9-10 October, which indicated the presence of a single-layer mixed-phase cloud. We performed several sets of 12-h simulations to examine model sensitivity to different ice initiation mechanisms and evaluate model output (hydrometeors' concentrations, contents, effective radii, precipitation fluxes, and radar reflectivity) against measurements from the MPACE Intensive Observing Period. Overall, the model qualitatively simulates ice crystal concentration and hydrometeor content, but it fails to predict quantitatively the effective radii of ice particles and their vertical profiles. In particular, the ice effective radii are overestimated by at least 50%. However, using the same definition as used for the observations, the simulated and observed effective radii were more comparable. We find that for the single-layer stratiform mixed-phase clouds simulated, the process of ice phase initiation
Simulation of surface crack initiation induced by slip localization and point defects kinetics
International Nuclear Information System (INIS)
Sauzay, Maxime; Liu, Jia; Rachdi, Fatima
2014-01-01
Crack initiation along surface persistent slip bands (PSBs) has been widely observed and modelled. Nevertheless, to our knowledge, no physically-based fracture modelling has been proposed and validated with respect to the numerous recent experimental data showing the strong relationship between extrusions and microcrack initiation. The full FE modelling accounts for:
- localized plastic slip in PSBs;
- production and annihilation of vacancies induced by cyclic slip. If the temperature is high enough, point defects may diffuse into the surrounding matrix due to large concentration gradients, allowing continuous extrusion growth in agreement with Polak's model. At each cycle, the additional atoms diffusing from the matrix are taken into account by imposing an incremental free dilatation;
- brittle fracture at the interfaces between PSBs and their surrounding matrix, which is simulated using cohesive zone modelling.
Any inverse fitting of parameters is avoided. Only experimental single-crystal data are used, such as hysteresis loops and resistivity values. Two fracture parameters are required: the {111} surface energy, which depends on environment, and the cleavage stress, which is predicted by the universal binding energy relationship. The predicted extrusion growth curves agree rather well with the experimental data published for copper and 316L steel. A linear dependence on PSB length, thickness and slip plane angle is predicted, in agreement with recent AFM measurement results. Crack initiation simulations predict fairly well the effects of PSB length and environment for copper single and poly-crystals. (authors)
Numerical simulation of shock initiation of Ni/Al multilayered composites
Energy Technology Data Exchange (ETDEWEB)
Sraj, Ihab; Knio, Omar M., E-mail: omar.knio@duke.edu [Department of Mechanical Engineering and Materials Science, Duke University, 144 Hudson Hall, Durham, North Carolina 27708 (United States); Specht, Paul E.; Thadhani, Naresh N. [School of Materials Science and Engineering, Georgia Institute of Technology, 771 Ferst Drive, Atlanta, Georgia 30332 (United States); Weihs, Timothy P. [Department of Materials Science and Engineering, The Johns Hopkins University, 3400 North Charles Street, Baltimore, Maryland 21218 (United States)
2014-01-14
The initiation of chemical reaction in cold-rolled Ni/Al multilayered composites by shock compression is investigated numerically. A simplified approach is adopted that exploits the disparity between the reaction and shock loading timescales. The impact of shock compression is modeled using CTH simulations that yield pressure, strain, and temperature distributions within the composites due to the shock propagation. The resulting temperature distribution is then used as the initial condition to simulate the evolution of the subsequent shock-induced mixing and chemical reaction. To this end, a reduced reaction model is used that expresses the local atomic mixing and heat release rates in terms of an evolution equation for a dimensionless time scale reflecting the age of the mixed layer. The computations are used to assess the effect of bilayer thickness on the reaction, as well as the impact of shock velocity and orientation with respect to the layering. Computed results indicate that the initiation and evolution of the reaction are substantially affected by both the shock velocity and the bilayer thickness. In particular, at low impact velocity, Ni/Al multilayered composites with thick bilayers react completely in 100 ms, while at high impact velocity with thin bilayers the reaction time was less than 100 μs. Quantitative trends for the dependence of the reaction time on the shock velocity are also determined for different bilayer thicknesses and shock orientations.
The Ozone Budget in the Upper Troposphere from Global Modeling Initiative (GMI)Simulations
Rodriquez, J.; Duncan, Bryan N.; Logan, Jennifer A.
2006-01-01
Ozone concentrations in the upper troposphere are influenced by in-situ production, long-range tropospheric transport, and influx of stratospheric ozone, as well as by photochemical removal. Since ozone is an important greenhouse gas in this region, it is particularly important to understand how it will respond to changes in anthropogenic emissions and in stratospheric ozone fluxes. This response will be determined by the relative balance of the different production, loss and transport processes. Ozone concentrations calculated by models will differ depending on the adopted meteorological fields, chemical scheme, anthropogenic emissions, and treatment of the stratospheric influx. We performed simulations using the chemical-transport model from the Global Modeling Initiative (GMI) with meteorological fields from (1) the NASA Goddard Institute for Space Studies (GISS) general circulation model (GCM), (2) the atmospheric GCM from NASA's Global Modeling and Assimilation Office (GMAO), and (3) assimilated winds from GMAO. These simulations adopt the same chemical mechanism and emissions, and use the Synthetic Ozone (SYNOZ) approach for treating the influx of stratospheric ozone. In addition, we performed simulations for a coupled troposphere-stratosphere model with a subset of the same winds. Simulations were done at both 4°x5° and 2°x2.5° resolution. Model results are being tested through comparison with a suite of atmospheric observations. In this presentation, we diagnose the ozone budget in the upper troposphere utilizing the suite of GMI simulations, to address the sensitivity of this budget to: (a) the different meteorological fields used; (b) the adoption of the SYNOZ boundary condition versus inclusion of a full stratosphere; and (c) model horizontal resolution. Model results are compared to observations to determine biases in particular simulations; by examining these comparisons in conjunction with the derived budgets, we may pinpoint the sources of those biases.
Cosmological Simulations with Scale-Free Initial Conditions. I. Adiabatic Hydrodynamics
International Nuclear Information System (INIS)
Owen, J.M.; Weinberg, D.H.; Evrard, A.E.; Hernquist, L.; Katz, N.
1998-01-01
We analyze hierarchical structure formation based on scale-free initial conditions in an Einstein-de Sitter universe, including a baryonic component with Ω_bary = 0.05. We present three independent smoothed particle hydrodynamics (SPH) simulations, performed at two resolutions (32^3 and 64^3 dark matter and baryonic particles) and with two different SPH codes (TreeSPH and P3MSPH). Each simulation is based on identical initial conditions, which consist of Gaussian-distributed initial density fluctuations that have a power spectrum P(k) ∝ k^-1. The baryonic material is modeled as an ideal gas subject only to shock heating and adiabatic heating and cooling; radiative cooling and photoionization heating are not included. The evolution is expected to be self-similar in time, and under certain restrictions we identify the expected scalings for many properties of the distribution of collapsed objects in all three realizations. The distributions of dark matter masses, baryon masses, and mass- and emission-weighted temperatures scale quite reliably. However, the density estimates in the central regions of these structures are determined by the degree of numerical resolution. As a result, mean gas densities and bremsstrahlung luminosities obey the expected scalings only when calculated within a limited dynamic range in density contrast. The temperatures and luminosities of the groups show tight correlations with the baryon masses, which we find can be well represented by power laws. The Press-Schechter (PS) approximation predicts the distribution of group dark matter and baryon masses fairly well, though it tends to overestimate the baryon masses. Combining the PS mass distribution with the measured relations for T(M) and L(M) predicts the temperature and luminosity distributions fairly accurately, though there are some discrepancies at high temperatures/luminosities. In general the three simulations agree well for the properties of resolved groups.
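The Press-Schechter comparison above can be sketched for a scale-free spectrum: with P(k) ∝ k^n the rms fluctuation scales as σ(M) ∝ M^(−(n+3)/6), so for n = −1, σ ∝ M^(−1/3), and the collapsed-mass fraction above mass M follows the usual erfc form. The normalization mass m_star and the spherical-collapse threshold δ_c = 1.686 below are the standard conventions, not values taken from this paper.

```python
import math

DELTA_C = 1.686  # spherical-collapse overdensity threshold

def sigma_scale_free(m, m_star, n=-1.0):
    """rms mass fluctuation for P(k) ∝ k^n: sigma(M) = (M/M*)**(-(n+3)/6),
    normalized so sigma(m_star) = 1."""
    return (m / m_star) ** (-(n + 3.0) / 6.0)

def ps_mass_fraction_above(m, m_star, n=-1.0):
    """Press-Schechter fraction of mass in collapsed objects more
    massive than m: F(>M) = erfc(delta_c / (sqrt(2) * sigma(M)))."""
    s = sigma_scale_free(m, m_star, n)
    return math.erfc(DELTA_C / (math.sqrt(2.0) * s))
```

For n = −1 this gives the self-similar shape against which the simulated group mass distributions can be compared: nearly all mass sits in objects far below m_star, and the fraction falls off exponentially above it.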
Lee, Chris P; Chertow, Glenn M; Zenios, Stefanos A
2006-01-01
Patients with end-stage renal disease (ESRD) require dialysis to maintain survival. The optimal timing of dialysis initiation in terms of cost-effectiveness has not been established. We developed a simulation model of individuals progressing towards ESRD and requiring dialysis. It can be used to analyze dialysis strategies and scenarios. It was embedded in an optimization framework to derive improved strategies. Actual (historical) and simulated survival curves and hospitalization rates were virtually indistinguishable. The model overestimated transplantation costs (10%), but this was related to confounding by Medicare coverage. To assess the model's robustness, we examined several dialysis strategies while input parameters were perturbed. Under all 38 scenarios, relative rankings remained unchanged. An improved policy for a hypothetical patient was derived using an optimization algorithm. The model produces reliable results and is robust. It enables the cost-effectiveness analysis of dialysis strategies.
Schott, Eric; Brautigam, Robert T; Smola, Jacqueline; Burns, Karyl J
2012-04-01
Leadership skills of senior residents, trauma fellows, and a nurse practitioner were assessed during simulation training for the initial management of blunt trauma. This was a pilot observational study that, in addition to skill development and assessment, also sought to determine the need for a dedicated leadership training course for surgical residents. The study evaluated the leadership skills and adherence to Advanced Trauma Life Support (ATLS) guidelines of the team leaders during simulation training. The team leaders' performances on criteria regarding prearrival planning, critical actions based on ATLS, injury identification, patient management, and communication were evaluated for each of five blunt-trauma scenarios. Although there was a statistically significant increase in leadership skills for performing ATLS critical actions, the results suggest that dedicated training in skills for team leadership will be a worthwhile endeavor at our institution.
Energy Technology Data Exchange (ETDEWEB)
Phelan, J.M.; Webb, S.W.
1997-06-01
The fate and transport of chemical signature molecules that emanate from buried landmines are strongly influenced by the physical chemical properties of the specific compounds and by environmental conditions. Published data have been evaluated to provide the input parameters used in the simulation of the fate and transport processes. A one-dimensional model developed for screening agricultural pesticides was modified and used to simulate the appearance of a surface flux above a buried landmine, estimate the subsurface total concentration, and show the phase-specific concentrations at the ground surface. The physical chemical properties of TNT cause the majority of the mass released to the soil system to be bound to the solid-phase soil particles. The majority of the transport occurs in the liquid phase, with diffusion and evaporation-driven advection of soil water as the primary mechanisms for the flux to the ground surface. The simulations provided herein should only be used for initial conceptual designs of chemical pre-concentration subsystems or complete detection systems. The physical processes modeled required simplifying assumptions to allow for analytical solutions. Emerging numerical simulation tools will soon be available that should provide more realistic estimates that can be used to predict the success of landmine chemical detection surveys based on knowledge of the chemical and soil properties and the environmental conditions where the mines are buried. Additional measurements of the chemical properties in soils are also needed before a fully predictive approach can be confidently applied.
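The claim that most of the released TNT mass ends up bound to soil solids follows directly from equilibrium three-phase partitioning. The sketch below uses TNT-like parameter values that are illustrative order-of-magnitude choices, not the published data evaluated in the report.

```python
def phase_fractions(kd, kh, theta_w, theta_a, rho_b):
    """Equilibrium partitioning of a compound among the solid, aqueous,
    and vapor phases of soil.
    kd: solid-water partition coefficient (L/kg),
    kh: dimensionless Henry's law constant,
    theta_w, theta_a: water- and air-filled porosity (-),
    rho_b: soil bulk density (kg/L).
    Returns (solid, water, air) mass fractions."""
    solid = rho_b * kd        # mass capacity of the sorbed phase
    water = theta_w           # mass capacity of the dissolved phase
    air = theta_a * kh        # mass capacity of the vapor phase
    total = solid + water + air
    return solid / total, water / total, air / total

# Hypothetical TNT-like parameters: strong sorption, tiny Henry's constant.
fs, fw, fa = phase_fractions(kd=4.0, kh=2e-6, theta_w=0.2, theta_a=0.2, rho_b=1.5)
```

With strong sorption and a near-zero Henry's constant, well over 90% of the mass sits on the solids and only a vanishing fraction is in the soil gas, which is why liquid-phase diffusion dominates the surface flux.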
Xue, Min; Rios, Joseph
2017-01-01
Small Unmanned Aerial Vehicles (sUAVs), typically 55 lb and below, are envisioned to play a major role in surveilling critical assets, collecting important information, and delivering goods. Large-scale small-UAV operations are expected to take place in low-altitude airspace in the near future. Many static and dynamic constraints exist in low-altitude airspace because of manned aircraft or helicopter activities, various wind conditions, restricted airspace, terrain and man-made buildings, and conflict avoidance among sUAVs. High sensitivity and high maneuverability are unique characteristics of sUAVs that bring challenges to effective system evaluations and mandate a simulation platform different from existing simulations built for manned air traffic systems and large fixed-wing unmanned aircraft. NASA's Unmanned aircraft system Traffic Management (UTM) research initiative focuses on enabling safe and efficient sUAV operations in the future. In order to help define requirements and policies for a safe and efficient UTM system that can accommodate a large number of sUAV operations, it is necessary to develop a fast-time simulation platform that can effectively evaluate requirements, policies, and concepts in a close-to-reality environment. This work analyzed the impacts of some key factors, including the aforementioned sUAV characteristics, and demonstrated the importance of these factors in a successful UTM fast-time simulation platform.
Large-scale tropospheric transport in the Chemistry-Climate Model Initiative (CCMI) simulations
Orbe, Clara; Yang, Huang; Waugh, Darryn W.; Zeng, Guang; Morgenstern, Olaf; Kinnison, Douglas E.; Lamarque, Jean-Francois; Tilmes, Simone; Plummer, David A.; Scinocca, John F.; Josse, Beatrice; Marecal, Virginie; Jöckel, Patrick; Oman, Luke D.; Strahan, Susan E.; Deushi, Makoto; Tanaka, Taichu Y.; Yoshida, Kohei; Akiyoshi, Hideharu; Yamashita, Yousuke; Stenke, Andreas; Revell, Laura; Sukhodolov, Timofei; Rozanov, Eugene; Pitari, Giovanni; Visioni, Daniele; Stone, Kane A.; Schofield, Robyn; Banerjee, Antara
2018-05-01
Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry-Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.
Takizawa, Yuumi; Shimomura, Takeshi; Miura, Toshiaki
2013-05-23
We study the initial nucleation dynamics of poly(3-hexylthiophene) (P3HT) in solution, focusing on the relationship between the ordering process of main chains and that of side chains. We carried out Langevin dynamics simulations and found that the initial nucleation process consists of three steps: the ordering of ring orientation, the ordering of main-chain vectors, and the ordering of side chains. At the start, the normal vectors of the thiophene rings aligned in a very short time, followed by alignment of the main-chain end-to-end vectors. The flexible side-chain ordering took almost 5 times longer than the rigid main-chain ordering. The simulation results indicated that the ordering of side chains was induced after the formation of the regular stack structure of main chains. This slow ordering dynamics of flexible side chains is one of the factors that cause anisotropic nuclei growth, which would be closely related to the formation of nanofiber structures without an external flow field. Our simulation results revealed how the combined structure of the planar, rigid main-chain backbones and the sparse flexible side chains leads to specific ordering behaviors that are not observed in ordinary linear polymer crystallization processes.
Precursor evolution and SCC initiation of cold-worked alloy 690 in simulated PWR primary water
Energy Technology Data Exchange (ETDEWEB)
Zhai, Ziqing; Kruska, Karen; Toloczko, Mychailo B.; Bruemmer, Stephen M.
2017-03-27
Stress corrosion crack (SCC) initiation of two thermally-treated, cold-worked (CW) alloy 690 materials was investigated in 360 °C simulated PWR primary water using constant load tensile (CLT) tests and blunt notch compact tension (BNCT) tests equipped with direct current potential drop (DCPD) for in-situ detection of cracking. SCC initiation was not detected by DCPD for the 21% and 31% CW CLT specimens loaded at their yield stress after ~9,220 h; however, intergranular (IG) precursor damage and isolated surface cracks were observed on the specimens. The two 31% CW BNCT specimens loaded at moderate stress intensity after several cyclic loading ramps showed DCPD-indicated crack initiation after 10,400 h of exposure at constant stress intensity, which resulted from significant growth of IG cracks. The 21% CW BNCT specimens exhibited only isolated small IG surface cracks and showed no apparent DCPD change throughout the test. Interestingly, post-test cross-section examinations revealed many grain boundary (GB) nano-cavities in the bulk of all the CLT and BNCT specimens, particularly in the 31% CW materials. Cavities were also found along GBs extending to the surface, suggesting that they play an important role in crack nucleation. This paper provides an overview of the evolution of GB cavities and discusses their effects on crack initiation in CW alloy 690.
Numerical simulation of Hanford Tank 241-SY-101 jet initiated fluid dynamics
International Nuclear Information System (INIS)
Trent, D.S.; Michener, T.E.
1994-01-01
The episodic Gas Release Events (GREs) that have characterized the behavior of Hanford tank 241-SY-101 for the past several years are thought to result from the entrapment of gases generated in the settled solids, i.e., sludge, layer of the tank. Gases consisting of about 36% hydrogen by volume, which are generated by complicated and poorly understood radiological and chemical processes, are apparently trapped in the settled solids layer until their accumulation initiates a buoyant upset of this layer, abruptly releasing large quantities of gas. One concept for preventing the gas accumulation is to mobilize the settled materials with jet mixing. It is suggested that continual agitation of the settled solids using a mixer pump would free the gas bubbles so that they could continually escape, thus mitigating the potential for accumulation of flammable concentrations of hydrogen in the tank dome space following a GRE. A pump test is planned to evaluate the effectiveness of the jet mixing mitigation concept. The pump will circulate liquid from the upper layer of the tank, discharging it through two horizontal jets located approximately 2 1/2 ft above the tank floor. To prepare for start-up of this pump test, technical, operational, and safety questions concerning an anticipated gas release were addressed by numerical simulation using the TEMPEST computer code. Simulations of the pump-initiated gas release revealed that the amount of gas that could potentially be released to the tank dome space is very sensitive to the initial conditions assumed for the amount and distribution of gas in the sludge layer. Calculations revealed that, within the assumptions regarding gas distribution and content, the pump might initiate a rollover--followed by a significant gas release--if the sludge layer contains more than about 13 to 14% gas distributed with constant volume fraction
Whitfill, Travis; Gawel, Marcie; Auerbach, Marc
2017-07-17
The National Pediatric Readiness Project Pediatric Readiness Survey (PRS) measured pediatric readiness in 4149 US emergency departments (EDs) and noted an average score of 69 on a 100-point scale. This readiness score consists of 6 domains: coordination of pediatric patient care (19/100), physician/nurse staffing and training (10/100), quality improvement activities (7/100), patient safety initiatives (14/100), policies and procedures (17/100), and availability of pediatric equipment (33/100). We aimed to assess and improve pediatric emergency readiness scores across Connecticut's hospitals. The aim of this study was to compare the National Pediatric Readiness Project readiness score before and after an in situ simulation-based assessment and quality improvement program in Connecticut hospitals. We leveraged in situ simulations to measure the quality of resuscitative care provided by interprofessional teams to 3 simulated patients (infant septic shock, infant seizure, and child cardiac arrest) presenting to their ED resuscitation bay. Assessments of EDs were made based on a composite quality score that was measured as the sum of 4 distinct domains: (1) adherence to sepsis guidelines, (2) adherence to cardiac arrest guidelines, (3) performance on seizure resuscitation, and (4) teamwork. After the simulation, a detailed report with scores, comparisons to other EDs, and a gap analysis was provided to sites. Based on this report, a regional children's hospital team worked collaboratively with each ED to develop action items and a timeline for improvements. The National Pediatric Readiness Project PRS scores, the primary outcome of this study, were measured before and after participation. Twelve community EDs in Connecticut participated in this project. The PRS scores were assessed before and after the intervention (simulation-based assessment and gap analysis/report-out). The average time between PRS assessments was 21 months. The PRS scores significantly improved 12
Thompson, Aidan
2013-06-01
Initiation in energetic materials is fundamentally dependent on the interaction between a host of complex chemical and mechanical processes, occurring on scales ranging from intramolecular vibrations through molecular crystal plasticity up to hydrodynamic phenomena at the mesoscale. A variety of methods (e.g. quantum electronic structure methods (QM), non-reactive classical molecular dynamics (MD), mesoscopic continuum mechanics) exist to study processes occurring on each of these scales in isolation, but cannot describe how these processes interact with each other. In contrast, the ReaxFF reactive force field, implemented in the LAMMPS parallel MD code, allows us to routinely perform multimillion-atom reactive MD simulations of shock-induced initiation in a variety of energetic materials. This is done either by explicitly driving a shock-wave through the structure (NEMD) or by imposing thermodynamic constraints on the collective dynamics of the simulation cell e.g. using the Multiscale Shock Technique (MSST). These MD simulations allow us to directly observe how energy is transferred from the shockwave into other processes, including intramolecular vibrational modes, plastic deformation of the crystal, and hydrodynamic jetting at interfaces. These processes in turn cause thermal excitation of chemical bonds leading to initial chemical reactions, and ultimately to exothermic formation of product species. Results will be presented on the application of this approach to several important energetic materials, including pentaerythritol tetranitrate (PETN) and ammonium nitrate/fuel oil (ANFO). In both cases, we validate the ReaxFF parameterizations against QM and experimental data. For PETN, we observe initiation occurring via different chemical pathways, depending on the shock direction. For PETN containing spherical voids, we observe enhanced sensitivity due to jetting, void collapse, and hotspot formation, with sensitivity increasing with void size. For ANFO, we
Modeling, simulation, and optimal initiation planning for needle insertion into the liver.
Sharifi Sedeh, R; Ahmadian, M T; Janabi-Sharifi, F
2010-04-01
Needle insertion simulation and planning systems (SPSs) will play an important role in diminishing inappropriate insertions into soft tissues and resultant complications. Difficulties in SPS development are due in large part to the computational requirements of the extensive calculations in finite element (FE) models of tissue. For clinical feasibility, the computational speed of SPSs must be improved. At the same time, a realistic model of tissue properties that reflects large and velocity-dependent deformations must be employed. The purpose of this study is to address the aforementioned difficulties by presenting a cost-effective SPS platform for needle insertions into the liver. The study was constrained to planar (2D) cases, but can be extended to 3D insertions. To accommodate large and velocity-dependent deformations, a hyperviscoelastic model was devised to produce an FE model of liver tissue. Material constants were identified by a genetic algorithm applied to the experimental results of unconfined compressions of bovine liver. The approach for SPS involves B-spline interpolations of sample data generated from the FE model of liver. Two interpolation-based models are introduced to approximate puncture times and to approximate the coordinates of FE model nodes interacting with the needle tip as a function of the needle initiation pose; the latter was also a function of postpuncture time. A real-time simulation framework is provided, and its computational benefit is highlighted by comparing its performance with the FE method. A planning algorithm for optimal needle initiation was designed, and its effectiveness was evaluated by analyzing its accuracy in reaching a random set of targets at different resolutions of sampled data using the FE model. The proposed simulation framework can easily surpass haptic rates (>500 Hz), even with a high pose resolution level (approximately 30). The computational time required to update the coordinates of the node at the
Fluids density functional theory and initializing molecular dynamics simulations of block copolymers
Brown, Jonathan R.; Seo, Youngmi; Maula, Tiara Ann D.; Hall, Lisa M.
2016-03-01
Classical, fluids density functional theory (fDFT), which can predict the equilibrium density profiles of polymeric systems, and coarse-grained molecular dynamics (MD) simulations, which are often used to show both structure and dynamics of soft materials, can be implemented using very similar bead-based polymer models. We aim to use fDFT and MD in tandem to examine the same system from these two points of view and take advantage of the different features of each methodology. Additionally, the density profiles resulting from fDFT calculations can be used to initialize the MD simulations in a close to equilibrated structure, speeding up the simulations. Here, we show how this method can be applied to study microphase separated states of both typical diblock and tapered diblock copolymers in which there is a region with a gradient in composition placed between the pure blocks. Both methods, applied at constant pressure, predict a decrease in total density as segregation strength or the length of the tapered region is increased. The predictions for the density profiles from fDFT and MD are similar across materials with a wide range of interfacial widths.
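The initialization step described above can be sketched with a toy one-dimensional example: given a density profile along the lamellar normal, bead coordinates are drawn by inverse-CDF sampling so that the starting MD configuration already matches the target profile. The profile shape, box size, and bead count here are invented stand-ins, not the paper's actual fDFT output.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1D lamellar density profile rho_A(z) for the A block
# (a smooth stand-in for an fDFT result; values are illustrative only).
z = np.linspace(0.0, 10.0, 200)
rho = 1.0 + 0.8 * np.cos(2 * np.pi * z / 5.0)  # period-5 lamellae

# Inverse-CDF sampling: place A beads along z in proportion to rho_A(z).
cdf = np.cumsum(rho)
cdf /= cdf[-1]
u = rng.random(10000)
z_beads = np.interp(u, cdf, z)

# Beads should concentrate where the profile peaks (z near 0, 5, 10).
hist, _ = np.histogram(z_beads, bins=20, range=(0.0, 10.0))
print(hist.argmax())
```

The transverse coordinates would then be filled in uniformly, giving an MD starting state close to the fDFT equilibrium profile and shortening equilibration.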
The halo bispectrum in N-body simulations with non-Gaussian initial conditions
Sefusatti, E.; Crocce, M.; Desjacques, V.
2012-10-01
We present measurements of the bispectrum of dark matter haloes in numerical simulations with non-Gaussian initial conditions of local type. We show, in the first place, that the overall effect of primordial non-Gaussianity on the halo bispectrum is larger than on the halo power spectrum when all measurable configurations are taken into account. We then compare our measurements with a tree-level perturbative prediction, finding good agreement at large scales when the constant Gaussian bias parameter, both linear and quadratic, and their constant non-Gaussian corrections are fitted for. The best-fitting values of the Gaussian bias factors and their non-Gaussian, scale-independent corrections are in qualitative agreement with the peak-background split expectations. In particular, we show that the effect of non-Gaussian initial conditions on squeezed configurations is fairly large (up to 30 per cent for fNL = 100 at redshift z = 0.5) and results from contributions of similar amplitude induced by the initial matter bispectrum, scale-dependent bias corrections as well as from non-linear matter bispectrum corrections. We show, in addition, that effects at second order in fNL are irrelevant for the range of values allowed by cosmic microwave background and galaxy power spectrum measurements, at least on the scales probed by our simulations (k > 0.01 h Mpc⁻¹). Finally, we present a Fisher matrix analysis to assess the possibility of constraining primordial non-Gaussianity with future measurements of the galaxy bispectrum. We find that a survey with a volume of about 10 h⁻³ Gpc³ at mean redshift z ≃ 1 could provide an error on fNL of the order of a few. This shows the relevance of a joint analysis of galaxy power spectrum and bispectrum in future redshift surveys.
Evolvement simulation of the probability of neutron-initiating persistent fission chain
International Nuclear Information System (INIS)
Wang Zhe; Hong Zhenying
2014-01-01
Background: The probability that a neutron initiates a persistent fission chain, which must be calculated in analyses of criticality safety, reactor start-up, and burst waiting and bursting times on pulse reactors, is an inherent parameter of a multiplying assembly. Purpose: We aim to derive a time-dependent integro-differential equation for this probability in relative velocity space from probability conservation, and to develop the deterministic code Dynamic Segment Number Probability (DSNP) based on the multi-group S_N method. Methods: The reliable convergence of the dynamic calculation was analyzed, and the evolution of the dynamic probability for varying concentration was simulated numerically under different initial conditions. Results: For Highly Enriched Uranium (HEU) bare spheres, when the time is long enough the results of the dynamic calculation approach those of the static calculation; the largest difference between DSNP and the Partisn code is less than 2%. For the Baker model, over the range of about 1 μs after the first criticality, the largest difference between the dynamic and static calculations is about 300%. For a supercritical system, the finite fission chains decrease and the persistent fission chains increase as the reactivity increases; the dynamic evolution curve of the initiation probability stays within 5% of the static curve once k_eff exceeds 1.2. The cumulative probability curve likewise shows the difference in integral results between the dynamic and static calculations decreasing from 35% to 5% as k_eff increases, demonstrating that the ability to initiate a self-sustaining fission chain reaction approaches stabilization, while the former difference (35%) shows how strongly the dynamic results near the first criticality depart from the static ones. The DSNP code agrees well with the Partisn code. Conclusions: There are large numbers of
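For context, in the static ("time is long enough") limit this initiation probability reduces to a textbook branching-process fixed point: the probability P that a single neutron starts a persistent chain satisfies P = 1 - G(1 - P), where G is the probability generating function of the number of next-generation neutrons. The sketch below solves that point-model equation by fixed-point iteration; it is a simplified illustration with an invented multiplicity distribution, not the DSNP method described in the abstract.

```python
# Point-model sketch: the probability P that one neutron starts a
# persistent chain satisfies P = 1 - G(1 - P), where G is the generating
# function of the next-generation neutron number. This is a textbook
# branching-process illustration, not the DSNP method of the paper.
def survival_probability(p, tol=1e-12, max_iter=10000):
    """p[n] = probability that a neutron yields n next-generation neutrons."""
    P = 0.5  # initial guess
    for _ in range(max_iter):
        # G(1 - P) = sum_n p[n] * (1 - P)^n
        g = sum(pn * (1.0 - P) ** n for n, pn in enumerate(p))
        P_new = 1.0 - g
        if abs(P_new - P) < tol:
            return P_new
        P = P_new
    return P

# Supercritical example: mean multiplicity 0.3*0 + 0.3*1 + 0.4*2 = 1.1 > 1.
# The exact fixed point for this distribution is P = 0.25.
print(survival_probability([0.3, 0.3, 0.4]))
```

For a subcritical distribution the iteration converges to P = 0, consistent with the fact that only a supercritical assembly can sustain a persistent chain.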
van Dam, Edwin R.; Koolen, Jack H.; Tanaka, Hajime
2016-01-01
This is a survey of distance-regular graphs. We present an introduction to distance-regular graphs for the reader who is unfamiliar with the subject, and then give an overview of some developments in the area of distance-regular graphs since the monograph 'BCN'[Brouwer, A.E., Cohen, A.M., Neumaier,
Nijholt, Antinus
1980-01-01
Culik II and Cogen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this paper we consider an analogous extension of the LL(k) grammars called the LL-regular grammars. The relation of this class of grammars to other classes of grammars will be shown. Any LL-regular
Regular Expression Pocket Reference
Stubblebine, Tony
2007-01-01
This handy little book offers programmers a complete overview of the syntax and semantics of regular expressions that are at the heart of every text-processing application. Ideal as a quick reference, Regular Expression Pocket Reference covers the regular expression APIs for Perl 5.8, Ruby (including some upcoming 1.9 features), Java, PHP, .NET and C#, Python, vi, JavaScript, and the PCRE regular expression libraries. This concise and easy-to-use reference puts a very powerful tool for manipulating text and data right at your fingertips. Composed of a mixture of symbols and text, regular exp
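As a minimal flavor of the material the book documents, here is a sketch using Python's standard `re` module; the pattern and sample text are invented for illustration, not taken from the book.

```python
import re

# Extract ISO-style dates from free text; capture groups split the match
# into (year, month, day) tuples.
pattern = re.compile(r"(\d{4})-(\d{2})-(\d{2})")

text = "Released 2007-01-01, updated 2009-07-15."
dates = pattern.findall(text)
print(dates)  # [('2007', '01', '01'), ('2009', '07', '15')]
```

The same pattern syntax carries over, with minor dialect differences, to the Perl, Ruby, Java, .NET, JavaScript, and PCRE engines the book covers.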
Simulation of rod drop experiments in the initial cores of Loviisa and Mochovce
International Nuclear Information System (INIS)
Kaloinen, E.; Kyrki-Rajamaeki, R.; Wasastjerna, F.
1999-01-01
Interpretation of rod drop measurements during startup tests of the Loviisa reactors has earlier been studied with two-dimensional core calculations using a spatial prompt jump approximation. In these calculations the prediction for the reactivity meter reading was lower than the measured values by 25%. Another approach to the problem is simulation of the rod drop experiment with dynamic core calculations, coupled with out-of-core calculations to estimate the response of the ex-core ionization chambers feeding the reactivity meter. This report describes the calculations performed with the three-dimensional dynamic code HEXTRAN to predict the reactivity meter readings in rod drop experiments in the initial cores of the WWER-440 reactors. (Authors)
Adaptive regularization of noisy linear inverse problems
DEFF Research Database (Denmark)
Hansen, Lars Kai; Madsen, Kristoffer Hougaard; Lehn-Schiøler, Tue
2006-01-01
In the Bayesian modeling framework there is a close relation between regularization and the prior distribution over parameters. For prior distributions in the exponential family, we show that the optimal hyper-parameter, i.e., the optimal strength of regularization, satisfies a simple relation: the expectation of the regularization function takes the same value in the posterior and prior distributions. We present three examples: two simulations, and an application in fMRI neuroimaging.
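For the Gaussian (ridge) special case, the stated balance condition can be checked numerically: choose the prior precision α so that E[‖w‖²] is equal under the prior and under the posterior. The toy data, noise level, and bisection routine below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model y = X w + noise, with Gaussian prior w ~ N(0, I / alpha)
# and known noise variance sigma2 (all values invented for illustration).
n, d, sigma2 = 50, 5, 0.5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + rng.normal(scale=np.sqrt(sigma2), size=n)

def gap(alpha):
    # Posterior of w is N(m, S) with S = (alpha*I + X^T X / sigma2)^(-1).
    S = np.linalg.inv(alpha * np.eye(d) + X.T @ X / sigma2)
    m = S @ X.T @ y / sigma2
    e_post = m @ m + np.trace(S)   # E_posterior[||w||^2]
    e_prior = d / alpha            # E_prior[||w||^2]
    return e_post - e_prior

# Log-space bisection for the alpha at which the two expectations agree.
lo, hi = 1e-3, 1e3
for _ in range(200):
    mid = np.sqrt(lo * hi)
    if gap(lo) * gap(mid) <= 0:
        hi = mid
    else:
        lo = mid
alpha_star = np.sqrt(lo * hi)
print(alpha_star)
```

For this conjugate Gaussian case the balance point coincides with the usual evidence-maximization fixed point for the ridge hyperparameter, which is one way to see why the relation picks out a sensible regularization strength.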
Energy Technology Data Exchange (ETDEWEB)
Lee, Myung Ho; Kim, Jun Hwan; Choi, Byoung Kwon; Jeong, Young Hwan [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)
2004-07-01
The ejection or drop of a control rod in a reactivity-initiated accident (RIA) causes a sudden increase in reactor power and in turn deposits a large amount of energy into the fuel. In a RIA, cladding tubes undergo thermal expansion due to the sudden reactivity insertion and may fail from the resulting mechanical damage. A RIA can thus reduce the safety margin, because the oxide on the tubes leaves less wall thickness to support the load, while hydrides formed by corrosion reduce the ductility of the tubes. In a RIA the reactor power peaks within about 0.1 ms of the reactivity change, and the temperature of the cladding tubes rises up to 1000 °C within several seconds. Although it is hard to fully simulate this situation, several attempts have been made to measure the change of mechanical properties under RIA conditions using an induction coil and high-speed ring tension tests. This research examined the effect of oxide on the change in circumferential strength and ductility of Zircaloy-4 tubes in a RIA. Ring stretch tensile tests were performed at strain rates of 1/s and 0.01/s to simulate a transient of the cladding tube under a RIA. Since ring tensile test results are very sensitive to the lubricant, tests were also carried out to select a suitable lubricant before testing the oxidized specimens.
Springer, H. Keo; Tarver, Craig; Bastea, Sorin
2015-06-01
We perform reactive mesoscale simulations to study shock initiation in HMX over a range of pore morphologies and sizes, porosities, and loading conditions in order to improve our understanding of structure-performance relationships. These relationships are important because they guide the development of advanced macroscale models incorporating hot spot mechanisms and the optimization of novel energetic material microstructures. Mesoscale simulations are performed using the multiphysics hydrocode, ALE3D. Spherical, elliptical, polygonal, and crack-like pore geometries 0.1, 1, 10, and 100 microns in size and 2, 5, 10, and 14% porosity are explored. Loading conditions are realized with shock pressures of 6, 10, 20, 38, and 50 GPa. A Cheetah-based tabular model, including temperature-dependent heat capacity, is used for the unreacted and the product equation-of-state. Also, in-line Cheetah is used to probe chemical species evolution. The influence of microstructure and shock loading on shock-to-detonation-transition run distance, reaction rate and product gas species evolution are discussed. This work performed under the auspices of the U.S. DOE by LLNL under Contract DE-AC52-07NA27344. This work is funded by the Joint DoD-DOE Munitions Program.
Numerical simulation of laser shock in the presence of the initial state due to welding
International Nuclear Information System (INIS)
Julan, Emricka
2014-01-01
Surface treatments such as laser shock peening offer the possibility to reduce tensile stresses or to generate compressive stresses in order to prevent crack initiation or reduce crack growth rate, in particular in areas where tensile weld residual stresses are present. Laser shock peening may be applied to different metallic components to prevent stress corrosion cracking of Inconel 600 and high cycle thermal fatigue of austenitic stainless steels. The main aim of the PhD thesis is to develop the numerical simulation of laser peening. In the first section, axisymmetric and 3D numerical models for one or several pulses were developed in the Code_Aster and Europlexus software. These models were validated against experimental tests carried out in the PIMM-ENSAM laboratory. Parameter identification for the Johnson-Cook constitutive law was carried out for Inconel 600 at high strain rates. Moreover, a new test was proposed which made it possible to prove the isotropic behavior of Inconel 600 at high strain rates. A modification of the Johnson-Cook constitutive law was also proposed to take into account, in a new way, the sensitivity of the law to high strain rates. The second section of the thesis concerns a study of the effect of an initial welding state on the residual stresses after application of laser peening. We conclude that this initial state has no strong influence on the final residual stresses. Finally, a qualitative study of the effect of the strain hardening induced by laser peening on the fatigue life of stainless steels was undertaken, which shows the advantage of laser peening over shot peening due to the smaller strain hardening created by laser peening. (author)
Liao, Chuan-Chieh; Hsiao, Wen-Wei; Lin, Ting-Yu; Lin, Chao-An
2015-06-01
Numerical investigations are carried out for the drafting, kissing and tumbling (DKT) phenomenon of two freely falling spheres within a long container by using an immersed-boundary method. The method is first validated with flows induced by a sphere settling under gravity in a small container for which experimental data are available. The hydrodynamic interactions of two spheres are then studied with different sizes and initial configurations. When a regular sphere is placed below the larger one, the duration of kissing decreases in pace with the increase in diameter ratio. On the other hand, the time duration of the kissing stage increases in tandem with the increase in diameter ratio as the large sphere is placed below the regular one, and there is no DKT interactions beyond threshold diameter ratio. Also, the gap between homogeneous spheres remains constant at the terminal velocity, whereas the gaps between the inhomogeneous spheres increase due to the differential terminal velocity.
Numerical simulation of hydrogen-assisted crack initiation in austenitic-ferritic duplex steels
International Nuclear Information System (INIS)
Mente, Tobias
2015-01-01
Duplex stainless steels have been used for a long time in the offshore industry, since they have higher strength than conventional austenitic stainless steels and exhibit better ductility as well as improved corrosion resistance in harsh environments compared to ferritic stainless steels. However, despite these good properties, the literature records failure cases of duplex stainless steels in which hydrogen plays a crucial role in the cause of the damage. Numerical simulations can make a significant contribution to clarifying the damage mechanisms, because they help to interpret experimental results and to transfer results from laboratory tests to component tests and vice versa. So far, most numerical simulations of hydrogen-assisted material damage in duplex stainless steels have been performed at the macroscopic scale. However, duplex stainless steels consist of approximately equal portions of austenite and δ-ferrite. The two phases have different mechanical properties as well as different hydrogen transport properties; thus, their sensitivity to hydrogen-assisted damage differs as well. Therefore, the objective of this research was to develop a numerical model of a duplex stainless steel microstructure enabling simulation of hydrogen transport, mechanical stresses and strains, and crack initiation and propagation in both phases. Additionally, modern X-ray diffraction experiments were used to evaluate the influence of hydrogen on the phase-specific mechanical properties. For the numerical simulation of the hydrogen transport it was shown that hydrogen diffusion strongly depends on the alignment of austenite and δ-ferrite in the duplex stainless steel microstructure. Also, it was proven that the hydrogen transport is mainly realized by the ferritic phase and hydrogen is trapped in the austenitic phase. The numerical analysis of phase-specific mechanical stresses and strains revealed that if the duplex stainless steel is
WRF Simulation over the Eastern Africa by use of Land Surface Initialization
Sakwa, V. N.; Case, J.; Limaye, A. S.; Zavodsky, B.; Kabuchanga, E. S.; Mungai, J.
2014-12-01
to quantify possible improvements in simulated temperature, moisture and precipitation resulting from the experimental land surface initialization. These MET tools enable KMS to monitor model forecast accuracy in near real time. This study highlights verification results of WRF runs over East Africa using the LIS land surface initialization.
Herring, Stuart Davis
Microscopic defects may dramatically affect the susceptibility of high explosives to shock initiation. Such defects redirect the shock's energy and become hotspots (concentrations of stress and heat) that can initiate chemical reactions. Sufficiently large or numerous defects may produce a self-sustaining deflagration or even detonation from a shock too weak to detonate defect-free samples. The effects of circular or spherical voids on the shock sensitivity of a model (two- or three-dimensional) high explosive crystal are considered. We simulate a piston impact using molecular dynamics with a Reactive Empirical Bond Order (REBO) model potential for a sub-micron, sub-ns exothermic reaction in a diatomic molecular solid. In both dimensionalities, the probability of initiating chemical reactions rises more suddenly with increasing piston velocity for larger voids that collapse more deterministically. A void of even 10 nm radius (~39 interatomic spacings) reduces the minimum initiating velocity by a factor of 4 (8 in 3D). The transition at larger velocities to detonation is studied in micron-long samples with a single void (and its periodic images). Reactions during the shock traversal increase rapidly with velocity, then become a reliable detonation. In 2D, a void of radius 2.5 nm reduces the critical velocity by 10% from the perfect crystal; a Pop plot of the detonation delays at higher velocities shows a characteristic pressure dependence. 3D samples are more likely to react but less likely to detonate. In square lattices of voids, reducing the (common) void radius or increasing the porosity without changing the other parameter causes the hotspots to consume the material faster and detonation to occur sooner and at lower velocities. Early behavior is seen to follow a very simple ignition and growth model; the pressure exponents are more realistic than with single voids. The hotspots collectively develop a broad pressure wave (a sonic, diffuse deflagration front
Energy Technology Data Exchange (ETDEWEB)
Segura Q, E.
2013-07-01
Among the diverse research areas of the Instituto Nacional de Investigaciones Nucleares (ININ) are various activities related to science and technology; one of great interest is the study and treatment of the collection and storage of radioactive waste. The ININ project on the simulation of pollutant diffusion in the soil through a porous medium (third stage) addresses the aspects inherent to this problem, and one requirement of such a simulation is generating the initial geometry of the physical system. The simulation is carried out with the smoothed particle hydrodynamics (SPH) method as implemented in the DualSPHysics code, which has great versatility and the ability to simulate phenomena of any physical system in which hydrodynamic aspects combine. To simulate a physical system with the DualSPHysics code, the initial geometry of the system of interest must be preset and then included in the input file of the code. The simulation sets the initial geometry through regular geometric bodies positioned at different points in space, generated with a programming language (Fortran, C++, Java, etc.). This methodology will provide the basis for simulating more complex geometries in position and form in the future. (Author)
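The geometry-generation step described above (regular geometric bodies placed at chosen points in space, then exported to the code's input file) can be sketched in a few lines. This is a minimal illustration, not the project's actual preprocessing code; the box-filling routine, the spacing `dp`, and the carved cylindrical pore are all hypothetical choices.

```python
import numpy as np

def fill_box(origin, size, dp):
    """Fill an axis-aligned box with particles on a regular lattice of spacing dp.
    Assumes each box dimension is (close to) a multiple of dp."""
    counts = np.round(np.asarray(size, float) / dp).astype(int) + 1
    axes = [origin[i] + dp * np.arange(counts[i]) for i in range(3)]
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    return np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])

# A toy porous-medium column: a 1 x 1 x 2 box of "soil" particles with a
# cylindrical channel (radius 0.15) removed to act as a pore.
soil = fill_box((0.0, 0.0, 0.0), (1.0, 1.0, 2.0), dp=0.05)
r = np.hypot(soil[:, 0] - 0.5, soil[:, 1] - 0.5)
soil = soil[r > 0.15]  # carve out the pore channel
```

A real run would then write the `soil` array into the particle-definition block of the solver's input file.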
Ensemble manifold regularization.
Geng, Bo; Tao, Dacheng; Xu, Chao; Yang, Linjun; Hua, Xian-Sheng
2012-06-01
We propose an automatic approximation of the intrinsic manifold for general semi-supervised learning (SSL) problems. Unfortunately, it is not trivial to define an optimization function to obtain optimal hyperparameters. Usually, cross validation is applied, but it does not necessarily scale up. Other problems derive from the suboptimality incurred by discrete grid search and from overfitting. Therefore, we develop an ensemble manifold regularization (EMR) framework to approximate the intrinsic manifold by combining several initial guesses. Algorithmically, we designed EMR carefully so that it 1) learns both the composite manifold and the semi-supervised learner jointly, 2) is fully automatic for learning the intrinsic manifold hyperparameters implicitly, 3) is conditionally optimal for intrinsic manifold approximation under a mild and reasonable assumption, and 4) is scalable for a large number of candidate manifold hyperparameters, from both time and space perspectives. Furthermore, we prove the convergence of EMR to the deterministic matrix at a root-n rate. Extensive experiments over both synthetic and real data sets demonstrate the effectiveness of the proposed framework.
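The core algebraic step, weighting candidate graph Laplacians under a norm regularizer, can be illustrated with a small sketch. The closed-form weight formula below is the stationary point of a quadratic objective on the probability simplex; the function name, and the use of crude clipping instead of a proper quadratic program for nonnegativity, are my simplifications and not the paper's algorithm.

```python
import numpy as np

def combine_laplacians(laplacians, F, gamma=1.0):
    """Weight candidate graph Laplacians by how smooth the current labels F are
    with respect to each, then return the composite Laplacian.
    Solves  min_mu  sum_j mu_j * tr(F' L_j F) + gamma * ||mu||^2
    subject to sum(mu) = 1 (nonnegativity enforced afterwards by clipping)."""
    s = np.array([np.trace(F.T @ L @ F) for L in laplacians])
    mu = 1.0 / len(s) + (s.mean() - s) / (2.0 * gamma)  # stationary point on the simplex plane
    mu = np.clip(mu, 0.0, None)
    mu /= mu.sum()
    return sum(m * L for m, L in zip(mu, laplacians)), mu
```

Manifolds on which the labels are smoother (smaller trace term) receive larger weight, while `gamma` pulls the weights toward the uniform combination.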
Regularization by External Variables
DEFF Research Database (Denmark)
Bossolini, Elena; Edwards, R.; Glendinning, P. A.
2016-01-01
Regularization was a big topic at the 2016 CRM Intensive Research Program on Advances in Nonsmooth Dynamics. There are many open questions concerning well-known kinds of regularization (e.g., by smoothing or hysteresis). Here, we propose a framework for an alternative and important kind of regularization, by external variables.
Regular Expressions Cookbook
Goyvaerts, Jan
2009-01-01
This cookbook provides more than 100 recipes to help you crunch data and manipulate text with regular expressions. Every programmer can find uses for regular expressions, but their power doesn't come worry-free. Even seasoned users often suffer from poor performance, false positives, false negatives, or perplexing bugs. Regular Expressions Cookbook offers step-by-step instructions for some of the most common tasks involving this tool, with recipes for C#, Java, JavaScript, Perl, PHP, Python, Ruby, and VB.NET. With this book, you will: Understand the basics of regular expressions through a
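A taste of the book's subject matter: a pattern for validating ISO-style dates, with the month and day ranges enforced by the regex itself. The pattern is illustrative only, not taken from the cookbook.

```python
import re

# Validate and parse ISO-8601-style dates (YYYY-MM-DD), a typical cookbook task.
# Named groups make the parts retrievable; alternation restricts months to
# 01-12 and days to 01-31.
iso_date = re.compile(
    r"^(?P<year>\d{4})-(?P<month>0[1-9]|1[0-2])-(?P<day>0[1-9]|[12]\d|3[01])$"
)

m = iso_date.match("2009-01-01")
assert m and m.group("month") == "01"
assert iso_date.match("2009-13-01") is None  # month 13 rejected by the pattern
```

Note the classic false-positive trap the book warns about: this pattern still accepts impossible dates such as "2009-02-31", so semantic validation belongs in code, not in the regex.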
Initial reconstruction results from a simulated adaptive small animal C shaped PET/MR insert
Energy Technology Data Exchange (ETDEWEB)
Efthimiou, Nikos [Technological Educational Institute of Athens (Greece); Kostou, Theodora; Papadimitroulas, Panagiotis [Technological Educational Institute of Athens (Greece); Department of Medical Physics, School of Medicine, University of Patras (Greece); Charalampos, Tsoumpas [Division of Biomedical Imaging, University of Leeds, Leeds (United Kingdom); Loudos, George [Technological Educational Institute of Athens (Greece)
2015-05-18
Traditionally, most clinical and preclinical PET scanners rely on a full cylindrical geometry for whole-body as well as dedicated organ scans, which is not optimized with regard to sensitivity and resolution. Several groups have proposed the construction of dedicated PET inserts for MR scanners, rather than the construction of new integrated PET/MR scanners. The space inside an MR scanner is a limiting factor, which can be reduced further by the use of extra coils, and renders the use of non-flexible cylindrical PET scanners difficult if not impossible. The incorporation of small SiPM arrays can provide the means to design adaptive PET scanners that fit in tight locations, which makes imaging possible and improves sensitivity owing to the closer approximation to the organ of interest. In order to assess the performance of such a device, we simulated the geometry of a C-shaped PET scanner using GATE. The design of the C-PET was based on a realistic SiPM-BGO scenario. In order to reconstruct the simulated data with STIR, we had to calculate the system probability matrix corresponding to this non-standard geometry. For this purpose we developed an efficient multi-threaded ray-tracing technique to calculate the line-integral paths in voxel arrays. One of the major features is the ability to automatically adjust the size of the field of view (FOV) according to the geometry of the detectors. The initial results showed that the sensitivity improved as the angle between the detector arrays increased, thus sampling the scanner's FOV better angularly. The more complete angular coverage also helped to improve the shape of the source in the reconstructed images. Furthermore, by adapting the FOV closer to the size of the source, the sensitivity per voxel is improved.
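Each system-matrix element along a line of response is essentially a line integral of the image through the voxel array. The naive sampling version below conveys the idea only; the authors' implementation is an exact multi-threaded ray tracer, which this sketch does not reproduce, and all names and step counts here are arbitrary.

```python
import numpy as np

def line_integral(volume, p0, p1, n_samples=1000):
    """Approximate the line integral of a voxel array between two detector
    points: sample the segment uniformly and sum voxel values times the step
    length. Exact system matrices use analytic ray tracing (e.g. Siddon-type
    traversal) instead of this brute-force sampling."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    ts = (np.arange(n_samples) + 0.5) / n_samples
    pts = p0 + ts[:, None] * (p1 - p0)                 # points along the LOR
    idx = np.floor(pts).astype(int)                    # voxel index per point
    inside = np.all((idx >= 0) & (idx < volume.shape), axis=1)
    step = np.linalg.norm(p1 - p0) / n_samples
    return volume[tuple(idx[inside].T)].sum() * step
```

For a uniform image the result reduces to the chord length through the array, which makes the routine easy to sanity-check.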
Low-cost autonomous orbit control about Mars: Initial simulation results
Dawson, S. D.; Early, L. W.; Potterveld, C. W.; Königsmann, H. J.
1999-11-01
Interest in studying the possibility of extraterrestrial life has led to the re-emergence of the Red Planet as a major target of planetary exploration. Currently proposed missions in the post-2000 period are routinely calling for rendezvous with ascent craft, long-term orbiting of, and sample-return from Mars. Such missions would benefit greatly from autonomous orbit control as a means to reduce operations costs and enable contact with Mars ground stations out of view of the Earth. This paper presents results from initial simulations of autonomously controlled orbits around Mars, and points out possible uses of the technology and areas of routine Mars operations where such cost-conscious and robust autonomy could prove most effective. These simulations have validated the approach and control philosophies used in the development of this autonomous orbit controller. Future work will refine the controller, accounting for systematic and random errors in the navigation of the spacecraft from the sensor suite, and will produce prototype flight code for inclusion on future missions. A modified version of Microcosm's commercially available High Precision Orbit Propagator (HPOP) was used in the preparation of these results due to its high accuracy and speed of operation. Control laws were developed to allow an autonomously controlled spacecraft to continuously control to a pre-defined orbit about Mars with near-optimal propellant usage. The control laws were implemented as an adjunct to HPOP. The GSFC-produced 50 × 50 field model of the Martian gravitational potential was used in all simulations. The Martian atmospheric drag was modeled using an exponentially decaying atmosphere based on data from the Mars-GRAM NASA Ames model. It is hoped that the simple atmosphere model that was implemented can be significantly improved in the future so as to approach the fidelity of the Mars-GRAM model in its predictions of atmospheric density at orbital altitudes. Such additional work
Sandbox Simulations of the Evolution of a Subduction Wedge following Subduction Initiation
Brandon, M. T.; Ma, K. F.; DeWolf, W.
2012-12-01
Subduction wedges at accreting subduction zones are bounded by a landward dipping pro-shear zone (= subduction thrust) and a seaward-dipping retro-shear zone in the overriding plate. For the Cascadia subduction zone, the surface trace of the retro-shear zone corresponds to the east side of the Coast Ranges of Oregon and Washington and the Insular Mountains of Vancouver Island. This coastal high or forearc high shows clear evidence of long-term uplift and erosion along its entire length, indicating that it is an active part of the Cascadia subduction wedge. The question addressed here is what controls the location of the retro-shear zone? In the popular double-sided wedge model of Willett et al (Geology 1993), the retro-shear zone remains pinned to the S point, which is interpreted to represent where the upper-plate Moho intersects the subduction zone. For this interpretation, the relatively strong mantle is considered to operate as a flat backstop. That model, however, is somewhat artificial in that the two plates collide in a symmetric fashion with equal crustal thicknesses on both sides. Using sandbox experiments, we explore a more realistic configuration where the upper and lower plate are separated by a gently dipping (10°) pro-shear zone, to simulate the initial asymmetric geometry of the subduction thrust immediately after initiation of subduction. The entire lithosphere must fail along some plane for subduction to begin, and this failure plane must dip in the direction of subduction. Thus, the initial geometry of the overriding plate is better approximated as a tapered wedge than as a layer of uniform thickness, as represented in the Willett et al models. We demonstrate this model using time-lapse movies of a sand wedge above a mylar subducting plate. We use particle image velocimetry (PIV) to show the evolution of strain and structure within the overriding plate. Material accreted to the tapered end of the overriding plate drives deformation and causes
Morfa, Carlos Recarey; Cortés, Lucía Argüelles; Farias, Márcio Muniz de; Morales, Irvin Pablo Pérez; Valera, Roberto Roselló; Oñate, Eugenio
2018-07-01
A methodology that comprises several characterization properties for particle packings is proposed in this paper. The methodology takes into account factors such as dimension and shape of particles, space occupation, homogeneity, connectivity and isotropy, among others. This classification and integration of several properties allows to carry out a characterization process to systemically evaluate the particle packings in order to guarantee the quality of the initial meshes in discrete element simulations, in both the micro- and the macroscales. Several new properties were created, and improvements in existing ones are presented. Properties from other disciplines were adapted to be used in the evaluation of particle systems. The methodology allows to easily characterize media at the level of the microscale (continuous geometries—steels, rocks microstructures, etc., and discrete geometries) and the macroscale. A global, systemic and integral system for characterizing and evaluating particle sets, based on fuzzy logic, is presented. Such system allows researchers to have a unique evaluation criterion based on the aim of their research. Examples of applications are shown.
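Two of the simplest characterization properties mentioned, space occupation and connectivity, can be computed directly for a monosized sphere packing. This is a generic illustration (the function name and the contact tolerance are my own choices), not the paper's fuzzy-logic evaluation system.

```python
import numpy as np

def packing_metrics(centers, radius, box_volume):
    """Two basic characterization properties of a monosized sphere packing:
    the packing fraction (space occupation) and the mean coordination number
    (contacts per particle, detected with a 1% distance tolerance)."""
    n = len(centers)
    fraction = n * (4.0 / 3.0) * np.pi * radius**3 / box_volume
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    contacts = (d < 2.0 * radius * 1.01) & (d > 0)   # near-touching pairs, self excluded
    return fraction, contacts.sum(axis=1).mean()
```

On a simple cubic lattice of touching spheres this recovers the textbook packing fraction π/6; the mean coordination number is pulled below 6 by the particles on the boundary of the sample.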
Calculation of free-energy differences from computer simulations of initial and final states
International Nuclear Information System (INIS)
Hummer, G.; Szabo, A.
1996-01-01
A class of simple expressions of increasing accuracy for the free-energy difference between two states is derived based on numerical thermodynamic integration. The implementation of these formulas requires simulations of the initial and final (and possibly a few intermediate) states. They involve higher free-energy derivatives at these states which are related to the moments of the probability distribution of the perturbation. Given a specified number of such derivatives, these integration formulas are optimal in the sense that they are exact to the highest possible order of free-energy perturbation theory. The utility of this approach is illustrated for the hydration free energy of water. This problem provides a quite stringent test because the free energy is a highly nonlinear function of the charge so that even fourth order perturbation theory gives a very poor estimate of the free-energy change. Our results should prove most useful for complex, computationally demanding problems where free-energy differences arise primarily from changes in the electrostatic interactions (e.g., electron transfer, charging of ions, protonation of amino acids in proteins). copyright 1996 American Institute of Physics
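The lowest-order member of this class of endpoint formulas is ordinary trapezoidal integration of the thermodynamic-integration integrand, which for a linear coupling needs only the mean perturbation energy at the two end states. A minimal sketch of that base case (the paper's higher-order formulas add variance and higher-moment corrections, omitted here):

```python
import numpy as np

def delta_a_trapezoid(du_initial, du_final):
    """Lowest-order endpoint estimate of a free-energy difference for a
    linearly coupled perturbation U(lam) = U0 + lam * dU: trapezoidal
    thermodynamic integration using only samples of dU collected in the
    initial (lam=0) and final (lam=1) states. Exact when <dU>_lam is linear
    in lam; moments of the dU distribution supply the corrections."""
    return 0.5 * (np.mean(du_initial) + np.mean(du_final))
```

As the abstract notes for charging free energies, this low-order estimate can be badly wrong when the free energy is strongly nonlinear in the coupling, which is exactly what the higher-derivative corrections are for.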
Regularities of Multifractal Measures
Indian Academy of Sciences (India)
First, we prove the decomposition theorem for the regularities of multifractal Hausdorff measure and packing measure in R^d. This decomposition theorem enables us to split a set into regular and irregular parts, so that we can analyze each separately, and recombine them without affecting density properties. Next, we ...
Stochastic analytic regularization
International Nuclear Information System (INIS)
Alfaro, J.
1984-07-01
Stochastic regularization is reexamined, pointing out a restriction on its use due to a new type of divergence which is not present in the unregulated theory. Furthermore, we introduce a new form of stochastic regularization which permits the use of a minimal subtraction scheme to define the renormalized Green functions. (author)
Directory of Open Access Journals (Sweden)
Steven M. Lund
2009-11-01
Self-consistent Vlasov-Poisson simulations of beams with high space-charge intensity often require specification of initial phase-space distributions that reflect properties of a beam that is well adapted to the transport channel—both in terms of low-order rms (envelope) properties as well as the higher-order phase-space structure. Here, we first review broad classes of kinetic distributions commonly in use as initial Vlasov distributions in simulations of unbunched or weakly bunched beams with intense space-charge fields including the following: the Kapchinskij-Vladimirskij (KV) equilibrium, continuous-focusing equilibria with specific detailed examples, and various nonequilibrium distributions, such as the semi-Gaussian distribution and distributions formed from specified functions of linear-field Courant-Snyder invariants. Important practical details necessary to specify these distributions in terms of standard accelerator inputs are presented in a unified format. Building on this presentation, a new class of approximate initial kinetic distributions are constructed using transformations that preserve linear focusing, single-particle Courant-Snyder invariants to map initial continuous-focusing equilibrium distributions to a form more appropriate for noncontinuous focusing channels. Self-consistent particle-in-cell simulations are employed to show that the approximate initial distributions generated in this manner are better adapted to the focusing channels for beams with high space-charge intensity. This improved capability enables simulations that more precisely probe intrinsic stability properties and machine performance.
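For the KV distribution specifically, the construction reduces to sampling uniformly on the surface of a hypersphere in normalized four-dimensional phase space and rescaling by the envelope parameters. A minimal sketch for upright, uncorrelated envelopes (the function signature and the zero-correlation assumption are mine; the paper treats the general Courant-Snyder case):

```python
import numpy as np

def sample_kv(n, ax, ay, ex, ey, rng=None):
    """Sample a KV (Kapchinskij-Vladimirskij) distribution: points uniform on
    the surface of the unit 3-sphere in normalized (x, x', y, y') phase space,
    scaled by envelope radii ax, ay and emittances ex, ey. Upright ellipses
    (zero cross-plane correlations) are assumed for simplicity."""
    rng = np.random.default_rng(rng)
    u = rng.normal(size=(n, 4))
    u /= np.linalg.norm(u, axis=1, keepdims=True)  # uniform on the unit 3-sphere
    x, xp, y, yp = u.T
    return np.column_stack([ax * x, (ex / ax) * xp, ay * y, (ey / ay) * yp])
```

Every generated particle then satisfies the defining KV property: the sum of the two normalized Courant-Snyder invariants is exactly the same for all particles, so the beam projects to uniformly filled ellipses in each transverse plane.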
Theilen, Ulf; Fraser, Laura; Jones, Patricia; Leonard, Paul; Simpson, Dave
2017-06-01
The introduction of a paediatric Medical Emergency Team (pMET) was accompanied by weekly in-situ simulation team training. Key ward staff participated in team training, focusing on recognition of the deteriorating child, teamwork and early involvement of senior staff. Following an earlier study [1], this investigation aimed to evaluate the long-term impact of ongoing regular team training on hospital response to deteriorating ward patients, patient outcome and financial implications. Prospective cohort study of all deteriorating in-patients in a tertiary paediatric hospital requiring admission to paediatric intensive care (PICU) the year before, 1 year after and 3 years after the introduction of pMET and team training. Deteriorating patients were recognised more promptly (before/1 year after/3 years after pMET; median time 4/1.5/0.5 h). Introduction of pMET coincided with significantly reduced hospital mortality (p<0.001). These results indicate that lessons learnt by ward staff during team training led to sustained improvements in the hospital response to critically deteriorating in-patients, significantly improved patient outcomes and substantial savings. Integration of regular in-situ simulation training of medical emergency teams, including key ward staff, in routine clinical care has potential application in all acute specialties. Copyright © 2017. Published by Elsevier B.V.
García-Vela, A.
2000-05-01
A definition of a quantum-type phase-space distribution is proposed in order to represent the initial state of the system in a classical dynamics simulation. The central idea is to define an initial quantum phase-space state of the system as the direct product of the coordinate and momentum representations of the quantum initial state. The phase-space distribution is then obtained as the square modulus of this phase-space state. The resulting phase-space distribution closely resembles the quantum nature of the system initial state. The initial conditions are sampled with the distribution, using a grid technique in phase space. With this type of sampling the distribution of initial conditions reproduces more faithfully the shape of the original phase-space distribution. The method is applied to generate initial conditions describing the three-dimensional state of the Ar-HCl cluster prepared by ultraviolet excitation. The photodissociation dynamics is simulated by classical trajectories, and the results are compared with those of a wave packet calculation. The classical and quantum descriptions are found in good agreement for those dynamical events less subject to quantum effects. The classical result fails to reproduce the quantum mechanical one for the more strongly quantum features of the dynamics. The properties and applicability of the phase-space distribution and the sampling technique proposed are discussed.
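The proposed distribution, the square modulus of the direct product of the coordinate and momentum representations, is straightforward to tabulate on a grid. A sketch assuming a one-dimensional wavefunction sampled on a uniform grid (the function name and normalization conventions are mine):

```python
import numpy as np

def phase_space_distribution(psi_x, dx):
    """Phase-space distribution built as the direct product of the coordinate
    and momentum probability densities, P(x, p) ~ |psi(x)|^2 |psi~(p)|^2, in
    the spirit of the quantum-type sampling distribution described above.
    Unlike a Wigner function, this product is nonnegative by construction."""
    n = len(psi_x)
    dp = 2.0 * np.pi / (n * dx)                  # momentum-grid spacing of the DFT
    psi_p = np.fft.fftshift(np.fft.fft(psi_x))   # momentum representation (up to norm)
    P = np.outer(np.abs(psi_x) ** 2, np.abs(psi_p) ** 2)
    return P / (P.sum() * dx * dp)               # normalize on the (x, p) grid
```

Initial conditions for the classical trajectories can then be drawn cell by cell from this grid, which is essentially the grid-sampling technique the abstract describes.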
Sparse structure regularized ranking
Wang, Jim Jing-Yan; Sun, Yijun; Gao, Xin
2014-01-01
Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse
Regular expression containment
DEFF Research Database (Denmark)
Henglein, Fritz; Nielsen, Lasse
2011-01-01
We present a new sound and complete axiomatization of regular expression containment. It consists of the conventional axiomatization of concatenation, alternation, empty set and (the singleton set containing) the empty string as an idempotent semiring, the fixed-point rule E* = 1 + E × E* for Kleene star, and a general coinduction rule as the only additional rule. Our axiomatization gives rise to a natural computational interpretation of regular expressions as simple types that represent parse trees, and of containment proofs as coercions. This gives the axiomatization a Curry-Howard-style constructive interpretation: Containment proofs do not only certify a language-theoretic containment, but, under our computational interpretation, constructively transform a membership proof of a string in one regular expression into a membership proof of the same string in another regular expression. We...
Supersymmetric dimensional regularization
International Nuclear Information System (INIS)
Siegel, W.; Townsend, P.K.; van Nieuwenhuizen, P.
1980-01-01
There is a simple modification of dimensional regularization which preserves supersymmetry: dimensional reduction to real D < 4, followed by analytic continuation to complex D. In terms of component fields, this means fixing the ranges of all indices on the fields (and therefore the numbers of Fermi and Bose components). For superfields, it means continuing in the dimensionality of x-space while fixing the dimensionality of theta-space. This regularization procedure allows the simple manipulation of spinor derivatives in supergraph calculations. The resulting rules are: (1) first do all algebra exactly as in D = 4; (2) then do the momentum integrals as in ordinary dimensional regularization. This regularization procedure needs extra rules before one can say that it is consistent. Such extra rules needed for superconformal anomalies are discussed. Problems associated with renormalizability and higher-order loops are also discussed.
Regularized maximum correntropy machine
Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin
2015-01-01
In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
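The objective, correntropy between predictions and labels plus a penalty on the predictor parameters, can be sketched for a linear predictor. The paper optimizes the objective alternately; plain gradient ascent is used below only to keep the illustration short, and all hyperparameter values here are arbitrary.

```python
import numpy as np

def train_mcc(X, y, lam=0.1, sigma=1.0, lr=0.1, iters=500):
    """Learn a linear predictor by maximizing the correntropy between
    predictions and labels under an l2 penalty on the weights:
        J(w) = sum_i exp(-(y_i - w.x_i)^2 / (2 sigma^2)) - lam * ||w||^2.
    Gradient ascent is a simplification of the paper's alternating scheme."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        e = y - X @ w
        g = np.exp(-e**2 / (2 * sigma**2))        # Gaussian kernel weight per sample
        grad = (g * e) @ X / sigma**2 - 2 * lam * w
        w += lr * grad / len(y)
    return w
```

Because each sample's gradient contribution is weighted by a Gaussian kernel of its own residual, gross outliers receive exponentially small influence, which is precisely the robustness argument made for the MCC loss over transitional loss functions.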
Rai, Nirmal Kumar; Schmidt, Martin J.; Udaykumar, H. S.
2017-04-01
Void collapse in energetic materials leads to hot spot formation and enhanced sensitivity. Much recent work has been directed towards simulation of collapse-generated reactive hot spots. The resolution of voids in calculations to date has varied as have the resulting predictions of hot spot intensity. Here we determine the required resolution for reliable cylindrical void collapse calculations leading to initiation of chemical reactions. High-resolution simulations of collapse provide new insights into the mechanism of hot spot generation. It is found that initiation can occur in two different modes depending on the loading intensity: Either the initiation occurs due to jet impact at the first collapse instant or it can occur at secondary lobes at the periphery of the collapsed void. A key observation is that secondary lobe collapse leads to large local temperatures that initiate reactions. This is due to a combination of a strong blast wave from the site of primary void collapse and strong colliding jets and vortical flows generated during the collapse of the secondary lobes. The secondary lobe collapse results in a significant lowering of the predicted threshold for ignition of the energetic material. The results suggest that mesoscale simulations of void fields may suffer from significant uncertainty in threshold predictions because unresolved calculations cannot capture the secondary lobe collapse phenomenon. The implications of this uncertainty for mesoscale simulations are discussed in this paper.
Terzioglu, Fusun; Tuna, Zahide; Duygulu, Sergul; Boztepe, Handan; Kapucu, Sevgisun; Ozdemir, Leyla; Akdemir, Nuran; Kocoglu, Deniz; Alinier, Guillaume; Festini, Filippo
2013-01-01
Aim: The aim of this paper is to share the initial experiences of a European Union (EU) Lifelong Learning Programme Leonardo Da Vinci Transfer of Innovation Project related to the use of simulation-based learning with nursing students from Turkey. The project started at the end of 2010, involving 7 partners from 3 different countries, including…
An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography
Energy Technology Data Exchange (ETDEWEB)
Feng Jinchao; Qin Chenghu; Jia Kebin; Han Dong; Liu Kai; Zhu Shouping; Yang Xin; Tian Jie [Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, P. O. Box 2728, Beijing 100190 (China); College of Electronic Information and Control Engineering, Beijing University of Technology, Beijing 100124 (China); School of Life Sciences and Technology, Xidian University, Xi'an 710071 (China)]
2011-11-15
Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. With regard to the above problems, the authors proposed a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as an l_2 data-fidelity term plus a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach requires only the computation of the residual and regularized solution norms. With this knowledge, we construct the model function to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used
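The parameter-choice idea, updating the regularization parameter iteratively from quantities already computed (residual and solution norms) without knowing the noise level, can be illustrated on a plain Tikhonov problem. The specific update rule below is a common simple heuristic standing in for the paper's model-function construction, which it does not reproduce; the function name and all settings are illustrative.

```python
import numpy as np

def tikhonov_adaptive(A, b, lam0=1e-4, iters=30):
    """Tikhonov reconstruction with an iteratively updated regularization
    parameter. The update  lam <- ||A x - b||^2 / ||x||^2  is a simple
    heuristic that, like the model-function rule it stands in for, needs only
    the residual and solution norms and no estimate of the noise level."""
    lam = lam0
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA + lam * np.eye(A.shape[1]), Atb)
        lam = np.linalg.norm(A @ x - b) ** 2 / (np.linalg.norm(x) ** 2 + 1e-30)
    return x, lam
```

For consistent (noise-free) data the residual, and hence the parameter, shrinks toward zero and the iteration recovers the least-squares solution; with noisy data the parameter settles at a finite value balancing residual against solution size.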
Wiese, C H R; Bosse, G; Schröder, T; Lassen, C L; Bundscherer, A C; Graf, B M; Zausig, Y A
2015-01-01
Palliative emergencies are acute situations in patients with a life-limiting illness. At present, defined curricula for prehospital emergency physician training in palliative emergencies are limited. Simulation-based training (SBT) for such palliative emergency situations is an exception both nationally and internationally. This article presents recommendations for training and education in palliative care emergency situations. A selective literature search was performed using PubMed, EMBASE, Medline and the Cochrane database (1990-2013). Reference lists of included articles were checked by two reviewers. Data from the included articles were extracted, evaluated and summarized. In the second phase, the participants of two simulated scenarios of palliative emergencies were asked to complete an anonymous 15-item questionnaire. The results of the literature search and the questionnaire-based investigation were compared, and recommendations were formulated on this basis. Altogether, 30 eligible national and international articles were included. Training curricula in palliative emergencies are currently being developed nationally and internationally but are not yet widely integrated into emergency medical training and education. In the second part of the investigation, 25 participants (9 male, 16 female; 20 physicians and 5 nurses) took part in 4 multiprofessional emergency medical simulation training sessions. The participants' main concerns matched the problems in training and continuing education for palliative emergencies described in the national and international literature. The literature review and the expectations of the participants underline that palliative emergencies will become increasingly important in outpatient emergency medicine. All participants considered palliative care to be very important for competency in end-of-life decisions
International Nuclear Information System (INIS)
Valkenburg, Wessel; Hu, Bin
2015-01-01
We present a description for setting initial particle displacements and field values for simulations of arbitrary metric theories of gravity, for perfect and imperfect fluids with arbitrary characteristics. We extend the Zel'dovich Approximation to nontrivial theories of gravity, and show how scale dependence implies curved particle paths, even in the entirely linear regime of perturbations. For a viable choice of Effective Field Theory of Modified Gravity, initial conditions set at high redshifts are affected at the level of up to 5% at Mpc scales, which exemplifies the importance of going beyond Λ-Cold Dark Matter initial conditions for modifications of gravity outside of the quasi-static approximation. In addition, we show initial conditions for a simulation where a scalar modification of gravity is modelled in a Lagrangian particle-like description. Our description paves the way for simulations and mock galaxy catalogs under theories of gravity beyond the standard model, crucial for progress towards precision tests of gravity and cosmology
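For reference, the standard Zel'dovich Approximation that the paper generalizes displaces particles from their Lagrangian positions along straight lines set by the initial density field; the notation below is the conventional one, not necessarily the paper's.

```latex
% Standard Zel'dovich Approximation (straight particle paths):
\mathbf{x}(\mathbf{q},t) = \mathbf{q} + D_{+}(t)\,\boldsymbol{\Psi}(\mathbf{q}),
\qquad
\nabla_{\mathbf{q}}\cdot\boldsymbol{\Psi}(\mathbf{q}) = -\,\delta(\mathbf{q}).
% In modified gravity the linear growth factor becomes scale dependent,
% D_{+} = D_{+}(k,t), so the displacement field itself evolves in shape
% and particle paths curve even at linear order in perturbations.
```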
International Nuclear Information System (INIS)
Benham, R.A.; Mathews, F.H.; Higgins, P.B.
1976-01-01
Laboratory nuclear effects testing allows the study of reentry vehicle response to simulated exoatmospheric x-ray encounters. Light-initiated explosive produces the nearly simultaneous impulse loading of a structure by using a spray painted coating of explosive which is detonated by an intense flash of light. A lateral impulse test on a full scale reentry vehicle is described which demonstrates that the light-initiated explosive technique can be extended to the lateral loading of very large systems involving load discontinuities. This experiment required the development of a diagnostic method for verifying the applied impulse, and development of a large light source for simultaneously initiating the explosive over the surface of the vehicle. Acceptable comparison between measured strain response and code predictions is obtained. The structural capability and internal response of a vehicle subjected to an x-ray environment was determined from a light-initiated explosive test
Strahan, Susan E.; Douglass, Anne R.
2004-01-01
The Global Modeling Initiative (GMI) has integrated two 36-year simulations of an ozone recovery scenario with an offline chemistry and transport model using two different meteorological inputs. Physically based diagnostics, derived from satellite and aircraft data sets, are described and then used to evaluate the realism of temperature and transport processes in the simulations. Processes evaluated include barrier formation in the subtropics and polar regions, and extratropical wave-driven transport. Some diagnostics are especially relevant to simulation of lower stratospheric ozone, but most are applicable to any stratospheric simulation. The global temperature evaluation, which is relevant to gas phase chemical reactions, showed that both sets of meteorological fields have near climatological values at all latitudes and seasons at 30 hPa and below. Both simulations showed weakness in upper stratospheric wave driving. The simulation using input from a general circulation model (GMI(GCM)) showed a very good residual circulation in the tropics and Northern Hemisphere. The simulation with input from a data assimilation system (GMI(DAS)) performed better in the midlatitudes than it did at high latitudes. Neither simulation forms a realistic barrier at the vortex edge, leading to uncertainty in the fate of ozone-depleted vortex air. Overall, tracer transport in the offline GMI(GCM) has greater fidelity throughout the stratosphere than it does in the GMI(DAS)
2015-09-01
This literature review and reference scanning focuses on the use of driver simulators for semiautonomous (or shared control) vehicle systems (2012–present), including related research from other modes of transportation (e.g., rail or aviation). Foc...
1983-09-01
The present study employed auditory startle to simulate the principal components (unexpectedness, fear, and physiological arousal) that are common to many types of sudden emergencies and compared performance recovery following startle with recovery f...
Using simulation to educate police about mental illness: A collaborative initiative
Directory of Open Access Journals (Sweden)
Wendy Stanyon
2014-06-01
Full Text Available Mental illness is a major public health concern in Canada and also globally. According to the World Health Organization, five of the top ten disabilities worldwide are mental health disorders. Within Canada, one in five individuals is living with mental illness each year. Currently, there are 6.7 million Canadians living with mental illness and over 1 million Canadian youth living with mental illness. Police are frequently the first responders to situations in the community involving people with mental illness, and police services are increasingly aware of the need to provide officers with additional training and strategies for effectively interacting with these citizens. This study examined the effectiveness of four online, interactive video-based simulations designed to educate police officers about mental illness and strategies for interacting with people with mental illness. The simulations were created through the efforts of a unique partnership involving a police service, a mental health facility and two postsecondary institutions. Frontline police officers from Ontario were divided into one of three groups (simulation, face to face, control. Using a pre- and post-test questionnaire, the groups were compared on their level of knowledge and understanding of mental illness. In addition, focus groups explored the impact of the simulations on officers’ level of confidence in engaging with individuals with mental illness and officers’ perceptions of the simulations’ ease of use and level of realism. The study’s findings determined that the simulations were just as effective as face-to-face learning, and the officers reported the simulations were easy to use and reflected real-life scenarios they had encountered on the job. As mental health continues to be a major public concern, not only in Canada but also globally, interactive simulations may provide an effective and affordable education resource not only for police officers but for
International Nuclear Information System (INIS)
Babich, L. P.; Bochkov, E. I.; Kutsyk, I. M.
2011-01-01
The mechanism of lightning initiation due to electric field enhancement by the polarization of a conducting channel produced by relativistic runaway electron avalanches triggered by background cosmic radiation has been simulated numerically. It is shown that, for realistic thundercloud configurations and charges, fields at which the start of a lightning leader is possible are realized locally even in the absence of precipitation. The computational results agree with in-situ observations of penetrating radiation enhancement in thunderclouds.
International Nuclear Information System (INIS)
Reuhl, R.
1997-01-01
During initial training of some 50 young reactor operators and shift supervisors over the last 5 years in Biblis, it was found that it takes some time before trainees gain a good overview of the most important plant systems and develop a ''feeling'' for the dynamic plant behaviour, which is an important prerequisite for the first full-scope simulator training courses. To support this, a PC-based software training tool, SIMULA-C, was developed. (author)
Energy Technology Data Exchange (ETDEWEB)
Syuhada, Ibnu, E-mail: ibnu-syuhada-p3@yahoo.com; Rosikhin, Ahmad, E-mail: aulia-fikri-h@yahoo.co.id; Fikri, Aulia, E-mail: a.rosikhin86@yahoo.co.id; Noor, Fatimah A., E-mail: fatimah@fi.itb.ac.id; Winata, Toto, E-mail: toto@fi.itb.ac.id [Departement of Physics, Institute of Technology Bandung, Tamansari 64 Street, East Java (Indonesia)
2016-02-08
In this study, an atomistic simulation of the initial stage of graphene growth on Ni (100) via the chemical vapor deposition method has been developed. The C-C interaction was described by the Tersoff potential, while the Ni-Ni interaction was specified by EAM (embedded atom method). A very simple interatomic potential was used to describe the Ni-C interaction during the deposition process. The simulation shows that graphene formation does not occur through a combined deposition mechanism on the Ni substrate but via C segregation; that is, amorphous Ni-C is the source for graphene growth during cooling of the Ni substrate. This result agrees with experiments and with tight-binding and quantum mechanics simulations.
Simulation of Fatigue Crack Initiation at Corrosion Pits With EDM Notches
Smith, Stephen W.; Newman, John A.; Piascik, Robert S.
2003-01-01
Uniaxial fatigue tests were conducted to compare the fatigue life of laboratory-produced corrosion pits, similar to those observed in the shuttle main landing gear wheel bolt-hole, and an electro-discharge-machined (EDM) flaw. EDM flaws are used to simulate corrosion pits during shuttle wheel (dynamometer) testing. The aluminum alloy (AA 7050) laboratory fatigue tests were conducted to simulate the local stress level contained in the wheel bolt-hole. Under this high local stress condition, the EDM notch produced a fatigue life similar to test specimens containing corrosion pits of similar size. Based on the laboratory fatigue test results, the EDM flaw (a semi-circular disc-shaped notch) produces a local stress state similar to corrosion pits and can be used to simulate a corrosion pit during the shuttle wheel dynamometer tests.
Simulation and initial experiments of a high power pulsed TEA CO2 laser
Torabi, R.; Saghafifar, H.; Koushki, A. M.; Ganjovi, A. A.
2016-01-01
In this paper, the output characteristics of a UV pin-array pre-ionized TEA CO2 laser are simulated and compared with the associated experimental data. In our simulation, a new theoretical model was developed for transient analysis of the discharge current pulse. The laser discharge tube was modeled by a nonlinear RLC electric circuit as a realistic model for electron density calculation. This model was coupled with a six-temperature model (6TM) in order to simulate the dynamic emission processes of the TEA CO2 laser. The equations were solved numerically with the fourth-order Runge-Kutta method, and important variables such as the current and voltage of the main discharge, the resistance of the plasma column, and the electron density in the main discharge region were calculated as functions of time. The effects of the non-dissociation factor, rotational quantum number and output coupler reflectivity were also studied theoretically. The experimental and simulation results are in good agreement.
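The lumped-circuit part of such a model can be sketched as a series RLC discharge integrated with the fourth-order Runge-Kutta method. The component values and the constant resistance below are illustrative placeholders; in the paper the plasma resistance is nonlinear and time dependent, and the circuit is further coupled to the six-temperature model.

```python
# Minimal RK4 integration of a series RLC discharge loop, the kind of
# lumped-element circuit used to model a TEA laser's main discharge.
# All component values are illustrative assumptions.

L = 1.0e-6    # loop inductance [H]
R = 0.05      # (assumed constant) plasma + circuit resistance [ohm]
C = 100e-9    # storage capacitance [F]

def deriv(state):
    q, i = state                 # capacitor charge [C], loop current [A]
    return (i, -(R * i + q / C) / L)

def rk4_step(state, dt):
    k1 = deriv(state)
    k2 = deriv(tuple(s + 0.5*dt*k for s, k in zip(state, k1)))
    k3 = deriv(tuple(s + 0.5*dt*k for s, k in zip(state, k2)))
    k4 = deriv(tuple(s + dt*k for s, k in zip(state, k3)))
    return tuple(s + dt*(a + 2*b + 2*c + d)/6.0
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (C * 30e3, 0.0)          # capacitor charged to 30 kV, zero current
dt, steps = 1e-9, 2000           # 1 ns step over ~one ringing period
peak = 0.0
for _ in range(steps):
    state = rk4_step(state, dt)
    peak = max(peak, abs(state[1]))
print("peak discharge current ~ %.2f kA" % (peak / 1e3))
```

For these placeholder values the ringing peak is close to the analytic estimate V0*sqrt(C/L), a quick sanity check on the integrator before a nonlinear plasma resistance is substituted.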
Products of Ozone-Initiated Chemistry in a Simulated Aircraft Environment
DEFF Research Database (Denmark)
Wisthaler, Armin; Tamás, Gyöngyi; Wyon, David P.
2005-01-01
We used proton-transfer-reaction mass spectrometry (PTR-MS) to examine the products formed when ozone reacted with the materials in a simulated aircraft cabin, including a loaded high-efficiency particulate air (HEPA) filter in the return air system. Four conditions were examined: cabin (baseline...
Influence of the initial conditions for the numerical simulation of two-phase slug flow
Energy Technology Data Exchange (ETDEWEB)
Pachas Napa, Alex A.; Morales, Rigoberto E.M.; Medina, Cesar D. Perea
2010-07-01
Multiphase flows in pipelines commonly show several patterns depending on the flow rate, geometry and physical properties of the phases. In oil production, the slug flow pattern is the most common. This flow pattern is characterized by an intermittent succession in space and time of an aerated liquid slug and an elongated gas bubble with a liquid film. Slug flow is studied through a slug tracking model, which is one-dimensional and formulated in a Lagrangian frame. In the model, the mass and momentum balance equations are applied in control volumes constituted by the gas bubble and the liquid slug. Initial conditions that reproduce the intermittence of the flow pattern must be determined; they are given by a sequence of flow properties for each unit cell. Since the unit-cell properties should reflect this intermittence, they can be analyzed in statistical terms, and statistical distributions should therefore be obtained for the slug flow variables. The distributions are complemented with the mass balance and the bubble design model. The objective of the present work is to obtain initial conditions for the slug tracking model that better reproduce the fluctuating properties for different pipe inclinations (horizontal, vertical or inclined). The numerical results are compared with experimental data obtained by PFG/FEM/UNICAMP for air-water flow at 0 deg, 45 deg and 90 deg, and good agreement is observed. (author)
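Statistically distributed initial conditions of the kind described can be sketched as follows: each unit cell is assigned a slug length and a bubble length drawn from a distribution. The log-normal form and all numerical values are placeholder assumptions for illustration, not the PFG/FEM/UNICAMP data or the paper's fitted distributions.

```python
# Illustrative intermittent initial conditions for a slug tracking model:
# one (slug length, bubble length) pair per unit cell, sampled from
# log-normal distributions (a common choice for slug statistics).
import random

random.seed(1)
N_CELLS = 50
D = 0.026                        # pipe diameter [m], placeholder

def unit_cells():
    cells = []
    for _ in range(N_CELLS):
        ls = random.lognormvariate(0.0, 0.3) * 12 * D   # slug length [m]
        lb = random.lognormvariate(0.0, 0.5) * 25 * D   # bubble length [m]
        cells.append({"L_slug": ls, "L_bubble": lb})
    return cells

cells = unit_cells()
mean_ls = sum(c["L_slug"] for c in cells) / N_CELLS
print("mean slug length / D = %.1f" % (mean_ls / D))
```

In a real setup the sampled sequence would additionally be constrained by the mass balance and the bubble design model, as the abstract notes.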
"Gaze Leading": Initiating Simulated Joint Attention Influences Eye Movements and Choice Behavior
Bayliss, Andrew P.; Murphy, Emily; Naughtin, Claire K.; Kritikos, Ada; Schilbach, Leonhard; Becker, Stefanie I.
2013-01-01
Recent research in adults has made great use of the gaze cuing paradigm to understand the behavior of the follower in joint attention episodes. We implemented a gaze leading task to investigate the initiator--the other person in these triadic interactions. In a series of gaze-contingent eye-tracking studies, we show that fixation dwell time upon…
Diverse Regular Employees and Non-regular Employment (Japanese)
MORISHIMA Motohiro
2011-01-01
Currently there are high expectations for the introduction of policies related to diverse regular employees. These policies are a response to the problem of disparities between regular and non-regular employees (part-time, temporary, contract and other non-regular employees) and will make it more likely that workers can balance work and their private lives while companies benefit from the advantages of regular employment. In this paper, I look at two issues that underlie this discussion. The ...
Sparse structure regularized ranking
Wang, Jim Jing-Yan
2014-04-17
Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse structure, we assume that each multimedia object could be represented as a sparse linear combination of all other objects, and the combination coefficients are regarded as a similarity measure between objects and used to regularize their ranking scores. Moreover, we propose to learn the sparse combination coefficients and the ranking scores simultaneously. A unified objective function is constructed with regard to both the combination coefficients and the ranking scores, and is optimized by an iterative algorithm. Experiments on two multimedia database retrieval data sets demonstrate significant improvements of the proposed algorithm over state-of-the-art ranking score learning algorithms.
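The regularization idea can be sketched on a toy problem: each object's score is pulled toward the score of the objects that sparsely reconstruct it. The coefficients below are hand-picked placeholders, and only the score update is shown; in the paper the sparse coefficients are learned jointly with the scores.

```python
# Toy sketch of ranking-score regularization by sparse reconstruction
# coefficients: objects 0/1 reconstruct each other, as do objects 2/3,
# so their scores are smoothed toward each other while staying close
# to the initial query relevances y.

y = [1.0, 0.8, 0.1, 0.0]         # initial (query relevance) scores
s = [                            # sparse combination coefficients (fixed here)
    [0.0, 1.0, 0.0, 0.0],
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 1.0, 0.0],
]
alpha, lr = 0.5, 0.1
f = y[:]
for _ in range(500):
    recon = [sum(s[i][j] * f[j] for j in range(4)) for i in range(4)]
    # gradient of ||f - y||^2 + alpha * sum_i (f_i - recon_i)^2 in f_i,
    # treating recon as fixed per step (a simplification of the sketch)
    grad = [2*(f[i] - y[i]) + 2*alpha*(f[i] - recon[i]) for i in range(4)]
    f = [f[i] - lr*grad[i] for i in range(4)]
print("regularized scores:", f)
```

Objects that reconstruct each other end up with closer scores than they started with, which is the regularization effect the abstract describes.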
'Regular' and 'emergency' repair
International Nuclear Information System (INIS)
Luchnik, N.V.
1975-01-01
Experiments on the combined action of radiation and a DNA inhibitor using Crepis roots and on split-dose irradiation of human lymphocytes lead to the conclusion that there are two types of repair. The 'regular' repair takes place twice in each mitotic cycle and ensures the maintenance of genetic stability. The 'emergency' repair is induced at all stages of the mitotic cycle by high levels of injury. (author)
Regularization of divergent integrals
Felder, Giovanni; Kazhdan, David
2016-01-01
We study the Hadamard finite part of divergent integrals of differential forms with singularities on submanifolds. We give formulae for the dependence of the finite part on the choice of regularization and express them in terms of a suitable local residue map. The cases where the submanifold is a complex hypersurface in a complex manifold and where it is a boundary component of a manifold with boundary, arising in string perturbation theory, are treated in more detail.
Regularizing portfolio optimization
International Nuclear Information System (INIS)
Still, Susanne; Kondor, Imre
2010-01-01
The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
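The "diversification pressure" of the L2 regularizer can be illustrated on a two-asset minimum-variance problem: minimizing w'Cw + lam*w'w under the budget constraint gives weights proportional to (C + lam*I)^{-1} applied to the ones vector, and larger lam pushes the solution toward equal weights. The covariance numbers are illustrative, and this is the plain variance objective rather than the expected-shortfall formulation the paper analyzes.

```python
# Ridge-regularized minimum-variance portfolio, two assets, closed form.
# Larger lam -> weights closer to (0.5, 0.5): diversification pressure.

def ridge_min_variance(C, lam):
    a = C[0][0] + lam; b = C[0][1]
    c = C[1][0];       d = C[1][1] + lam
    det = a*d - b*c
    u = ((d - b)/det, (a - c)/det)   # (C + lam*I)^{-1} applied to (1, 1)
    s = u[0] + u[1]
    return (u[0]/s, u[1]/s)          # normalized so weights sum to 1

C = [[0.04, 0.006], [0.006, 0.09]]   # sample covariance; asset 2 more volatile
for lam in (0.0, 0.05, 1.0):
    w = ridge_min_variance(C, lam)
    print("lam=%.2f  w=(%.3f, %.3f)" % (lam, w[0], w[1]))
```

At lam=0 the portfolio leans heavily on the less volatile asset; as lam grows the weights move toward the equal-weight portfolio, trading sample optimality for stability.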
Saputra, K. V. I.; Cahyadi, L.; Sembiring, U. A.
2018-01-01
In this paper, we assess our traditional elementary statistics education and introduce elementary statistics with simulation-based inference. To assess our statistics class, we adapt the well-known CAOS (Comprehensive Assessment of Outcomes in Statistics) test, which serves as an external measure of students' basic statistical literacy and is generally accepted as such a measure. We also introduce a new teaching method for the elementary statistics class: unlike the traditional course, we use a simulation-based inference method to conduct hypothesis testing. The literature shows that this teaching method works very well in increasing students' understanding of statistics.
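A minimal example of the simulation-based inference used in such a course is a permutation test: instead of consulting a t-table, the null distribution of the difference in group means is built by repeatedly reshuffling the group labels. The data below are made up for illustration.

```python
# Permutation test for a difference in means, built by simulation.
import random

random.seed(0)
group_a = [5.1, 4.9, 6.2, 5.8, 6.0, 5.5]
group_b = [4.2, 4.8, 4.5, 5.0, 4.1, 4.6]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(group_a) - mean(group_b)
pooled = group_a + group_b
count = 0
N_SIM = 10_000
for _ in range(N_SIM):
    random.shuffle(pooled)                    # break any group structure
    sim = mean(pooled[:6]) - mean(pooled[6:]) # difference under the null
    if sim >= observed:
        count += 1
p_value = count / N_SIM
print("observed diff = %.2f, simulated one-sided p = %.4f" % (observed, p_value))
```

The student-facing logic is transparent: the p-value is just the fraction of label shuffles that produce a difference at least as large as the observed one.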
Measurement of initial soil moisture conditions for purposes of rainfall simulation experiments
TEREZA, Davidová; VÁCLAV, David
2015-01-01
The research on rainfall-runoff processes has become even more important in recent decades with respect to both flood and drought events, as well as to the expected impacts of anticipated climate change. It is researched in different ways and at different scales according to the purpose. The rainfall simulator developed at the Department of Irrigation, Drainage and Landscape Engineering is being used for detailed analysis of the rainfall-runoff process in order to research the infiltration process w...
Development and control of a three-axis satellite simulator for the bifocal relay mirror initiative
Chernesky, Vincent S.
2001-01-01
The Three Axis Satellite Simulator (TASS) is a 4-foot diameter octagonal platform supported on a spherical air bearing. The platform hosts several satellite subsystems, including rate gyros, reaction wheels, thrusters, sun sensors, and an onboard control computer. This free-floating design allows for realistic emulation of satellite attitude dynamics in a laboratory environment. The bifocal relay mirror spacecraft system is composed of two optically coupled telescopes used to redirect the las...
Regular Single Valued Neutrosophic Hypergraphs
Directory of Open Access Journals (Sweden)
Muhammad Aslam Malik
2016-12-01
Full Text Available In this paper, we define the regular and totally regular single valued neutrosophic hypergraphs, and discuss the order and size along with properties of regular and totally regular single valued neutrosophic hypergraphs. We also extend work on completeness of single valued neutrosophic hypergraphs.
The geometry of continuum regularization
International Nuclear Information System (INIS)
Halpern, M.B.
1987-03-01
This lecture is primarily an introduction to coordinate-invariant regularization, a recent advance in the continuum regularization program. In this context, the program is seen as fundamentally geometric, with all regularization contained in regularized DeWitt superstructures on field deformations
Multiple graph regularized protein domain ranking
Wang, Jim Jing-Yan
2012-11-19
Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods.Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods.Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.
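The alternating scheme can be sketched on a toy example: scores are smoothed over a weighted combination of several affinity graphs, then the graph weights are re-fit to favor graphs on which the current scores are smooth. The graphs, the smoothing rule and the weight update below are illustrative placeholders, not MultiG-Rank's actual objective.

```python
# Toy sketch of multiple-graph regularized ranking with alternating
# updates of the scores f and the graph weights mu. Graphs are made up.

y = [1.0, 0.0, 0.0, 0.0]                 # query hits node 0
graphs = [
    [[0,1,0,0],[1,0,1,0],[0,1,0,0],[0,0,0,0]],   # graph 1: chain 0-1-2
    [[0,0,0,1],[0,0,0,0],[0,0,0,0],[1,0,0,0]],   # graph 2: edge 0-3
]
mu = [0.5, 0.5]                          # graph weights, kept on the simplex
alpha = 0.9
f = y[:]
for _ in range(50):
    # score update: smooth f over the mu-weighted combined graph
    W = [[sum(mu[m]*graphs[m][i][j] for m in range(2)) for j in range(4)]
         for i in range(4)]
    deg = [max(sum(row), 1e-9) for row in W]
    f = [(1-alpha)*y[i] + alpha*sum(W[i][j]*f[j] for j in range(4))/deg[i]
         for i in range(4)]
    # weight update: graphs with a smaller smoothness penalty get more weight
    pen = [sum(graphs[m][i][j]*(f[i]-f[j])**2
               for i in range(4) for j in range(4)) for m in range(2)]
    inv = [1.0/(p + 1e-6) for p in pen]
    mu = [v/sum(inv) for v in inv]
print("scores:", f, " graph weights:", mu)
```

The structural point survives the simplification: no single graph is trusted a priori, and the weights and scores are refined against each other, which is what makes the combined regularizer robust to a poor graph choice.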
Equilibrium initial data for moving puncture simulations: the stationary 1 + log slicing
International Nuclear Information System (INIS)
Baumgarte, T W; Matera, K; Etienne, Z B; Liu, Y T; Shapiro, S L; Taniguchi, K; Murchadha, N O
2009-01-01
We discuss a 'stationary 1 + log' slicing condition for the construction of solutions to Einstein's constraint equations. For stationary spacetimes, these initial data give a stationary foliation when evolved with 'moving puncture' gauge conditions that are often used in black hole evolutions. The resulting slicing is time independent and agrees with the slicing obtained by dragging the initial slice along a timelike Killing vector of the spacetime. When these initial data are evolved with moving puncture gauge conditions, numerical errors arising from coordinate evolution should be minimized. While these properties appear very promising, suggesting that this slicing condition should be an attractive alternative to, for example, maximal slicing, we demonstrate in this paper that solutions can be constructed only for a small class of problems. For binary black hole initial data, in particular, it is often assumed that there exists an approximate helical Killing vector that generates the binary's orbit. We show that 1 + log slices that are stationary with respect to such a helical Killing vector cannot be asymptotically flat, unless the spacetime possesses an additional axial Killing vector.
Kim, Seokpum; Wei, Yaochi; Horie, Yasuyuki; Zhou, Min
2018-05-01
The design of new materials requires establishment of macroscopic measures of material performance as functions of microstructure. Traditionally, this process has been an empirical endeavor. An approach to computationally predict the probabilistic ignition thresholds of polymer-bonded explosives (PBXs) using mesoscale simulations is developed. The simulations explicitly account for microstructure, constituent properties, and interfacial responses and capture processes responsible for the development of hotspots and damage. The specific mechanisms tracked include viscoelasticity, viscoplasticity, fracture, post-fracture contact, frictional heating, and heat conduction. The probabilistic analysis uses sets of statistically similar microstructure samples to directly mimic relevant experiments for quantification of statistical variations of material behavior due to inherent material heterogeneities. The predicted thresholds and ignition probabilities are expressed as James-type and Walker-Wasley-type relations, leading to explicit analytical expressions for the ignition probability as a function of loading. Specifically, the ignition thresholds corresponding to any given level of ignition probability and ignition probability maps are predicted for PBX 9404 for the loading regime of Up = 200-1200 m/s, where Up is the particle speed. The predicted results are in good agreement with available experimental measurements. A parametric study also shows that binder properties can significantly affect the macroscopic ignition behavior of PBXs. The capability to computationally predict the macroscopic engineering material response relations from material microstructures and basic constituent and interfacial properties lends itself to the design of new materials as well as the analysis of existing materials.
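A Walker-Wasley-type threshold is commonly written as a critical value of p²τ (pressure squared times pulse duration); the probabilistic form described above can then be sketched by letting that critical constant scatter across statistically similar microstructure samples. The following is a minimal illustration with hypothetical parameter values, not the calibrated PBX 9404 relations from the paper.

```python
import math

def walker_wasley_go(p, tau, c_crit):
    """Deterministic Walker-Wasley-type criterion: ignition when p^2 * tau
    reaches a critical material constant (p: pressure, tau: pulse duration)."""
    return p * p * tau >= c_crit

def ignition_probability(p, tau, c50, sigma):
    """Probabilistic version: the critical constant is assumed to scatter
    log-normally across statistically similar microstructure samples.
    c50 is the 50%-ignition level and sigma the scatter width; both are
    hypothetical, not calibrated PBX 9404 values."""
    x = (math.log(p * p * tau) - math.log(c50)) / sigma
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
```

Sweeping p at fixed τ traces one row of an ignition-probability map of the kind the abstract describes.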
Annotation of Regular Polysemy
DEFF Research Database (Denmark)
Martinez Alonso, Hector
Regular polysemy has received a lot of attention from the theory of lexical semantics and from computational linguistics. However, there is no consensus on how to represent the sense of underspecified examples at the token level, namely when annotating or disambiguating senses of metonymic words...... and metonymic. We have conducted an analysis in English, Danish and Spanish. Later on, we have tried to replicate the human judgments by means of unsupervised and semi-supervised sense prediction. The automatic sense-prediction systems have been unable to find empiric evidence for the underspecified sense, even...
Regularity of Minimal Surfaces
Dierkes, Ulrich; Tromba, Anthony J; Kuster, Albrecht
2010-01-01
"Regularity of Minimal Surfaces" begins with a survey of minimal surfaces with free boundaries. Following this, the basic results concerning the boundary behaviour of minimal surfaces and H-surfaces with fixed or free boundaries are studied. In particular, the asymptotic expansions at interior and boundary branch points are derived, leading to general Gauss-Bonnet formulas. Furthermore, gradient estimates and asymptotic expansions for minimal surfaces with only piecewise smooth boundaries are obtained. One of the main features of free boundary value problems for minimal surfaces is t
Regularities of radiation heredity
International Nuclear Information System (INIS)
Skakov, M.K.; Melikhov, V.D.
2001-01-01
The regularities of radiation heredity in metals and alloys are analyzed, and it is concluded that irradiation produces thermodynamically irreversible changes in the structure of materials. Possible mechanisms by which radiation effects are inherited through high-temperature transformations in the materials are proposed. The phenomenon of radiation heredity may be put to practical use to control the structure of liquid metal and, correspondingly, the structure of the ingot via preliminary radiation treatment of the charge. Concentration microheterogeneities in the material defect structure induced by preliminary irradiation represent the genetic factor of radiation heredity [ru]
Caporali, E.; Chiarello, V.; Galeati, G.
2014-12-01
Peak discharge estimates for a given return period are of primary importance in engineering practice for risk assessment and hydraulic structure design. Different statistical methods are chosen here for the assessment of the flood frequency curve: one indirect technique based on extreme rainfall event analysis, and the Peak Over Threshold (POT) model and the Annual Maxima approach as direct techniques using river discharge data. In the framework of the indirect method, a Monte Carlo simulation approach is adopted to determine a derived frequency distribution of peak runoff using a probabilistic formulation of the SCS-CN method as the stochastic rainfall-runoff model. A Monte Carlo simulation is used to generate a sample of different runoff events from different stochastic combinations of rainfall depth, storm duration, and initial loss inputs. The distribution of the rainfall storm events is assumed to follow the GP law, whose parameters are estimated from the GEV parameters of the annual maximum data. The evaluation of the initial abstraction ratio is investigated since it is one of the most questionable assumptions in the SCS-CN model and plays a key role in river basins characterized by high-permeability soils, mainly governed by the infiltration excess mechanism. In order to take into account the uncertainty of the model parameters, a modified approach that is able to revise and re-evaluate the original value of the initial abstraction ratio is implemented. In the POT model the choice of the threshold is an essential issue, mainly based on a compromise between bias and variance. The Generalized Extreme Value (GEV) distribution fitted to the annual maxima discharges is therefore compared with the Pareto distributed peaks to check the suitability of the frequency of occurrence representation. The methodology is applied to a large dam in the Serchio river basin, located in the Tuscany Region. The application has shown that the Monte Carlo simulation technique can be a useful
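The indirect method above can be sketched as: draw storm depths from a generalized Pareto law, push each through the SCS-CN rainfall-runoff transformation, and read design quantiles off the empirical runoff distribution. A minimal sketch with hypothetical GP and curve-number parameters (the paper estimates these from Serchio basin data); the storm-duration and initial-abstraction re-evaluation steps are omitted.

```python
import numpy as np

def scs_cn_runoff(P, CN, lam=0.2):
    """SCS-CN rainfall-runoff transformation (depths in mm):
    S = 25400/CN - 254, Ia = lam * S, Q = (P - Ia)^2 / (P - Ia + S) for P > Ia."""
    S = 25400.0 / CN - 254.0
    excess = np.maximum(np.asarray(P, float) - lam * S, 0.0)
    return np.where(excess > 0.0, excess**2 / (excess + S), 0.0)

def simulated_peak_quantile(n, CN, lam, return_period, gp_shape, gp_scale, seed=0):
    """Monte Carlo sketch of the derived-distribution approach: storm depths
    drawn from a generalized Pareto (GP) law are transformed through SCS-CN,
    and the design quantile is read off the empirical runoff distribution."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)
    # GP inverse CDF (shape != 0): P = scale/shape * ((1 - u)^(-shape) - 1)
    depths = gp_scale / gp_shape * ((1.0 - u) ** (-gp_shape) - 1.0)
    runoff = scs_cn_runoff(depths, CN, lam)
    return float(np.quantile(runoff, 1.0 - 1.0 / return_period))
```

Comparing quantiles for several return periods gives a derived flood frequency curve that can be set against the POT and Annual Maxima fits.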
Energy Technology Data Exchange (ETDEWEB)
Soltz, R. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Danagoulian, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sheets, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Korbly, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hartouni, E. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2013-05-22
Theoretical calculations indicate that the value of the Feynman variance, Y2F, for the emitted distribution of neutrons from fissionable material exhibits a strong monotonic dependence on the multiplication, M, of a quantity of special nuclear material. In 2012 we performed a series of measurements at the Passport Inc. facility using a 9-MeV bremsstrahlung CW beam of photons incident on small quantities of uranium with liquid scintillator detectors. For the set of objects studied we observed deviations from the expected monotonic dependence, and these deviations were later confirmed by MCNP simulations. In this report, we modify the theory to account for the contribution from the initial photofission and benchmark the new theory with a series of MCNP simulations on DU, LEU, and HEU objects spanning a wide range of masses and multiplication values.
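The Feynman variance underlying Y2F is the variance-to-mean excess of neutron counts collected in fixed time gates: Poisson-distributed counts give Y ≈ 0, while correlated fission chains in multiplying material push Y above zero. A minimal sketch of the statistic itself; the dependence on multiplication M and the photofission correction discussed above are beyond this snippet.

```python
import numpy as np

def feynman_y(counts):
    """Feynman variance-to-mean excess for neutron counts in fixed time gates:
    Y = Var(c) / Mean(c) - 1. A purely Poisson (non-multiplying) source gives
    Y ≈ 0; correlated fission chains give Y > 0."""
    c = np.asarray(counts, float)
    return c.var() / c.mean() - 1.0
```

In practice Y is evaluated as a function of gate width and fit to the point-kinetics expression to extract multiplication.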
Initial Self-Consistent 3D Electron-Cloud Simulations of the LHC Beam with the Code WARP+POSINST
International Nuclear Information System (INIS)
Vay, J; Furman, M A; Cohen, R H; Friedman, A; Grote, D P
2005-01-01
We present initial results for the self-consistent beam-cloud dynamics simulations for a sample LHC beam, using a newly developed set of modeling capabilities based on a merge [1] of the three-dimensional parallel Particle-In-Cell (PIC) accelerator code WARP [2] and the electron-cloud code POSINST [3]. Although the storage ring model we use as a test bed to contain the beam is much simpler and shorter than the LHC, its lattice elements are realistically modeled, as is the beam and the electron cloud dynamics. The simulated mechanisms for generation and absorption of the electrons at the walls are based on previously validated models available in POSINST [3, 4].
Energy Technology Data Exchange (ETDEWEB)
Pascau, Javier, E-mail: jpascau@mce.hggm.es [Unidad de Medicina y Cirugia Experimental, Hospital General Universitario Gregorio Maranon, Madrid (Spain); Departamento de Bioingenieria e Ingenieria Aeroespacial, Universidad Carlos III de Madrid, Madrid (Spain); Santos Miranda, Juan Antonio [Servicio de Oncologia Radioterapica, Hospital General Universitario Gregorio Maranon, Madrid (Spain); Facultad de Medicina, Universidad Complutense de Madrid, Madrid (Spain); Calvo, Felipe A. [Servicio de Oncologia Radioterapica, Hospital General Universitario Gregorio Maranon, Madrid (Spain); Facultad de Medicina, Universidad Complutense de Madrid, Madrid (Spain); Departamento de Oncologia, Hospital General Universitario Gregorio Maranon, Madrid (Spain); Bouche, Ana; Morillo, Virgina [Consorcio Hospitalario Provincial de Castellon, Castellon (Spain); Gonzalez-San Segundo, Carmen [Servicio de Oncologia Radioterapica, Hospital General Universitario Gregorio Maranon, Madrid (Spain); Facultad de Medicina, Universidad Complutense de Madrid, Madrid (Spain); Ferrer, Carlos; Lopez Tarjuelo, Juan [Consorcio Hospitalario Provincial de Castellon, Castellon (Spain); and others
2012-06-01
Purpose: Intraoperative electron beam radiation therapy (IOERT) involves a modified strategy of conventional radiation therapy and surgery. The lack of specific planning tools limits the spread of this technique. The purpose of the present study is to describe a new simulation and planning tool and its initial evaluation by clinical users. Methods and Materials: The tool works on a preoperative computed tomography scan. A physician contours regions to be treated and protected and simulates applicator positioning, calculating isodoses and the corresponding dose-volume histograms depending on the selected electron energy. Three radiation oncologists evaluated data from 15 IOERT patients, including different tumor locations. Segmentation masks, applicator positions, and treatment parameters were compared. Results: High parameter agreement was found in the following cases: three breast and three rectal cancer, retroperitoneal sarcoma, and rectal and ovary monotopic recurrences. All radiation oncologists performed similar segmentations of tumors and high-risk areas. The average applicator position difference was 1.2 ± 0.95 cm. The remaining cancer sites showed higher deviations because of differences in the criteria for segmenting high-risk areas (one rectal, one pancreas) and different surgical access simulated (two rectal, one Ewing sarcoma). Conclusions: The results show that this new tool can be used to simulate IOERT cases involving different anatomic locations, and that preplanning has to be carried out with specialized surgical input.
Energy Technology Data Exchange (ETDEWEB)
Ni, B Y; Wu, G X, E-mail: g.wu@ucl.ac.uk [College of Shipbuilding Engineering, Harbin Engineering University, Harbin 150001 (China)
2017-08-15
The free water exit of an initially fully submerged buoyant spheroid in an axisymmetric flow, which is driven by the difference between the vertical fluid force and gravity, is investigated. The fluid is assumed to be incompressible and inviscid, and the flow to be irrotational. The velocity potential theory is adopted together with fully nonlinear boundary conditions on the free surface. The surface tension is neglected and the pressure is taken as constant on the free surface. The acceleration of the body at each time step is obtained as part of the solution. Its nonlinear mutual dependence on the fluid force is decoupled through the auxiliary function method. The free-surface breakup by body penetration and water detachment from the body are treated through numerical conditions. The slender body theory based on the zero potential assumption on the undisturbed flat free surface is adopted, through which a condition for full water exit of a spheroid is obtained. Comparison is made between the results from the slender body theory and from the fully nonlinear theory through the boundary-element method, and good agreement is found when the spheroid is slender. Extensive case studies are undertaken to investigate the effects of body density, dimensions and the initial submergence. (paper)
DEFF Research Database (Denmark)
Sousa, Tiago M; Morais, Hugo; Castro, R.
2014-01-01
scheduling problem. Therefore, the use of metaheuristics is required to obtain good solutions in a reasonable amount of time. This paper proposes two new heuristics, called naive electric vehicles charge and discharge allocation and generation tournament based on cost, developed to obtain an initial solution...... to be used in the energy resource scheduling methodology based on simulated annealing previously developed by the authors. The case study considers two scenarios with 1000 and 2000 electric vehicles connected in a distribution network. The proposed heuristics are compared with a deterministic approach...
Wasklewicz, Thad; Zhu, Zhen; Gares, Paul
2017-12-01
Rapid technological advances, sustained funding, and a greater recognition of the value of topographic data have helped develop an increasing archive of topographic data sources. Advances in basic and applied research related to Earth surface changes require researchers to integrate recent high-resolution topography (HRT) data with the legacy datasets. Several technical challenges and data uncertainty issues persist to date when integrating legacy datasets with more recent HRT data. The disparate data sources required to extend the topographic record back in time are often stored in formats that are not readily compatible with more recent HRT data. Legacy data may also contain unknown error or unreported error that make accounting for data uncertainty difficult. There are also cases of known deficiencies in legacy datasets, which can significantly bias results. Finally, scientists are faced with the daunting challenge of definitively deriving the extent to which a landform or landscape has changed or will continue to change in response to natural and/or anthropogenic processes. Here, we examine the question: how do we evaluate and portray data uncertainty from the varied topographic legacy sources and combine this uncertainty with current spatial data collection techniques to detect meaningful topographic changes? We view topographic uncertainty as a stochastic process that takes into consideration spatial and temporal variations from a numerical simulation and physical modeling experiment. The numerical simulation incorporates numerous topographic data sources typically found across a range of legacy data to present high-resolution data, while the physical model focuses on more recent HRT data acquisition techniques. Elevation uncertainties observed from anchor points in the digital terrain models are modeled using "states" in a stochastic estimator. Stochastic estimators trace the temporal evolution of the uncertainties and are natively capable of incorporating sensor
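One concrete reading of "elevation uncertainties modeled as states in a stochastic estimator" is a per-point Kalman filter: the elevation at an anchor point is the state, its variance is the tracked uncertainty, and each survey epoch (legacy or HRT) is an observation with its own error variance. A minimal scalar sketch under that assumption; the estimator and noise models in the study are richer.

```python
def kalman_elevation(z_obs, r_obs, z0, p0, q=0.0):
    """Scalar Kalman filter for one anchor point. z_obs/r_obs: per-epoch
    elevation measurements and their error variances (legacy surveys first,
    HRT later); z0/p0: prior elevation and variance; q: process noise that
    admits real topographic change between epochs.
    Returns (estimate, variance) after assimilating all epochs."""
    z, p = z0, p0
    for zk, rk in zip(z_obs, r_obs):
        p += q                      # predict: uncertainty grows between epochs
        k = p / (p + rk)            # Kalman gain: weight of the new observation
        z += k * (zk - z)           # update the estimate toward the measurement
        p *= (1.0 - k)              # uncertainty shrinks after the update
    return z, p
```

A change-detection test can then compare the innovation (zk - z) against the combined standard deviation sqrt(p + rk) before accepting an epoch as "no change".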
Energy Technology Data Exchange (ETDEWEB)
Zhai, Ziqing [Pacific Northwest National Laboratory, 622 Horn Rapids Road, P.O. Box 999, Richland, Washington 99352.; Toloczko, Mychailo [Pacific Northwest National Laboratory, 622 Horn Rapids Road, P.O. Box 999, Richland, Washington 99352.; Kruska, Karen [Pacific Northwest National Laboratory, 622 Horn Rapids Road, P.O. Box 999, Richland, Washington 99352.; Bruemmer, Stephen [Pacific Northwest National Laboratory, 622 Horn Rapids Road, P.O. Box 999, Richland, Washington 99352.
2017-05-22
Stress corrosion crack initiation of two thermally-treated, cold-worked (CW) alloy 690 (UNS N06690) materials was investigated in 360 °C simulated PWR primary water using constant load tensile (CLT) tests and blunt notch compact tension (BNCT) tests equipped with direct current potential drop (DCPD) for in-situ detection of cracking. SCC initiation was not detected by DCPD for either the 21% or 31%CW CLT specimens loaded at their yield stress after ~9,220 hours; however, intergranular (IG) precursor damage and isolated surface cracks were observed on the specimens. The two 31%CW BNCT specimens loaded at moderate stress intensity after several cyclic loading ramps showed DCPD-indicated crack initiation after 10,400 hours of exposure at constant stress intensity, which resulted from significant growth of IG cracks. The 21%CW BNCT specimens only exhibited isolated small IG surface cracks and showed no apparent DCPD change throughout the test. Post-test cross-section examinations revealed many grain boundary (GB) nano-cavities in the bulk of all the CLT and BNCT specimens, particularly for the 31%CW materials. Cavities were also found along GBs extending to the surface, suggesting an important role in crack nucleation. This paper provides an overview of the evolution of GB cavities and discusses their effects on crack initiation in CW alloy 690.
Celik, Cihangir
-scale technologies. Prevention of SEEs has been studied and applied in the semiconductor industry by including radiation protection precautions in the system architecture or by using corrective algorithms in the system operation. Decreasing the 10B content (20% of natural boron) in the borophosphosilicate glass (BPSG) layers that are conventionally used in the fabrication of semiconductor devices was one of the major radiation protection approaches for the system architecture. Neutron interaction in the BPSG layer was the origin of the SEEs because of the 10B(n,α)7Li reaction products. Both of the particles produced have the capability of ionization in the silicon substrate region, whose thickness is comparable to the ranges of these particles. Using the soft error phenomenon in exactly the opposite manner of the semiconductor industry can provide a new neutron detection system based on the SERs in the semiconductor memories. By investigating the soft error mechanisms in the available semiconductor memories and enhancing the soft error occurrences in these devices, one can convert all memory-using intelligent systems into portable, power-efficient, direction-dependent neutron detectors. The Neutron Intercepting Silicon Chip (NISC) project aims to achieve this goal by introducing 10B-enriched BPSG layers to the semiconductor memory architectures. This research addresses the development of a simulation tool, the NISC Soft Error Analysis Tool (NISCSAT), for soft error modeling and analysis in the semiconductor memories to provide basic design considerations for the NISC. NISCSAT performs particle transport and calculates the soft error probabilities, or SER, depending on energy depositions of the particles in a given memory node model of the NISC. Soft error measurements were performed with commercially available, off-the-shelf semiconductor memories and microprocessors to observe soft error variations with the neutron flux and memory supply voltage.
Measurement
Initial quality performance results using a phantom to simulate chest computed radiography
Directory of Open Access Journals (Sweden)
Muhogora Wilbroad
2011-01-01
Full Text Available The aim of this study was to develop a homemade phantom for quantitative quality control in chest computed radiography (CR). The phantom was constructed from copper, aluminium, and polymethylmethacrylate (PMMA) plates as well as Styrofoam materials. Depending on combinations, the literature suggests that these materials can simulate the attenuation and scattering characteristics of lung, heart, and mediastinum. The lung, heart, and mediastinum regions were simulated by 10 mm x 10 mm x 0.5 mm, 10 mm x 10 mm x 0.5 mm and 10 mm x 10 mm x 1 mm copper plates, respectively. A test object of 100 mm x 100 mm and 0.2 mm thick copper was positioned at each region for contrast-to-noise ratio (CNR) measurements. The phantom was exposed to x-rays generated by different tube potentials that covered settings in clinical use: 110-120 kVp (HVL=4.26-4.66 mm Al) at a source image distance (SID) of 180 cm. An approach similar to the recommended method in digital mammography was applied to determine the CNR values of phantom images produced by a Kodak CR 850A system with post-processing turned off. Subjective contrast-detail studies were also carried out by using images of the Leeds TOR CDR test object acquired under similar exposure conditions as during the CNR measurements. For clinical kVp conditions relevant to chest radiography, the CNR was highest over the 90-100 kVp range. The CNR data correlated with the results of the contrast-detail observations. The values of clinical tube potentials at which the CNR is highest are regarded as optimal kVp settings. The simplicity of the phantom construction can allow easy implementation of a related quality control program.
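The CNR figure of merit used above, following the digital-mammography convention the authors cite, is the difference of mean pixel values between the test object and its background divided by the background standard deviation. A minimal sketch; the ROI values below are illustrative, not measured data.

```python
import numpy as np

def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio for a test-object region:
    CNR = (mean_signal - mean_background) / std_background."""
    s = np.asarray(roi_signal, float)
    b = np.asarray(roi_background, float)
    return (s.mean() - b.mean()) / b.std()
```

Evaluating this over images acquired at each tube potential and taking the kVp with the largest CNR reproduces the selection rule used in the study.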
Simulation of Electrical Discharge Initiated by a Nanometer-Sized Probe in Atmospheric Conditions
International Nuclear Information System (INIS)
Chen Ran; Chen Chilai; Liu Youjiang; Wang Huanqin; Kong Deyi; Ma Yuan; Cada Michael; Brugger Jürgen
2013-01-01
In this paper, a two-dimensional nanometer scale tip-plate discharge model has been employed to study nanoscale electrical discharge in atmospheric conditions. The field strength distributions in a nanometer scale tip-to-plate electrode arrangement were calculated using the finite element analysis (FEA) method, and the influences of applied voltage amplitude and frequency as well as gas gap distance on the variation of the effective discharge range (EDR) on the plate were also investigated and discussed. The simulation results show that a probe with a wide tip will cause a larger effective discharge range on the plate; the field strength in the gap is notably higher than that induced by a sharp-tipped probe; the effective discharge range will increase linearly with the rise of excitation voltage, and decrease nonlinearly with the rise of gap length. In addition, probe dimension, especially the width/height ratio, affects the effective discharge range in different manners. With the width/height ratio rising from 1:1 to 1:10, the effective discharge range will remain stable when the excitation voltage is around 50 V; it will increase when the excitation voltage is higher and decrease when the excitation voltage is lower. Furthermore, when the gap length is 5 nm and the excitation voltage is below 20 V, the diameter of the EDR in our simulation is about 150 nm, which is consistent with the experimental results reported by other research groups. Our work provides a preliminary understanding of nanometer scale discharges and establishes a predictive structure-behavior relationship.
A PIC-MCC code RFdinity1d for simulation of discharge initiation by ICRF antenna
Tripský, M.; Wauters, T.; Lyssoivan, A.; Bobkov, V.; Schneider, P. A.; Stepanov, I.; Douai, D.; Van Eester, D.; Noterdaeme, J.-M.; Van Schoor, M.; ASDEX Upgrade Team; EUROfusion MST1 Team
2017-12-01
Discharges produced and sustained by ion cyclotron range of frequency (ICRF) waves in the absence of plasma current will be used on ITER for (ion cyclotron) wall conditioning (ICWC, Te = 3-5 eV, ne ∼ 10^18 m^-3). In this paper, we present the 1D particle-in-cell Monte Carlo collision (PIC-MCC) code RFdinity1d for the study of the breakdown phase of ICRF discharges, and its dependency on the RF discharge parameters: (i) antenna input power Pi, (ii) RF frequency f, (iii) shape of the electric field and (iv) the neutral gas pressure pH2. The code traces the motion of both electrons and ions in a narrow bundle of magnetic field lines close to the antenna straps. The charged particles are accelerated in the direction parallel to the magnetic field BT by two electric fields: (i) the vacuum RF field of the ICRF antenna Ez^RF and (ii) the electrostatic field Ez^P determined by the solution of Poisson's equation. The electron density in the simulations grows exponentially, ne ∝ exp(ν_ion t). The ionization rate varies with increasing electron density as different mechanisms become important. The charged particles are affected solely by the antenna RF field Ez^RF at low electron density (ne < 10^11 m^-3, where |Ez^RF| ≫ |Ez^P|). At higher densities, when the electrostatic field Ez^P is comparable to the antenna RF field Ez^RF, the ionization frequency reaches its maximum. Plasma oscillations propagating toroidally away from the antenna are observed. The simulated energy distributions of ions and electrons at ne ∼ 10^15 m^-3 correspond to a power-law Kappa energy distribution. This energy distribution was also observed in NPA measurements at ASDEX Upgrade in ICWC experiments.
FE-simulation of the initial stages of rafting in nickel-base superalloys
International Nuclear Information System (INIS)
Bioermann, H.; Feng Hua; Mughrabi, H.
2000-01-01
In the present work, the model of Socrate and Parks, which takes into account the plastic deformation and applies the finite element (FE) method using an energy-perturbation approach, is extended by introducing a further contribution to the driving force for rafting, cf. This additional driving force is based on the local variation of the hydrostatic stresses arising from the anisotropic distribution of dislocations after deformation, in combination with the different lattice parameters of the two phases γ and γ'. It is an alternative formulation of the driving force introduced recently. With the present model, the initial stages of rafting and the build-up of internal stresses and strains are determined. (orig.)
MRI reconstruction with joint global regularization and transform learning.
Tanc, A Korhan; Eksioglu, Ender M
2016-10-01
Sparsity based regularization has been a popular approach to remedy the measurement scarcity in image reconstruction. Recently, sparsifying transforms learned from image patches have been utilized as an effective regularizer for Magnetic Resonance Imaging (MRI) reconstruction. Here, we infuse additional global regularization terms into the patch-based transform learning. We develop an algorithm to solve the resulting novel cost function, which includes both patchwise and global regularization terms. Extensive simulation results indicate that the introduced mixed approach improves MRI reconstruction performance, when compared to algorithms which use either the patchwise transform learning or the global regularization terms alone.
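A mixed cost of this shape, data fidelity plus a sparsity term plus a global regularizer, can be illustrated in one dimension with proximal gradient descent. This toy stand-in makes two simplifying assumptions: the learned sparsifying transform is replaced by the identity (an l1 penalty directly on the image) and the global term is a quadratic finite-difference smoother. It is not the authors' algorithm.

```python
import numpy as np

def soft(v, t):
    # Soft-thresholding: proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def recon(y, mask, n, alpha=0.01, beta=0.01, iters=300, step=0.4):
    """Toy 1D analogue of reconstruction with joint regularization:
        min_x ||mask * Fx - y||^2 + beta * ||Dx||^2 + alpha * ||x||_1
    F: unitary DFT undersampled by `mask`; D: first differences standing in
    for the global regularizer; l1 on x standing in for the learned patch
    transform. Solved with proximal gradient (ISTA)."""
    x = np.zeros(n)
    for _ in range(iters):
        Fx = np.fft.fft(x, norm="ortho")
        grad_data = np.fft.ifft(mask * (mask * Fx - y), norm="ortho").real
        dx = np.diff(x, prepend=x[:1])          # (Dx)_i = x_i - x_{i-1}
        grad_smooth = np.empty_like(x)          # D^T D x
        grad_smooth[:-1] = dx[:-1] - dx[1:]
        grad_smooth[-1] = dx[-1]
        x = soft(x - step * (2.0 * grad_data + 2.0 * beta * grad_smooth),
                 step * alpha)
    return x
```

The quadratic terms are handled by the gradient step and the l1 term by the shrinkage, which is the same splitting a patch-transform version would use with the threshold applied in the transform domain instead.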
International Nuclear Information System (INIS)
Zhang, Z F.; Freedman, Vicky L.; White, Mark D.
2003-01-01
In support of CH2M HILL Hanford Group, Inc.'s (CHG) preparation of a Field Investigative Report (FIR) for the closure of the Hanford Site Single-Shell Tank (SST) Waste Management Area (WMA) tank farms, a set of numerical simulations of flow and solute transport was executed to predict the performance of surface barriers for reducing long-term risks from potential groundwater contamination at the C Farm WMA. This report documents the simulation of 14 cases (and two verification cases) involving two-dimensional cross sections through the C Farm WMA tanks C-103 - C-112. Utilizing a unit release scenario at Tank C-112, four different types of leaks were simulated. These simulations assessed the impact of leakage during retrieval, past leaks, and tank residual wastes and tank ancillary equipment following closure activities. Two transported solutes were considered: uranium-238 (U-238) and technetium-99 (Tc-99). To evaluate the impact of sorption to the subsurface materials, six different retardation coefficients were simulated for U-238. Overall, simulation results for the C Farm WMA showed that only a small fraction of the U-238 with retardation factors greater than 0.6 migrated from the vadose zone in all of the cases. For the conservative solute, Tc-99, results showed that the simulations investigating leakages during retrieval demonstrated the highest WMA peak concentrations and the earliest arrival times due to the high infiltration rate before the use of surface barriers and the addition of water into the system. Simulations investigating past leaks showed similar peaks and arrival times as the retrieval leak cases. Several different release rates were used to investigate contaminant transport from residual tank wastes. All showed similar peak concentrations and arrival times, except for the lowest initial release rate, which was 1,000 times slower than the highest release rate. Past leaks were also investigated with different release rate models, including
International Nuclear Information System (INIS)
Cary, J.R.; Spentzouris, P.; Amundson, J.; McInnes, L.; Borland, M.; Mustapha, B.; Ostroumov, P.; Wang, Y.; Fischer, W.; Fedotov, A.; Ben-Zvi, I.; Ryne, R.; Esarey, E.; Geddes, C.; Qiang, J.; Ng, E.; Li, S.; Ng, C.; Lee, R.; Merminga, L.; Wang, H.; Bruhwiler, D.L.; Dechow, D.; Mullowney, P.; Messmer, P.; Nieter, C.; Ovtchinnikov, S.; Paul, K.; Stoltz, P.; Wade-Stein, D.; Mori, W.B.; Decyk, V.; Huang, C.K.; Lu, W.; Tzoufras, M.; Tsung, F.; Zhou, M.; Werner, G.R.; Antonsen, T.; Katsouleas, T.; Morris, B.
2007-01-01
Accelerators are the largest and most costly scientific instruments of the Department of Energy, with uses across a broad range of science, including colliders for particle physics and nuclear science and light sources and neutron sources for materials studies. COMPASS, the Community Petascale Project for Accelerator Science and Simulation, is a broad, four-office (HEP, NP, BES, ASCR) effort to develop computational tools for the prediction and performance enhancement of accelerators. The tools being developed can be used to predict the dynamics of beams in the presence of optical elements and space charge forces, the calculation of electromagnetic modes and wake fields of cavities, the cooling induced by comoving beams, and the acceleration of beams by intense fields in plasmas generated by beams or lasers. In SciDAC-1, the computational tools had multiple successes in predicting the dynamics of beams and beam generation. In SciDAC-2 these tools will be petascale enabled to allow the inclusion of an unprecedented level of physics for detailed prediction
Directory of Open Access Journals (Sweden)
Joseph L Baker
2013-04-01
Full Text Available Type IV pili are long, protein filaments built from a repeating subunit that protrudes from the surface of a wide variety of infectious bacteria. They are implicated in a vast array of functions, ranging from bacterial motility to microcolony formation to infection. One of the most well-studied type IV filaments is the gonococcal type IV pilus (GC-T4P) from Neisseria gonorrhoeae, the causative agent of gonorrhea. Cryo-electron microscopy has been used to construct a model of this filament, offering insights into the structure of type IV pili. In addition, experiments have demonstrated that GC-T4P can withstand very large tension forces, and transition to a force-induced conformation. However, the details of force-generation, and the atomic-level characteristics of the force-induced conformation, are unknown. Here, steered molecular dynamics (SMD) simulation was used to exert a force in silico on an 18 subunit segment of GC-T4P to address questions regarding the nature of the interactions that lead to the extraordinary strength of bacterial pili. SMD simulations revealed that the buried pilin α1 domains maintain hydrophobic contacts with one another within the core of the filament, leading to GC-T4P's structural stability. At the filament surface, gaps between pilin globular head domains in both the native and pulled states provide water accessible routes between the external environment and the interior of the filament, allowing water to access the pilin α1 domains as reported for VC-T4P in deuterium exchange experiments. Results were also compared to the experimentally observed force-induced conformation. In particular, an exposed amino acid sequence in the experimentally stretched filament was also found to become exposed during the SMD simulations, suggesting that initial stages of the force induced transition are well captured. Furthermore, a second sequence was shown to be initially hidden in the native filament and became exposed upon stretching.
Baker, Joseph L; Biais, Nicolas; Tama, Florence
2013-04-01
Type IV pili are long, protein filaments built from a repeating subunit that protrudes from the surface of a wide variety of infectious bacteria. They are implicated in a vast array of functions, ranging from bacterial motility to microcolony formation to infection. One of the most well-studied type IV filaments is the gonococcal type IV pilus (GC-T4P) from Neisseria gonorrhoeae, the causative agent of gonorrhea. Cryo-electron microscopy has been used to construct a model of this filament, offering insights into the structure of type IV pili. In addition, experiments have demonstrated that GC-T4P can withstand very large tension forces, and transition to a force-induced conformation. However, the details of force-generation, and the atomic-level characteristics of the force-induced conformation, are unknown. Here, steered molecular dynamics (SMD) simulation was used to exert a force in silico on an 18 subunit segment of GC-T4P to address questions regarding the nature of the interactions that lead to the extraordinary strength of bacterial pili. SMD simulations revealed that the buried pilin α1 domains maintain hydrophobic contacts with one another within the core of the filament, leading to GC-T4P's structural stability. At the filament surface, gaps between pilin globular head domains in both the native and pulled states provide water accessible routes between the external environment and the interior of the filament, allowing water to access the pilin α1 domains as reported for VC-T4P in deuterium exchange experiments. Results were also compared to the experimentally observed force-induced conformation. In particular, an exposed amino acid sequence in the experimentally stretched filament was also found to become exposed during the SMD simulations, suggesting that initial stages of the force induced transition are well captured. Furthermore, a second sequence was shown to be initially hidden in the native filament and became exposed upon stretching.
Effective field theory dimensional regularization
International Nuclear Information System (INIS)
Lehmann, Dirk; Prezeau, Gary
2002-01-01
A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs, and the generalization to higher loops is discussed.
2010-12-07
... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. ACTION: Regular meeting. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held...
International Nuclear Information System (INIS)
Tatekawa, Takayuki
2014-01-01
We study the initial conditions for cosmological N-body simulations for precision cosmology. The Zel'dovich approximation has long been used to set up the initial conditions of N-body simulations, but such initial conditions produce incorrect higher-order growth. The errors caused by setting up initial conditions with perturbation theory are called transients. In a previous paper, we investigated the impact of transients on the non-Gaussianity of the density field by performing cosmological N-body simulations with initial conditions based on first-, second-, and third-order Lagrangian perturbation theory. In this paper, we evaluate the effect of the transverse mode in third-order Lagrangian perturbation theory on several statistical quantities, such as the power spectrum and non-Gaussianity. We find that the effect of the transverse mode in third-order Lagrangian perturbation theory is quite small.
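The first-order Lagrangian (Zel'dovich) displacement that underlies such initial conditions can be sketched in a few lines: a Gaussian density field is generated in Fourier space and particles are displaced from a lattice by the scaled gradient of the potential. The grid size, box length, toy power spectrum, and growth factor below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 32, 100.0                       # grid cells per side, box size (illustrative)
k1 = 2*np.pi*np.fft.fftfreq(N, d=L/N)  # wavenumbers along one axis
kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = 1.0                      # avoid division by zero at k = 0

# Gaussian density contrast with a toy power spectrum P(k) ~ 1/k^2
delta_k = np.fft.fftn(rng.normal(size=(N, N, N)))
delta_k /= np.sqrt(k2)
delta_k[0, 0, 0] = 0.0                 # enforce zero mean

# Zel'dovich displacement field in Fourier space: Psi_k = i k delta_k / k^2
psi = [np.real(np.fft.ifftn(1j * ki * delta_k / k2)) for ki in (kx, ky, kz)]

# Displace particles from the regular lattice q: x = q + D(a) * Psi(q)
D = 0.02                               # linear growth factor at the start (assumed)
q = np.indices((N, N, N)) * (L / N)
x = [(qi + D * pi) % L for qi, pi in zip(q, psi)]
print(x[0].shape)
```

Second- and third-order Lagrangian perturbation theory add higher-order displacement terms to the same lattice picture, which is what suppresses the transients discussed above.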
WE-DE-202-01: Connecting Nanoscale Physics to Initial DNA Damage Through Track Structure Simulations
Energy Technology Data Exchange (ETDEWEB)
Schuemann, J. [Massachusetts General Hospital (United States)
2016-06-15
Radiation therapy for the treatment of cancer has been established as a highly precise and effective way to eradicate a localized region of diseased tissue. To achieve further significant gains in the therapeutic ratio, we need to move towards biologically optimized treatment planning. To achieve this goal, we need to understand how the radiation-type dependent patterns of induced energy depositions within the cell (physics) connect via molecular, cellular and tissue reactions to treatment outcome such as tumor control and undesirable effects on normal tissue. Several computational biology approaches have been developed connecting physics to biology. Monte Carlo simulations are the most accurate method to calculate physical dose distributions at the nanometer scale; however, simulations at the DNA scale are slow, and repair processes are generally not simulated. Alternative models that rely on the random formation of individual DNA lesions within one or two turns of the DNA have been shown to reproduce the clusters of DNA lesions, including single strand breaks (SSBs) and double strand breaks (DSBs), without the need for detailed track structure simulations. Efficient computational simulations of initial DNA damage induction facilitate computational modeling of DNA repair and other molecular and cellular processes. Mechanistic, multiscale models provide a useful conceptual framework to test biological hypotheses and help connect fundamental information about track structure and dosimetry at the sub-cellular level to dose-response effects on larger scales. In this symposium we will learn about the current state of the art of computational approaches estimating radiation damage at the cellular and sub-cellular scale. How can understanding the physics interactions at the DNA level be used to predict biological outcome? We will discuss if and how such calculations are relevant to advance our understanding of radiation damage and its repair, or, if the underlying biological
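The alternative lesion-placement models mentioned above can be illustrated with a minimal Monte Carlo sketch: single-strand breaks are scattered randomly along a DNA segment, and a double-strand break is scored when breaks on opposite strands fall within roughly one helical turn. The segment length, lesion count, and the 10 bp proximity criterion are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bp = 100_000        # length of the DNA segment in base pairs (assumed)
n_ssb = 200           # number of single-strand breaks to place (assumed dose)
max_sep = 10          # opposite-strand breaks within 10 bp count as a DSB (assumed)

# Each lesion gets a random position and a random strand (0 or 1)
pos = rng.integers(0, n_bp, size=n_ssb)
strand = rng.integers(0, 2, size=n_ssb)

# Score DSBs: for each strand-0 break, look for the nearest strand-1 break
p0 = np.sort(pos[strand == 0])
p1 = np.sort(pos[strand == 1])
dsb = 0
for p in p0:
    i = np.searchsorted(p1, p)
    near = [abs(p1[j] - p) for j in (i - 1, i) if 0 <= j < len(p1)]
    if near and min(near) <= max_sep:
        dsb += 1
print("SSBs:", n_ssb, "DSBs:", dsb)
```

Repeating the sampling many times yields lesion-cluster statistics without any track-structure transport, which is the efficiency gain the abstract refers to.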
Liu, Zhen; Bambha, Ray P; Pinto, Joseph P; Zeng, Tao; Boylan, Jim; Huang, Maoyi; Lei, Huimin; Zhao, Chun; Liu, Shishi; Mao, Jiafu; Schwalm, Christopher R; Shi, Xiaoying; Wei, Yaxing; Michelsen, Hope A
2014-04-01
Motivated by the question of whether and how a state-of-the-art regional chemical transport model (CTM) can facilitate characterization of CO2 spatiotemporal variability and verify CO2 fossil-fuel emissions, we for the first time applied the Community Multiscale Air Quality (CMAQ) model to simulate CO2. This paper presents methods, input data, and initial results for CO2 simulation using CMAQ over the contiguous United States in October 2007. Modeling experiments have been performed to understand the roles of fossil-fuel emissions, biosphere-atmosphere exchange, and meteorology in regulating the spatial distribution of CO2 near the surface over the contiguous United States. Three sets of net ecosystem exchange (NEE) fluxes were used as input to assess the impact of uncertainty in NEE on CO2 concentrations simulated by CMAQ. Observational data from six tall tower sites across the country were used to evaluate model performance. In particular, at the Boulder Atmospheric Observatory (BAO), a tall tower site that receives urban emissions from Denver, CO, the CMAQ model using hourly varying, high-resolution CO2 fossil-fuel emissions from the Vulcan inventory and CarbonTracker-optimized NEE reproduced the observed diurnal profile of CO2 reasonably well, but with a low bias in the early morning. The spatial distribution of CO2 was found to correlate with NOx, SO2, and CO because of their similar fossil-fuel emission sources and common transport processes. These initial results from CMAQ demonstrate the potential of using a regional CTM to help interpret CO2 observations and understand CO2 variability in space and time. The ability to simulate a full suite of air pollutants in CMAQ will also facilitate investigations of their use as tracers for CO2 source attribution. This work serves as a proof of concept and the foundation for more comprehensive examinations of CO2 spatiotemporal variability and various uncertainties in the future. Atmospheric CO2 has long been modeled
Selection of regularization parameter for l1-regularized damage detection
Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing
2018-06-01
The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
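A minimal sketch of the selection strategies described above for an l1-regularized problem: a plain ISTA solver is swept over candidate regularization parameters while the residual norm, solution norm, and residual variance (for the discrepancy-principle check against the noise variance) are recorded. The matrix, sparsity pattern, noise level, and sweep grid are illustrative assumptions, not the paper's beam or frame examples.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 50, 100
A = rng.normal(size=(m, n))                 # sensitivity-type matrix (assumed)
x_true = np.zeros(n)
x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]      # sparse "damage" vector (assumed)
sigma = 0.05                                # measurement-noise std (assumed)
y = A @ x_true + sigma * rng.normal(size=m)

def ista(A, y, lam, iters=500):
    """Solve min 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - y)) / L
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    return x

# Sweep the regularization parameter; the discrepancy principle keeps the
# residual variance close to the measurement-noise variance sigma^2, while the
# L-curve-style criterion keeps both norms below small.
for lam in [1e-3, 1e-2, 1e-1, 1.0]:
    x = ista(A, y, lam)
    res_var = np.mean((A @ x - y) ** 2)
    print(f"lam={lam:g}  ||x||_1={np.abs(x).sum():.2f}  res_var={res_var:.4f}")
```

Values of lam for which res_var stays near sigma**2 while the solution norm remains small delimit the acceptable range the study describes, rather than a single "best" parameter.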
International Nuclear Information System (INIS)
Hoffman, F.O.; Thiessen, K.M.; Frank, M.L.; Blaylock, B.G.
1992-01-01
Simulated rain containing both soluble radionuclides and insoluble particles labeled with radionuclides was applied to pasture-type vegetation under conditions similar to those found during convective storms. The fraction of material in rain intercepted by vegetation and initially retained was determined for three sizes of insoluble polystyrene microspheres, soluble ⁷Be²⁺, and soluble ¹³¹I as periodate or iodide, over a range of rainfall amounts of both moderate- and high-intensity precipitation. Values for the interception and initial retention by vegetation (interception fractions) for soluble forms of ¹³¹I in simulated rain are much less than those for insoluble particles and the reactive cation ⁷Be²⁺. The interception fraction for soluble ¹³¹I is an inverse function of rain amount. The mass interception factor (the interception fraction normalized for biomass) of ¹³¹I is almost solely dependent on the amount of rain. The ¹³¹I vegetation-to-rain concentration ratio is relatively constant at approximately 2.6 L kg⁻¹. For ⁷Be²⁺ and the insoluble particles, the interception fractions range from 0.1 to 0.6 with geometric means of approximately 0.3. For these materials there is a greater dependence on biomass than on rain amount; the geometric means of the mass interception factors for these substances range from 0.99 to 2.4 m² kg⁻¹. These results indicate that anionic ¹³¹I is essentially removed with the water once the vegetation surface becomes saturated and that the ⁷Be cation and the insoluble particles are adsorbed to or settle out on the plant surface. (Author)
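The reported factors translate into deposition estimates by simple arithmetic, as sketched below. Only the 2.6 L kg⁻¹ concentration ratio and the 0.99-2.4 m² kg⁻¹ mass-interception range come from the study; the rain concentration and areal deposit are hypothetical numbers for illustration.

```python
# Soluble 131-I: vegetation concentration from the vegetation-to-rain ratio
rain_conc = 50.0        # 131-I concentration in rain, Bq/L (hypothetical)
conc_ratio = 2.6        # vegetation-to-rain concentration ratio, L/kg (from study)
veg_conc = conc_ratio * rain_conc          # Bq per kg of vegetation -> 130.0

# Particle-bound activity: areal deposit times the mass interception factor
areal_deposit = 1000.0  # activity delivered by rain, Bq/m^2 (hypothetical)
mass_factor = 1.5       # m^2/kg, within the reported 0.99-2.4 range
veg_conc_particles = areal_deposit * mass_factor   # Bq/kg -> 1500.0

print(veg_conc, veg_conc_particles)
```

The contrast between the two routes reflects the study's finding that soluble iodine follows the water while particles and ⁷Be²⁺ stay on the plant surface.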
Energy Technology Data Exchange (ETDEWEB)
Mente, Tobias
2015-07-01
Duplex stainless steels have been used for a long time in the offshore industry, since they have higher strength than conventional austenitic stainless steels and exhibit better ductility as well as improved corrosion resistance in harsh environments compared to ferritic stainless steels. However, despite these good properties, the literature reports some failure cases of duplex stainless steels in which hydrogen plays a crucial role in the damage. Numerical simulations can contribute significantly to clarifying the damage mechanisms, because they help to interpret experimental results and to transfer results from laboratory tests to component tests and vice versa. So far, most numerical simulations of hydrogen-assisted material damage in duplex stainless steels have been performed at the macroscopic scale. However, duplex stainless steels consist of approximately equal portions of austenite and δ-ferrite, and the two phases differ in their mechanical properties as well as in their hydrogen transport properties. Thus, their sensitivity to hydrogen-assisted damage differs, too. Therefore, the objective of this research was to develop a numerical model of a duplex stainless steel microstructure enabling simulation of hydrogen transport, mechanical stresses and strains, as well as crack initiation and propagation in both phases. Additionally, modern X-ray diffraction experiments were used to evaluate the influence of hydrogen on the phase-specific mechanical properties. For the numerical simulation of the hydrogen transport it was shown that hydrogen diffusion strongly depends on the alignment of austenite and δ-ferrite in the duplex stainless steel microstructure. It was also proven that the hydrogen transport is mainly realized by the ferritic phase and that hydrogen is trapped in the austenitic phase. The numerical analysis of phase-specific mechanical stresses and strains revealed that if the duplex stainless steel is
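The phase dependence of hydrogen transport described above can be sketched with a one-dimensional finite-difference diffusion model through alternating ferrite/austenite bands. The band geometry, diffusivities, and boundary conditions are assumed for illustration, and trapping in austenite is represented only by its much lower diffusivity, not by a separate trap term.

```python
import numpy as np

# 1D slice through an alternating ferrite/austenite band structure
n, dx, dt = 200, 1e-6, 1e-4              # nodes, spacing (m), time step (s)
D_ferrite, D_austenite = 1e-11, 1e-15    # assumed diffusivities (m^2/s)
phase = (np.arange(n) // 25) % 2         # 0 = ferrite band, 1 = austenite band
D = np.where(phase == 0, D_ferrite, D_austenite)

c = np.zeros(n)
c[0] = 1.0                               # hydrogen-charged surface (normalized)
for _ in range(20000):
    # flux across each interface, with the interface diffusivity averaged
    flux = -0.5 * (D[1:] + D[:-1]) * (c[1:] - c[:-1]) / dx
    c[1:-1] += dt / dx * (flux[:-1] - flux[1:])
    c[0], c[-1] = 1.0, c[-2]             # fixed inlet, zero-flux outlet
print(c[:5])
```

Because the austenite bands are orders of magnitude slower, the profile stalls at the first ferrite/austenite interface, mirroring the finding that transport runs through the ferrite while austenite holds hydrogen back.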
Regularized Biot-Savart Laws for Modeling Magnetic Flux Ropes
Titov, Viacheslav; Downs, Cooper; Mikic, Zoran; Torok, Tibor; Linker, Jon A.
2017-08-01
Many existing models assume that magnetic flux ropes play a key role in solar flares and coronal mass ejections (CMEs). It is therefore important to develop efficient methods for constructing flux-rope configurations constrained by observed magnetic data and the initial morphology of CMEs. As our new step in this direction, we have derived and implemented a compact analytical form that represents the magnetic field of a thin flux rope with an axis of arbitrary shape and a circular cross-section. This form implies that the flux rope carries axial current I and axial flux F, so that the respective magnetic field is a curl of the sum of toroidal and poloidal vector potentials proportional to I and F, respectively. The vector potentials are expressed in terms of Biot-Savart laws whose kernels are regularized at the rope axis. We regularized them in such a way that for a straight-line axis the form provides a cylindrical force-free flux rope with a parabolic profile of the axial current density. So far, we set the shape of the rope axis by tracking the polarity inversion lines of observed magnetograms and estimating its height and other parameters of the rope from a calculated potential field above these lines. In spite of this heuristic approach, we were able to successfully construct pre-eruption configurations for the 2009 February 13 and 2011 October 1 CME events. These applications demonstrate that our regularized Biot-Savart laws are indeed a very flexible and efficient method for energizing initial configurations in MHD simulations of CMEs. We discuss possible ways of optimizing the axis paths and other extensions of the method in order to make it more useful and robust. Research supported by NSF, NASA's HSR and LWS Programs, and AFOSR.
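The basic construction can be sketched with a discretized axis path and a smoothed Biot-Savart kernel. Note that this sketch uses a generic regularization (adding a cross-section scale a² in the denominator) rather than the paper's specific kernel that yields a force-free rope with a parabolic current profile; the circular axis and unit constants are illustrative.

```python
import numpy as np

mu0_I_over_4pi = 1.0   # physical constants folded into one factor (illustrative)
a = 0.1                # regularization scale ~ rope cross-section radius (assumed)

# Discretize a circular flux-rope axis of unit radius
t = np.linspace(0, 2*np.pi, 400, endpoint=False)
axis = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
dl = np.roll(axis, -1, axis=0) - axis            # segment vectors along the axis

def B_field(r):
    """Regularized Biot-Savart sum over the axis segments (smoothed kernel)."""
    mid = axis + 0.5 * dl                        # segment midpoints
    sep = r - mid
    dist2 = np.sum(sep**2, axis=1) + a**2        # regularized denominator
    integrand = np.cross(dl, sep) / dist2[:, None]**1.5
    return mu0_I_over_4pi * integrand.sum(axis=0)

print(B_field(np.array([0.0, 0.0, 0.0])))        # field at the loop center
```

The kernel stays finite on the axis itself, which is what allows the analytical form to represent the rope's interior field rather than diverging there as the classical Biot-Savart law would.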
Directory of Open Access Journals (Sweden)
Cappuccio Antonio
2009-03-01
Full Text Available Background: There is experimental evidence from animal models favoring the notion that the disruption of interactions between stroma and epithelium plays an important role in the initiation of carcinogenesis. These disrupted interactions are hypothesized to be mediated by molecules, termed morphostats, which diffuse through the tissue to determine cell phenotype and maintain tissue architecture. Methods: We developed a computer simulation based on simple properties of cell renewal and morphostats. Results: Under the computer simulation, the disruption of the morphostat gradient in the stroma generated epithelial precursors of cancer without any mutation in the epithelium. Conclusion: The model is consistent with the possibility that the accumulation of genetic and epigenetic changes found in tumors could arise after the formation of a founder population of aberrant cells, defined as cells that are created by low or insufficient morphostat levels and that no longer respond to morphostat concentrations. Because the model is biologically plausible, we hope that these results will stimulate further experiments.
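A one-dimensional diffuse-and-decay sketch of such a morphostat gradient: the stroma sets the source level, cells adopt an aberrant phenotype when the local level falls below a threshold, and stromal disruption is modeled as a lowered source. All rates and thresholds here are assumptions for illustration; none come from the paper.

```python
import numpy as np

n = 50                       # cell layers from stroma (x = 0) to surface
D, k = 1.0, 0.005            # diffusion and decay rates (arbitrary units, assumed)
dx, dt = 1.0, 0.2

def steady_gradient(source):
    """Diffuse-and-decay morphostat from a stromal source, run toward steady state."""
    m = np.zeros(n)
    for _ in range(5000):
        lap = np.zeros(n)
        lap[1:-1] = (m[2:] - 2*m[1:-1] + m[:-2]) / dx**2
        m += dt * (D * lap - k * m)
        m[0] = source            # stromal boundary sets the morphostat level
        m[-1] = m[-2]            # zero-flux condition at the tissue surface
    return m

normal = steady_gradient(1.0)
disrupted = steady_gradient(0.1)     # stromal disruption lowers the source

threshold = 0.05                     # below this level, cells become aberrant
print("aberrant layers (normal):   ", int(np.sum(normal < threshold)))
print("aberrant layers (disrupted):", int(np.sum(disrupted < threshold)))
```

The disrupted gradient leaves a band of sub-threshold layers where a founder population of aberrant cells can form, without any mutation being introduced into the epithelium.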
da Silva, C. L.; Merrill, R. A.; Pasko, V. P.
2015-12-01
A significant portion of the in-cloud lightning development is observed as a series of initial breakdown pulses (IBPs) that are characterized by an abrupt change in the electric field at a remote sensor. Recent experimental and theoretical studies have attributed this process to the stepwise elongation of an initial lightning leader inside the thunderstorm [da Silva and Pasko, JGR, 120, 4989-5009, 2015, and references therein]. Attempts to visually observe these events are hampered by the fact that clouds are opaque to optical radiation. For this reason, throughout the last decade, a number of researchers have used so-called transmission line models (also commonly referred to as engineering models), widely employed for return stroke simulations, to simulate the waveshapes of IBPs and also of narrow bipolar events. The transmission line (TL) model approach is to prescribe the source current dynamics so as to match the measured E-field change waveform, with the purpose of retrieving key information about the source, such as its height, peak current, size, and speed of charge motion. Although the TL matching method is not necessarily physics-driven, the estimated source characteristics can give insights into the dominant length- and time-scales, as well as the energetics of the source. This contributes to a better understanding of the environment where the onset and early stages of lightning development take place. In the present work, we use numerical modeling to constrain the number of source parameters that can be confidently inferred from observed far-field IBP waveforms. We compare different modified TL models (i.e., with different attenuation behaviors) to show that they tend to produce similar waveforms when the channel is short. We also demonstrate that it is impossible to simultaneously retrieve the speed of source current propagation and the channel length from an observed IBP waveform, in contrast to what has been
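The transmission-line idea can be sketched for one modified variant, MTLE (exponential current attenuation with height): a prescribed channel-base current propagates up the channel at speed v while decaying over a length λ, and only the far-field (radiation) term of the E-field is summed over the channel segments. The channel parameters, current waveform, and the radiation-only approximation are all illustrative assumptions; the common propagation delay to the sensor is dropped as a pure time shift.

```python
import numpy as np

c = 3.0e8              # speed of light (m/s)
eps0 = 8.854e-12       # vacuum permittivity (F/m)
v = 1.5e8              # current-wave speed along the channel (assumed)
lam = 500.0            # MTLE current attenuation length (m, assumed)
H, dz = 600.0, 2.0     # channel length and segment size (m, assumed)
Dist = 30e3            # horizontal distance to the sensor (m), far field

def i0(t):
    """Channel-base current: simple double-exponential pulse (assumed shape)."""
    return np.where(t > 0, 5e3 * (np.exp(-t / 20e-6) - np.exp(-t / 2e-6)), 0.0)

t = np.arange(0, 100e-6, 1e-7)
z = np.arange(0, H, dz) + dz / 2
dt = t[1] - t[0]
E = np.zeros_like(t)
for zk in z:
    tr = t - zk / v                          # retardation from propagation up the channel
    ik = i0(tr) * np.exp(-zk / lam)          # MTLE attenuation with height
    didt = np.gradient(ik, dt)
    E += didt * dz / (2 * np.pi * eps0 * c**2 * Dist)   # radiation-field term
print(f"peak |E| = {np.abs(E).max():.2f} V/m")
```

Sweeping v, λ, and H over plausible ranges and comparing the resulting waveforms is the matching exercise whose ambiguities (for short channels) the study quantifies.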
Directory of Open Access Journals (Sweden)
O. A. Tereshchenko
2017-06-01
Full Text Available Purpose. The article develops a methodological basis for simulating the processes of railcar accumulation when solving operational planning problems under uncertainty of the initial information, in order to assess the sustainability of an adopted planning scenario and to calculate the associated technological risks. Methodology. The problem is solved using general scientific approaches, the apparatus of probability theory, and the theory of fuzzy sets. To this end, the factors influencing the entropy of operational plans are systematized. It is established that, when planning the operational work of railway stations, sections and nodes, the most significant factors causing uncertainty in the initial information are: (a) conditions external to the railway yard in question, expressed as uncertainty in the arrival times of cars; and (b) external, hard-to-identify goals of other participants in the logistics chain (primarily customers), expressed as uncertainty in the completion time of operations with freight cars. It is suggested that these factors be taken into account in automated planning through statistical analysis, that is, by establishing and studying the residual times (prediction errors). As a result, analytical dependencies are proposed for rationally representing the probability density functions of the residual-time distribution in the form of point, piecewise-defined and continuous analytic models. Models of car accumulation are then presented, whose application depends on the identified states of the car flow arriving at the accumulation system. The last of these models is a general case of models of accumulation processes with an arbitrary level of reliability of the initial information for any structure of the incoming flow of cars. In conclusion, a technique for estimating the results of
Adaptive Regularization of Neural Classifiers
DEFF Research Database (Denmark)
Andersen, Lars Nonboe; Larsen, Jan; Hansen, Lars Kai
1997-01-01
We present a regularization scheme which iteratively adapts the regularization parameters by minimizing the validation error. It is suggested to use the adaptive regularization scheme in conjunction with optimal brain damage pruning to optimize the architecture and to avoid overfitting. Furthermore, we propose an improved neural classification architecture eliminating an inherent redundancy in the widely used SoftMax classification network. Numerical results demonstrate the viability of the method.
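A minimal sketch of the idea of adapting a regularization parameter by minimizing validation error, using ridge regression as a stand-in for a weight-decay-regularized network and a multiplicative log-space search in place of the paper's gradient-based scheme; all problem sizes and rates are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 200, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
w_true[10:] = 0.0                          # some irrelevant inputs (assumed)
y = X @ w_true + 0.5 * rng.normal(size=n)
Xtr, ytr, Xval, yval = X[:100], y[:100], X[100:], y[100:]

def fit(lam):
    """Ridge fit; stands in for training a network with weight decay lam."""
    return np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(d), Xtr.T @ ytr)

def val_err(lam):
    return np.mean((Xval @ fit(lam) - yval) ** 2)

# Iteratively adapt the regularization parameter to reduce validation error:
# multiplicative updates in log-space, a crude stand-in for gradient steps.
lam, step = 1.0, 2.0
for _ in range(30):
    candidates = {lam / step, lam, lam * step}
    lam = min(candidates, key=val_err)     # keep whichever lowers validation error
    step = max(step * 0.9, 1.05)           # shrink the search step over time
print(f"adapted lambda = {lam:.3g}, validation MSE = {val_err(lam):.3f}")
```

In the paper's setting the same validation signal would also gate optimal-brain-damage pruning decisions, so architecture and regularization are tuned against the same held-out error.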
International Nuclear Information System (INIS)
Zhang, Xuesong; Izaurralde, R. César; Arnold, Jeffrey G.; Williams, Jimmy R.; Srinivasan, Raghavan
2013-01-01
Climate change is one of the most compelling modern issues and has important implications for almost every aspect of natural and human systems. The Soil and Water Assessment Tool (SWAT) model has been applied worldwide to support sustainable land and water management in a changing climate. However, the inadequacies of the existing carbon algorithm in SWAT limit its application in assessing impacts of human activities on CO2 emission, one important source of greenhouse gases (GHGs) that traps heat in the earth system and results in global warming. In this research, we incorporate a revised version of the CENTURY carbon model into SWAT to describe dynamics of soil organic matter (SOM)-residue and simulate land–atmosphere carbon exchange. We test this new SWAT-C model with daily eddy covariance (EC) observations of net ecosystem exchange (NEE) and evapotranspiration (ET) and annual crop yield at six sites across the U.S. Midwest. Results show that SWAT-C simulates well multi-year average NEE and ET across the spatially distributed sites and captures the majority of temporal variation of these two variables at a daily time scale at each site. Our analyses also reveal that performance of SWAT-C is influenced by multiple factors, such as crop management practices (irrigated vs. rainfed), completeness and accuracy of input data, crop species, and initialization of state variables. Overall, the new SWAT-C demonstrates favorable performance for simulating land–atmosphere carbon exchange across agricultural sites with different soils, climate, and management practices. SWAT-C is expected to serve as a useful tool for including carbon flux into consideration in sustainable watershed management under a changing climate. We also note that extensive assessment of SWAT-C with field observations is required for further improving the model and understanding potential uncertainties of applying it across large regions with complex landscapes. - Highlights: • Expanding the SWAT
Class of regular bouncing cosmologies
Vasilić, Milovan
2017-06-01
In this paper, I construct a class of everywhere regular geometric sigma models that possess bouncing solutions. Precisely, I show that every bouncing metric can be made a solution of such a model. My previous attempt to do so by employing one scalar field failed due to the appearance of harmful singularities near the bounce. In this work, I use four scalar fields to construct a class of geometric sigma models which are free of singularities. The models within the class are parametrized by their background geometries. I prove that, whatever background is chosen, the dynamics of its small perturbations is classically stable on the whole time axis. Contrary to what one expects from the structure of the initial Lagrangian, the physics of background fluctuations is found to carry two tensor, two vector, and two scalar degrees of freedom. The graviton mass, which naturally appears in these models, is shown to be several orders of magnitude smaller than its experimental bound. I provide three simple examples to demonstrate how this is done in practice. In particular, I show that graviton mass can be made arbitrarily small.
2010-09-02
... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). DATE AND TIME: The meeting of the Board will be held at the offices of the Farm...
Online co-regularized algorithms
Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.
2012-01-01
We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks.
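A toy sketch of online co-regularization on a two-view linear regression problem: labeled examples drive each view's supervised update, while unlabeled examples penalize disagreement between the views' predictions. The dimensions, learning rates, and the 1-in-5 labeling ratio are assumptions for illustration, not the paper's algorithm or benchmarks.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 10
w_star = rng.normal(size=2 * d)          # ground-truth weights over both views

def views(x):
    return x[:d], x[d:]                  # split the features into two "views"

w1, w2 = np.zeros(d), np.zeros(d)
eta, mu = 0.05, 0.5                      # learning rate, co-regularization weight
for t in range(2000):
    x = rng.normal(size=2 * d)
    v1, v2 = views(x)
    if t % 5 == 0:                       # occasional labeled example
        y = w_star @ x
        w1 -= eta * (w1 @ v1 - y) * v1   # per-view supervised gradient steps
        w2 -= eta * (w2 @ v2 - y) * v2
    else:                                # unlabeled example: co-regularize by
        diff = w1 @ v1 - w2 @ v2         # pulling the two predictions together
        w1 -= eta * mu * diff * v1
        w2 += eta * mu * diff * v2

# Final prediction averages the two views
x = rng.normal(size=2 * d)
v1, v2 = views(x)
print(round(float(0.5 * (w1 @ v1 + w2 @ v2)), 3))
```

The unlabeled steps never see a label, yet they shape both predictors, which is the mechanism behind the improvement over purely supervised baselines.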
Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory
Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Kammoun, Abla; Al-Naffouri, Tareq Y.
2016-01-01
random matrix theory are applied to derive the near-optimum regularizer that minimizes the mean-squared error of the estimator. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods for various
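The tradeoff such a regularizer targets can be illustrated with an oracle sweep over ridge parameters on a linear model, locating the regularization value that minimizes the estimator's mean-squared error. Note the paper's RMT-based method estimates this minimizer from the data alone; the sketch below uses the known ground truth purely to expose the tradeoff, and all problem sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 80, 40
A = rng.normal(size=(m, n)) / np.sqrt(m)   # sensing matrix (assumed model)
x0 = rng.normal(size=n)                    # unknown signal
sigma = 0.5                                # noise level (assumed)
y = A @ x0 + sigma * rng.normal(size=m)

def ridge(lam):
    """Regularized least-squares estimate with parameter lam."""
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Oracle sweep: too little regularization amplifies noise, too much
# over-shrinks the signal; the MSE is minimized in between.
lams = np.logspace(-3, 1, 40)
mses = [np.mean((ridge(l) - x0) ** 2) for l in lams]
best = lams[int(np.argmin(mses))]
print(f"best lambda ~ {best:.3g}")
```

A data-driven rule that lands near this oracle minimum without seeing x0 is precisely what the random-matrix analysis supplies.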
International Nuclear Information System (INIS)
Depuydt, Tom; Poels, Kenneth; Verellen, Dirk; Engels, Benedikt; Collen, Christine; Haverbeke, Chloe; Gevaert, Thierry; Buls, Nico; Van Gompel, Gert; Reynders, Truus; Duchateau, Michael; Tournel, Koen; Boussaer, Marlies; Steenbeke, Femke; Vandenbroucke, Frederik; De Ridder, Mark
2013-01-01
Purpose: To obtain an initial assessment of the Vero Dynamic Tracking workflow under clinical circumstances and to quantify the performance of the tracking system, a simulation study was set up on 5 lung and liver patients. Methods and materials: The preparatory steps of a tumor tracking treatment, based on fiducial markers implanted in the tumor, were executed, allowing pursuit of the tumor with the gimbaled linac and monitoring X-ray acquisition, however without activating the 6 MV beam. Data were acquired on workflow time-efficiency, tracking accuracy and imaging exposure. Results: The average time between the patient entering the treatment room and the first treatment field was about 9 min. The time for building the correlation model was 3.2 min. Tracking errors of 0.55 and 0.95 mm (1σ) were observed in the PAN and TILT directions, with a 2D error range of 3.08 mm. A skin dose of 0.08 mGy/image was determined, with a source-to-skin distance of 900 mm and a kV exposure of 1 mAs. On average, a 1.8 mGy/min kV skin dose was observed for 1 Hz monitoring. Conclusion: The Vero tracking solution proved to be fully functional and showed performance comparable with other real-time tracking systems
Energy Technology Data Exchange (ETDEWEB)
Sun, Yu, E-mail: yu.sun@xjtu.edu.cn [State Key Laboratory for Manufacturing Systems Engineering, School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049 (China); Institute for Computational Mechanics and Its Applications, Northwestern Polytechnical University, Xi’an 710072 (China); Liu, Yilun [State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace Engineering, Xi’an Jiaotong University, Xi’an 710049 (China); Chen, Xuefeng; Zhai, Zhi [State Key Laboratory for Manufacturing Systems Engineering, School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049 (China); Xu, Fei [Institute for Computational Mechanics and Its Applications, Northwestern Polytechnical University, Xi’an 710072 (China); Liu, Yijun [Institute for Computational Mechanics and Its Applications, Northwestern Polytechnical University, Xi’an 710072 (China); Mechanical Engineering, University of Cincinnati, Cincinnati, OH 45221-0072 (United States)
2017-06-01
Highlights: • A competition mechanism between thermal actuation and compressive stress blocking was found for oxygen transport. • At low temperature, a compressive stress generated in the oxide layer blocked oxygen transport into the deeper region. • O atoms gained a larger probability of moving deeper inward as temperature increased. • The resulting film quality was well explained by the competition mechanism. - Abstract: The early-stage oxidation of the Si(100) surface has been investigated in this work by reactive force field molecular dynamics (ReaxFF MD) simulation, showing that oxygen transport is the dominant issue in the initial oxidation process. Due to the oxidation, a compressive stress was generated in the oxide layer, which blocked oxygen transport perpendicular to the Si(100) surface and further prevented oxidation in the deeper layers. In contrast, thermal actuation promoted oxygen transport into deeper layers as temperature increased. Therefore, a competition mechanism was found for oxygen transport during early-stage oxidation of the Si(100) surface. At room temperature, oxygen transport was governed by the blocking effect of compressive stress, so a better-quality oxide film with a more uniform interface and a more stoichiometric oxide structure was obtained. The mechanism presented in this work is also applicable to other self-limiting oxidation processes (e.g., metal oxidation) and is helpful for the design of high-performance electronic devices.
Decentralized formation of random regular graphs for robust multi-agent networks
Yazicioglu, A. Yasin; Egerstedt, Magnus; Shamma, Jeff S.
2014-01-01
systems. One family of robust graphs is the random regular graphs. In this paper, we present a locally applicable reconfiguration scheme to build random regular graphs through self-organization. For any connected initial graph, the proposed scheme
Ngada, Narcisse
2015-06-15
The complexity and cost of building and running high-power electrical systems make the use of simulations unavoidable. The simulations available today provide great understanding about how systems really operate. This paper helps the reader to gain an insight into simulation in the field of power converters for particle accelerators. Starting with the definition and basic principles of simulation, two simulation types, as well as their leading tools, are presented: analog and numerical simulations. Some practical applications of each simulation type are also considered. The final conclusion then summarizes the main important items to keep in mind before opting for a simulation tool or before performing a simulation.
Abuja, P M; Albertini, R; Esterbauer, H
1997-06-01
Kinetic simulation can help obtain deeper insight into the molecular mechanisms of complex processes, such as lipid peroxidation (LPO) in low-density lipoprotein (LDL). We have previously set up a single-compartment model of this process, initiated with radicals generated externally at a constant rate, to show the interplay of radical scavenging and chain propagation. Here we focus on the initiating events, substituting the constant rate of initiation (Ri) by redox cycling of Cu2+ and Cu+. Our simulation reveals that early events in copper-mediated LDL oxidation include (1) the reduction of Cu2+ by tocopherol (TocOH), which generates the tocopheroxyl radical (TocO.), (2) the fate of TocO., which either is recycled or recombines with the lipid peroxyl radical (LOO.), and (3) the reoxidation of Cu+ by lipid hydroperoxide, which results in alkoxyl radical (LO.) formation. TocO., LOO., and LO. can thus be regarded as primordial radicals, and the sum of their formation rates is the total rate of initiation, Ri. As information about these initiating events cannot be obtained experimentally, the whole model was validated by comparing LDL oxidation in the presence and absence of bathocuproine with the behavior predicted by simulation. Simulation predicts that Ri decreases by 2 orders of magnitude during the lag time. This has important consequences for the estimation of oxidation resistance in copper-mediated LDL oxidation: after consumption of tocopherol, even small amounts of antioxidants may prolong the lag phase for a considerable time.
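The copper redox cycle described above can be caricatured with a two-reaction mass-action model; the species, rate constants, and forward-Euler integration below are illustrative assumptions, not the authors' fitted kinetic scheme:

```python
import numpy as np

# Toy kinetic scheme (illustrative rate constants):
#   Cu2+ + TocOH -> Cu+  + TocO.   (rate k1, generates a primordial radical)
#   Cu+  + LOOH  -> Cu2+ + LO.     (rate k2, reoxidation of copper)
k1, k2 = 0.5, 0.05
cu2, cu1 = 1.0, 0.0        # copper pools (Cu2+ and Cu+)
toc, looh = 2.0, 0.5       # tocopherol and preformed lipid hydroperoxide
dt, steps = 0.01, 5000
ri_history = []

for _ in range(steps):     # forward-Euler integration of the mass-action ODEs
    r1 = k1 * cu2 * toc
    r2 = k2 * cu1 * looh
    ri_history.append(r1 + r2)          # total initiation rate Ri
    cu2 += dt * (r2 - r1)
    cu1 += dt * (r1 - r2)
    toc -= dt * r1
    looh -= dt * r2

ri0, ri_end = ri_history[0], ri_history[-1]   # Ri peaks early, then decays
```

Even in this caricature, Ri drops by well over an order of magnitude as tocopherol and hydroperoxide are consumed, mirroring the qualitative behavior the abstract reports.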
Parekh, Ankit
Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of the ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima and a well-developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal
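The contrast between convex ℓ1 shrinkage and a parameterized non-convex alternative can be seen in their proximal (threshold) operators; the firm-threshold form below is one common choice, an assumption on our part since the abstract does not name the specific regularizer:

```python
import numpy as np

def soft_threshold(y, lam):
    """Prox of the l1 norm: shrinks every entry toward zero by lam (biased)."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def firm_threshold(y, lam, a):
    """Prox of a firm (parameterized non-convex) penalty, a > 1.
    Small entries are zeroed, large entries pass through unshrunk,
    which removes the underestimation bias of the l1 prox."""
    y = np.asarray(y, dtype=float)
    mid = a / (a - 1.0) * soft_threshold(y, lam)   # transition region
    return np.where(np.abs(y) <= lam, 0.0,
                    np.where(np.abs(y) >= a * lam, y, mid))

vals = np.array([0.2, 1.0, 3.0, -0.8])
s = soft_threshold(vals, 0.5)       # large entries still shrunk by 0.5
f = firm_threshold(vals, 0.5, 2.0)  # entries beyond a*lam kept exactly
```

The parameter `a` plays the role of the thesis's "designated non-convexity": as `a → ∞` the firm threshold reduces to soft thresholding, and keeping the non-convexity bounded is what lets the overall objective stay convex.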
Continuum-regularized quantum gravity
International Nuclear Information System (INIS)
Chan Huesum; Halpern, M.B.
1987-01-01
The recent continuum regularization of d-dimensional Euclidean gravity is generalized to arbitrary power-law measure and studied in some detail as a representative example of coordinate-invariant regularization. The weak-coupling expansion of the theory illustrates a generic geometrization of regularized Schwinger-Dyson rules, generalizing previous rules in flat space and flat superspace. The rules are applied in a non-trivial explicit check of Einstein invariance at one loop: the cosmological counterterm is computed and its contribution is included in a verification that the graviton mass is zero. (orig.)
Least square regularized regression in sum space.
Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu
2013-04-01
This paper proposes a least square regularized regression algorithm in the sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency components of the target function with large and small scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For the sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters, we trade off the sample error and regularization error, and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
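A minimal sketch of the idea, using kernel ridge regression with a sum of two Gaussian kernels (the data, scales, and regularization parameter are illustrative, and training error is used only as a crude comparison):

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_kernel(X, Z, scale):
    d2 = (X[:, None] - Z[None, :]) ** 2
    return np.exp(-d2 / (2 * scale ** 2))

# Nonflat target: slow trend plus a sharp bump (illustrative data)
x = np.linspace(0, 1, 120)
y = np.sin(2 * np.pi * x) + 0.8 * np.exp(-((x - 0.5) / 0.02) ** 2)
y += 0.05 * rng.normal(size=x.size)

lam = 1e-3
K_large = gaussian_kernel(x, x, 0.3)    # large scale: low-frequency component
K_small = gaussian_kernel(x, x, 0.02)   # small scale: high-frequency component
K_sum = K_large + K_small               # kernel of the sum space

# Regularized least squares in the sum space: one linear system
alpha = np.linalg.solve(K_sum + lam * np.eye(x.size), y)
fit_sum = K_sum @ alpha

# Same procedure with the large-scale kernel alone, for comparison
alpha_l = np.linalg.solve(K_large + lam * np.eye(x.size), y)
fit_large = K_large @ alpha_l

mse_sum = np.mean((fit_sum - y) ** 2)
mse_large = np.mean((fit_large - y) ** 2)
```

The sum-space fit captures both the trend and the bump, whereas the single large-scale kernel cannot represent the bump at all, which is the motivation the abstract gives for mixing kernel scales.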
Baker, R. David; Wang, Yansen; Tao, Wei-Kuo; Wetzel, Peter; Belcher, Larry R.
2004-01-01
High-resolution mesoscale model simulations of the 6-7 May 2000 Missouri flash flood event were performed to test the impact of model initialization and land surface treatment on the timing, intensity, and location of extreme precipitation. In this flash flood event, a mesoscale convective system (MCS) produced over 340 mm of rain in roughly 9 hours in some locations. Two different types of model initialization were employed: 1) NCEP global reanalysis with 2.5-degree grid spacing and 12-hour temporal resolution, and 2) Eta reanalysis with 40-km grid spacing and 3-hour temporal resolution. In addition, two different land surface treatments were considered. A simple land scheme (SLAB) keeps soil moisture fixed at initial values throughout the simulation, while a more sophisticated land model (PLACE) allows for interactive feedback. Simulations with high-resolution Eta model initialization show considerable improvement in the intensity of precipitation due to the presence in the initialization of a residual mesoscale convective vortex (MCV) from a previous MCS. Simulations with the PLACE land model show improved location of heavy precipitation. Since soil moisture can vary over time in the PLACE model, surface energy fluxes exhibit strong spatial gradients. These surface energy flux gradients help produce a strong low-level jet (LLJ) in the correct location. The LLJ then interacts with the cold outflow boundary of the MCS to produce new convective cells. The simulation with both high-resolution model initialization and time-varying soil moisture best reproduces the intensity and location of observed rainfall.
New regular black hole solutions
International Nuclear Information System (INIS)
Lemos, Jose P. S.; Zanchin, Vilson T.
2011-01-01
In the present work we consider general relativity coupled to Maxwell's electromagnetism and charged matter. Under the assumption of spherical symmetry, there is a particular class of solutions that correspond to regular charged black holes whose interior region is de Sitter, whose exterior region is Reissner-Nordstroem, and with a charged thin layer in between the two. The main physical and geometrical properties of such charged regular black holes are analyzed.
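The geometry described can be summarized in standard form (these are the textbook de Sitter and Reissner-Nordström line elements, not expressions taken from the paper itself):

```latex
ds^2 = -f(r)\,dt^2 + \frac{dr^2}{f(r)} + r^2\,d\Omega^2, \qquad
f(r) =
\begin{cases}
1 - \dfrac{r^2}{R^2}, & r < r_s \quad \text{(de Sitter interior)},\\[1ex]
1 - \dfrac{2m}{r} + \dfrac{q^2}{r^2}, & r > r_s \quad \text{(Reissner--Nordström exterior)},
\end{cases}
```

with the two regions joined across the charged thin layer at \(r = r_s\); regularity at the center follows because the de Sitter interior has no curvature singularity at \(r = 0\).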
Regular variation on measure chains
Czech Academy of Sciences Publication Activity Database
Řehák, Pavel; Vitovec, J.
2010-01-01
Roč. 72, č. 1 (2010), s. 439-448 ISSN 0362-546X R&D Projects: GA AV ČR KJB100190701 Institutional research plan: CEZ:AV0Z10190503 Keywords : regularly varying function * regularly varying sequence * measure chain * time scale * embedding theorem * representation theorem * second order dynamic equation * asymptotic properties Subject RIV: BA - General Mathematics Impact factor: 1.279, year: 2010 http://www.sciencedirect.com/science/article/pii/S0362546X09008475
Manifold Regularized Correlation Object Tracking
Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling
2017-01-01
In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped from both target and nontarget regions.
Condition Number Regularized Covariance Estimation.
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2013-06-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
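A simplified sketch of the core idea, clipping sample eigenvalues so the estimate's condition number is bounded (the paper derives an optimal truncation level; here the floor is fixed naively for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

def cond_reg_cov(S, kappa_max):
    """Clip the sample-covariance eigenvalues so the condition number of
    the estimate is at most kappa_max. This is a simplified version of the
    eigenvalue truncation the paper derives (the paper also optimizes the
    truncation level via maximum likelihood)."""
    vals, vecs = np.linalg.eigh(S)
    floor = vals.max() / kappa_max          # naive floor for illustration
    vals_clipped = np.clip(vals, floor, None)
    return vecs @ np.diag(vals_clipped) @ vecs.T

# "large p, small n": 40 variables, 20 samples -> singular sample covariance
p, n = 40, 20
X = rng.normal(size=(n, p))
S = np.cov(X, rowvar=False)

Sigma_hat = cond_reg_cov(S, kappa_max=50.0)
cond = np.linalg.cond(Sigma_hat)   # bounded by kappa_max, unlike cond(S)
```

Because `n < p`, the raw sample covariance `S` is singular, while `Sigma_hat` is invertible with condition number at most 50, which is exactly the well-conditioning requirement the abstract emphasizes.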
Higher order total variation regularization for EIT reconstruction.
Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut
2018-01-08
Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on the electrical boundary condition. This is an ill-posed inverse problem; its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: Reconstructed conductivity changes located on selected vertical lines. For each of the reconstructed images as well as the ground truth image, conductivity changes located along the selected left and right vertical lines are plotted. In these plots, the notation GT in the legend stands for ground truth, TV stands for total variation method, and TGV stands for total generalized variation method. Reconstructed conductivity distributions from the GREIT algorithm are also demonstrated.
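For a flavor of TV regularization on piecewise-constant signals, here is a 1D iteratively-reweighted-least-squares sketch (a generic stand-in for the solvers used in EIT, not the paper's FEM-based TGV method; all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def irls_denoise(y, D, lam, iters=50, eps=1e-6):
    """Iteratively reweighted least squares for
    min_x ||x - y||^2 + lam * ||D x||_1,
    majorizing |t| by t^2 / (2|t_prev|). A simple, generic TV-type solver."""
    x = y.copy()
    n = y.size
    for _ in range(iters):
        w = 1.0 / (np.abs(D @ x) + eps)          # reweighting from current iterate
        A = np.eye(n) + 0.5 * lam * D.T @ (w[:, None] * D)
        x = np.linalg.solve(A, y)
    return x

n = 100
truth = np.where(np.arange(n) < n // 2, 0.0, 1.0)   # piecewise-constant signal
y = truth + 0.1 * rng.normal(size=n)

D1 = np.diff(np.eye(n), axis=0)      # first-difference operator (TV penalty)
x_tv = irls_denoise(y, D1, lam=0.5)

err_tv = np.mean((x_tv - truth) ** 2)
err_noisy = np.mean((y - truth) ** 2)
```

TV excels on piecewise-constant targets like this one; on smooth ramps the same penalty produces the staircase artifacts the abstract mentions, and replacing `D1` with a second-difference operator gives the higher-order flavor that TGV builds on.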
DEFF Research Database (Denmark)
Kolditz, O.; Bauer, S.; Bilke, L.
In this paper we describe the OpenGeoSys (OGS) project, which is a scientific open-source initiative for numerical simulation of thermo-hydro-mechanical/chemical processes in porous media. The basic concept is to provide a flexible numerical framework (using primarily the Finite Element Method (FEM...
Directory of Open Access Journals (Sweden)
Dustin Kai Yan Lau
2014-03-01
Background: Unlike alphabetic languages, Chinese uses a logographic script. However, the phonetic radical of many characters has the same pronunciation as the character as a whole. These are considered regular characters and can be read through a lexical non-semantic route (Weekes & Chen, 1999). Pseudocharacters are another way to study this non-semantic route. A pseudocharacter is the combination of existing semantic and phonetic radicals in their legal positions, resulting in a non-existing character (Ho, Chan, Chung, Lee, & Tsang, 2007). Pseudocharacters can be pronounced by direct derivation from the sound of the phonetic radical. Conversely, if the pronunciation of a character does not follow that of the phonetic radical, it is considered irregular and can only be correctly read through the lexical-semantic route. The aim of the current investigation was to examine reading aloud in normal adults. We hypothesized that the regularity effect, previously described for alphabetical scripts and acquired dyslexic patients of Chinese (Weekes & Chen, 1999; Wu, Liu, Sun, Chromik, & Zhang, 2014), would also be present in normal adult Chinese readers. Method: Participants. Thirty (50% female) native Hong Kong Cantonese speakers with a mean age of 19.6 years and a mean education of 12.9 years. Stimuli. Sixty regular, 60 irregular, and 60 pseudo-characters (with at least 75% name agreement) in Chinese were matched by initial phoneme, number of strokes and family size. Additionally, regular and irregular characters were matched by frequency (low) and consistency. Procedure. Each participant was asked to read aloud the stimuli presented on a laptop using the DMDX software. The order of stimuli presentation was randomized. Data analysis. ANOVAs were carried out by participants and items with RTs and errors as dependent variables and type of stimuli (regular-, irregular- and pseudo-character) as repeated measures (F1 or between subject
Geometric continuum regularization of quantum field theory
International Nuclear Information System (INIS)
Halpern, M.B.
1989-01-01
An overview of the continuum regularization program is given. The program is traced from its roots in stochastic quantization, with emphasis on the examples of regularized gauge theory, the regularized general nonlinear sigma model and regularized quantum gravity. In its coordinate-invariant form, the regularization is seen as entirely geometric: only the supermetric on field deformations is regularized, and the prescription provides universal nonperturbative invariant continuum regularization across all quantum field theory. 54 refs
Consistent Partial Least Squares Path Modeling via Regularization.
Jung, Sunho; Park, JaeHong
2018-01-01
Partial least squares (PLS) path modeling is a component-based structural equation modeling approach that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.
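The effect of a ridge penalty under multicollinearity, the same mechanism regularized PLSc relies on, can be seen in a toy linear regression (the data and penalty value are illustrative, not the paper's simulation design):

```python
import numpy as np

rng = np.random.default_rng(4)

# Two nearly collinear predictors, mimicking the multicollinearity PLSc faces
n = 100
z = rng.normal(size=n)
X = np.column_stack([z, z + 0.01 * rng.normal(size=n)])
y = X @ np.array([1.0, 1.0]) + 0.1 * rng.normal(size=n)

def fit(X, y, ridge=0.0):
    """Least squares with an optional ridge penalty on the coefficients."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + ridge * np.eye(p), X.T @ y)

b_ols = fit(X, y)               # unpenalized: coefficients can be wildly inflated
b_ridge = fit(X, y, ridge=1.0)  # ridge: stable, with combined slope near 2.0
```

The ridge penalty damps the nearly singular direction of `X.T @ X`, so the individual coefficients stop trading off against each other while their sum (the identifiable quantity) is preserved; this is the loss-of-accuracy problem the abstract says regularization addresses.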
Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Kammoun, Abla; Alnaffouri, Tareq Y.
2016-01-01
In this supplementary appendix we provide proofs and additional simulation results that complement the paper (constrained perturbation regularization approach for signal estimation using random matrix theory).
Directory of Open Access Journals (Sweden)
Chengxiang Zhuge
2014-01-01
Since the traditional four-step model is too simple to solve complex modern transportation problems, microsimulation is gradually being applied to transportation planning, and some research indicates that it is more compatible and realistic. In this paper, a framework of agent-based simulation of travel behavior is proposed, which is realized by MATSim, a simulation tool developed for large-scale agent-based simulation. MATSim is still under development and some of its models are still being calibrated, so a detailed introduction of the simulation structure and the preparation of input data will be presented. In practice, the preparation process differs from one simulation project to another because the data available for simulation varies. Thus, a simulation of travel behavior under a condition of limited available survey data will be studied based on MATSim; furthermore, a medium-sized city in China will be taken as an example to check whether agent-based simulation of travel behavior can be successfully applied in China.
Metric regularity and subdifferential calculus
International Nuclear Information System (INIS)
Ioffe, A D
2000-01-01
The theory of metric regularity is an extension of two classical results: the Lyusternik tangent space theorem and the Graves surjection theorem. Developments in non-smooth analysis in the 1980s and 1990s paved the way for a number of far-reaching extensions of these results. It was also well understood that the phenomena behind the results are of metric origin, not connected with any linear structure. At the same time it became clear that some basic hypotheses of the subdifferential calculus are closely connected with the metric regularity of certain set-valued maps. The survey is devoted to the metric theory of metric regularity and its connection with subdifferential calculus in Banach spaces
Manifold Regularized Correlation Object Tracking.
Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling
2018-05-01
In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped from both target and nontarget regions. Thus, the final classifier in our method is trained with positive, negative, and unlabeled base samples, which is a semisupervised learning framework. A block optimization strategy is further introduced to learn a manifold regularization-based correlation filter for efficient online tracking. Experiments on two public tracking data sets demonstrate the superior performance of our tracker compared with the state-of-the-art tracking approaches.
Dimensional regularization in configuration space
International Nuclear Information System (INIS)
Bollini, C.G.; Giambiagi, J.J.
1995-09-01
Dimensional regularization is introduced in configuration space by Fourier transforming in D dimensions the perturbative momentum space Green functions. For this transformation, the Bochner theorem is used; no extra parameters, such as those of Feynman or Bogoliubov-Shirkov, are needed for convolutions. The regularized causal functions in x-space have ν-dependent moderated singularities at the origin. They can be multiplied together and Fourier transformed (Bochner) without divergence problems. The usual ultraviolet divergences appear as poles of the resultant functions of ν. Several examples are discussed. (author). 9 refs
Regular algebra and finite machines
Conway, John Horton
2012-01-01
World-famous mathematician John H. Conway based this classic text on a 1966 course he taught at Cambridge University. Geared toward graduate students of mathematics, it will also prove a valuable guide to researchers and professional mathematicians. His topics cover Moore's theory of experiments, Kleene's theory of regular events and expressions, Kleene algebras, the differential calculus of events, factors and the factor matrix, and the theory of operators. Additional subjects include event classes and operator classes, some regular algebras, context-free languages, and commutative regular algebra.
Matrix regularization of 4-manifolds
Trzetrzelewski, M.
2012-01-01
We consider products of two 2-manifolds such as S^2 x S^2, embedded in Euclidean space and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)xSU(N) i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N^2 x N^2 matrix representations of the 4-algebra (and as a byproduct of the 3-algebra which makes the regularization of S...
Regular Breakfast and Blood Lead Levels among Preschool Children
Directory of Open Access Journals (Sweden)
Needleman Herbert
2011-04-01
Abstract Background: Previous studies have shown that fasting increases lead absorption in the gastrointestinal tract of adults. Regular meals/snacks are recommended as a nutritional intervention for lead poisoning in children, but epidemiological evidence of links between fasting and blood lead levels (B-Pb) is rare. The purpose of this study was to examine the association between eating a regular breakfast and B-Pb among children using data from the China Jintan Child Cohort Study. Methods: Parents completed a questionnaire regarding children's breakfast-eating habit (regular or not), demographics, and food frequency. Whole blood samples were collected from 1,344 children for the measurement of B-Pb and micronutrients (iron, copper, zinc, calcium, and magnesium). B-Pb and other measures were compared between children with and without regular breakfast. Linear regression modeling was used to evaluate the association between regular breakfast and log-transformed B-Pb. The association between regular breakfast and risk of lead poisoning (B-Pb ≥ 10 μg/dL) was examined using logistic regression modeling. Results: Median B-Pb among children who ate breakfast regularly and those who did not eat breakfast regularly were 6.1 μg/dL and 7.2 μg/dL, respectively. Eating breakfast was also associated with greater zinc blood levels. Adjusting for other relevant factors, the linear regression model revealed that eating breakfast regularly was significantly associated with lower B-Pb (beta = -0.10 units of log-transformed B-Pb) compared with children who did not eat breakfast regularly, p = 0.02. Conclusion: The present study provides some initial human data supporting the notion that eating a regular breakfast might reduce B-Pb in young children. To our knowledge, this is the first human study exploring the association between breakfast frequency and B-Pb in young children.
Regularization of Nonmonotone Variational Inequalities
International Nuclear Information System (INIS)
Konnov, Igor V.; Ali, M.S.S.; Mazurkevich, E.O.
2006-01-01
In this paper we extend the Tikhonov-Browder regularization scheme from monotone to a rather general class of nonmonotone multivalued variational inequalities. We show that the convergence conditions hold for some classes of perfectly and nonperfectly competitive economic equilibrium problems.
Lattice regularized chiral perturbation theory
International Nuclear Information System (INIS)
Borasoy, Bugra; Lewis, Randy; Ouimet, Pierre-Philippe A.
2004-01-01
Chiral perturbation theory can be defined and regularized on a spacetime lattice. A few motivations are discussed here, and an explicit lattice Lagrangian is reviewed. A particular aspect of the connection between lattice chiral perturbation theory and lattice QCD is explored through a study of the Wess-Zumino-Witten term
2011-01-20
... Meeting SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held at the offices of the Farm... meeting of the Board will be open to the public.
Forcing absoluteness and regularity properties
Ikegami, D.
2010-01-01
For a large natural class of forcing notions, we prove general equivalence theorems between forcing absoluteness statements, regularity properties, and transcendence properties over L and the core model K. We use our results to answer open questions from set theory of the reals.
Globals of Completely Regular Monoids
Institute of Scientific and Technical Information of China (English)
Wu Qian-qian; Gan Ai-ping; Du Xian-kun
2015-01-01
An element of a semigroup S is called irreducible if it cannot be expressed as a product of two elements in S both distinct from itself. In this paper we show that the class C of all completely regular monoids with irreducible identity elements satisfies the strong isomorphism property and so it is globally determined.
Fluid queues and regular variation
Boxma, O.J.
1996-01-01
This paper considers a fluid queueing system, fed by N independent sources that alternate between silence and activity periods. We assume that the distribution of the activity periods of one or more sources is a regularly varying function of index ζ. We show that its fat tail gives rise to an even…
Fluid queues and regular variation
O.J. Boxma (Onno)
1996-01-01
This paper considers a fluid queueing system, fed by $N$ independent sources that alternate between silence and activity periods. We assume that the distribution of the activity periods of one or more sources is a regularly varying function of index $\zeta$. We show that its fat tail…
Empirical laws, regularity and necessity
Koningsveld, H.
1973-01-01
In this book I have tried to develop an analysis of the concept of an empirical law, an analysis that differs in many ways from the alternative analyses found in contemporary literature dealing with the subject. I am referring especially to two well-known views, viz. the regularity and…
Interval matrices: Regularity generates singularity
Czech Academy of Sciences Publication Activity Database
Rohn, Jiří; Shary, S.P.
2018-01-01
Roč. 540, 1 March (2018), s. 149-159 ISSN 0024-3795 Institutional support: RVO:67985807 Keywords: interval matrix * regularity * singularity * P-matrix * absolute value equation * diagonally singularizable matrix Subject RIV: BA - General Mathematics Impact factor: 0.973, year: 2016
Regularization in Matrix Relevance Learning
Schneider, Petra; Bunte, Kerstin; Stiekema, Han; Hammer, Barbara; Villmann, Thomas; Biehl, Michael
In this paper, we present a regularization technique to extend recently proposed matrix learning schemes in learning vector quantization (LVQ). These learning algorithms extend the concept of adaptive distance measures in LVQ to the use of relevance matrices. In general, metric learning can
International Nuclear Information System (INIS)
Aoufi, A.; Damamme, G.
2011-01-01
The aim of this work is to study, by numerical simulation, a mathematical modelling technique describing charge trapping during initial charge injection in an insulator submitted to electron beam irradiation. A two-flux method described by a set of two stationary transport equations is used to split the electron current j_e(z) into coupled forward j_e+(z) and backward j_e-(z) currents such that j_e(z) = j_e+(z) - j_e-(z). The sparse algebraic linear system resulting from the vertex-centered finite-volume discretization scheme is solved by an iterative decoupled fixed-point method which involves the direct inversion of a bi-diagonal matrix. The sensitivity of the initial secondary electron emission yield with respect to the energy of the incident primary electron beam (that is, the penetration depth of the incident beam) and to the electron cross sections (absorption and diffusion) is investigated by numerical simulations. (authors)
Regular and conformal regular cores for static and rotating solutions
Energy Technology Data Exchange (ETDEWEB)
Azreg-Aïnou, Mustapha
2014-03-07
Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress–energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that conformal fluids used as cores for static or rotating solutions are exempt from any malicious behavior in that they are finite and defined everywhere.
Regular and conformal regular cores for static and rotating solutions
International Nuclear Information System (INIS)
Azreg-Aïnou, Mustapha
2014-01-01
Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress–energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that conformal fluids used as cores for static or rotating solutions are exempt from any malicious behavior in that they are finite and defined everywhere.
Energy functions for regularization algorithms
Delingette, H.; Hebert, M.; Ikeuchi, K.
1991-01-01
Regularization techniques are widely used for inverse problem solving in computer vision, such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to yield acceptable solutions these energies must satisfy certain properties, such as invariance under Euclidean transformations or invariance under reparameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature in planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meets this condition as well as invariance under rotation and parameterization.
Physical model of dimensional regularization
Energy Technology Data Exchange (ETDEWEB)
Schonfeld, Jonathan F.
2016-12-15
We explicitly construct fractals of dimension 4-ε on which dimensional regularization approximates scalar-field-only quantum-field theory amplitudes. The construction does not require fractals to be Lorentz-invariant in any sense, and we argue that there probably is no Lorentz-invariant fractal of dimension greater than 2. We derive dimensional regularization's power-law screening first for fractals obtained by removing voids from 3-dimensional Euclidean space. The derivation applies techniques from elementary dielectric theory. Surprisingly, fractal geometry by itself does not guarantee the appropriate power-law behavior; boundary conditions at fractal voids also play an important role. We then extend the derivation to 4-dimensional Minkowski space. We comment on generalization to non-scalar fields, and speculate about implications for quantum gravity. (orig.)
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
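The quantity used as the regularizer, the mutual information between classification responses and true labels, can be illustrated with a plug-in entropy estimate for discrete responses (a simplified sketch: the paper optimizes a differentiable entropy-based estimate by gradient descent rather than this histogram version):

```python
import numpy as np

def entropy(p):
    """Plug-in Shannon entropy (bits) of a probability vector."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mutual_information(responses, labels):
    """Plug-in estimate I(response; label) = H(r) + H(l) - H(r, l)
    for discrete-valued responses and labels."""
    r_vals, r_idx = np.unique(responses, return_inverse=True)
    l_vals, l_idx = np.unique(labels, return_inverse=True)
    joint = np.zeros((len(r_vals), len(l_vals)))
    for i, j in zip(r_idx, l_idx):
        joint[i, j] += 1.0
    joint /= joint.sum()
    return entropy(joint.sum(1)) + entropy(joint.sum(0)) - entropy(joint.ravel())

labels = np.array([0, 0, 1, 1])
# A perfectly informative response attains I = H(label) = 1 bit:
mi_perfect = mutual_information(labels.copy(), labels)
# A response independent of the label carries no information:
mi_useless = mutual_information(np.array([0, 1, 0, 1]), labels)
```

Maximizing this quantity over classifier parameters, alongside the usual loss and complexity terms, is the regularization idea the abstract describes.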
Maximum mutual information regularized classification
Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin
2014-01-01
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
Regularized strings with extrinsic curvature
International Nuclear Information System (INIS)
Ambjoern, J.; Durhuus, B.
1987-07-01
We analyze models of discretized string theories, where the path integral over world sheet variables is regularized by summing over triangulated surfaces. The inclusion of curvature in the action is a necessity for the scaling of the string tension. We discuss the physical properties of models with extrinsic curvature terms in the action and show that the string tension vanishes at the critical point where the bare extrinsic curvature coupling tends to infinity. Similar results are derived for models with intrinsic curvature. (orig.)
Circuit complexity of regular languages
Czech Academy of Sciences Publication Activity Database
Koucký, Michal
2009-01-01
Roč. 45, č. 4 (2009), s. 865-879 ISSN 1432-4350 R&D Projects: GA ČR GP201/07/P276; GA MŠk(CZ) 1M0545 Institutional research plan: CEZ:AV0Z10190503 Keywords : regular languages * circuit complexity * upper and lower bounds Subject RIV: BA - General Mathematics Impact factor: 0.726, year: 2009
Dynamics of coherent states in regular and chaotic regimes of the non-integrable Dicke model
Lerma-Hernández, S.; Chávez-Carlos, J.; Bastarrachea-Magnani, M. A.; López-del-Carpio, B.; Hirsch, J. G.
2018-04-01
The quantum dynamics of initial coherent states is studied in the Dicke model and correlated with the dynamics, regular or chaotic, of their classical limit. Analytical expressions for the survival probability, i.e. the probability of finding the system in its initial state at time t, are provided in the regular regions of the model. The results for regular regimes are compared with those of the chaotic ones. It is found that initial coherent states in regular regions have a much longer equilibration time than those located in chaotic regions. The properties of the distributions of the initial coherent states in the Hamiltonian eigenbasis are also studied. It is found that for regular states the components with non-negligible contributions are organized in sequences of energy levels distributed according to Gaussian functions. In the case of chaotic coherent states, the energy components do not have a simple structure and the number of participating energy levels is larger than in the regular cases.
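Writing the initial coherent state in the Hamiltonian eigenbasis, |Ψ(0)⟩ = Σ_k c_k |E_k⟩, the survival probability takes the standard form (a textbook identity, stated here only to fix notation):

```latex
P(t) = \bigl|\langle \Psi(0) \mid \Psi(t)\rangle\bigr|^{2}
     = \Bigl|\sum_{k} |c_k|^{2}\, e^{-\mathrm{i} E_k t/\hbar}\Bigr|^{2},
```

so the decay of P(t) is governed entirely by how the weights |c_k|² are distributed over the energy levels: Gaussian-organized sequences for regular initial states, unstructured spreads for chaotic ones.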
General inverse problems for regular variation
DEFF Research Database (Denmark)
Damek, Ewa; Mikosch, Thomas Valentin; Rosinski, Jan
2014-01-01
Regular variation of distributional tails is known to be preserved by various linear transformations of some random structures. An inverse problem for regular variation aims at understanding whether the regular variation of a transformed random object is caused by regular variation of components ...
DEFF Research Database (Denmark)
Fadeyi, Moshood O.; Weschler, Charles J.; Tham, Kwok W.
2013-01-01
's reactions with various indoor pollutants. The present study examines this possibility for secondary organic aerosols (SOA) derived from ozone-initiated chemistry with limonene, a commonly occurring indoor terpene. The experiments were conducted at realistic ozone and limonene concentrations in a 240 m3...
Lesosky, Maia; Glass, Tracy; Mukonda, Elton; Hsiao, Nei-Yuan; Abrams, Elaine J; Myer, Landon
2017-11-01
HIV viral load (VL) monitoring is a central tool to evaluate ART effectiveness and transmission risk. There is a global movement to expand VL monitoring following recent recommendations from the World Health Organization (WHO), but there has been little research into VL monitoring in pregnant women. We investigated one important question in this area: when and how frequently VL should be monitored in women initiating ART during pregnancy to predict VL at the time of delivery, in a simulated South African population. We developed a mathematical model simulating VL from conception through delivery using VL data from the Maternal and Child Health - Antiretroviral Therapy (MCH-ART) cohort. VL was modelled based on three major compartments: pre-ART VL, viral decay immediately after ART initiation and viral maintenance (including viral suppression and viraemic episodes). Using this simulation, we examined the performance of various VL monitoring schema in predicting elevated VL at delivery. If WHO guidelines for non-pregnant adults were used, the majority of HIV-infected pregnant women (69%) would not receive a VL test during pregnancy. Most models that based VL monitoring in pregnancy on the time elapsed since ART initiation (regardless of gestation) performed poorly, whereas models that based VL monitoring on the woman's gestation (regardless of time on ART) appeared to perform better overall (sensitivity >60%). Across all permutations, inclusion of pre-ART VL values had a negligible impact on predictive performance, supporting better integration of maternal and HIV health services. Testing turnaround times require careful consideration, and point-of-care VL testing may be the best approach for measuring VL at delivery. Broadening the scope of this simulation model in the light of the current scale-up of VL monitoring in high-burden countries is important. © 2017 The Authors. Journal of the International AIDS Society.
Lavrentiev regularization method for nonlinear ill-posed problems
International Nuclear Information System (INIS)
Kinh, Nguyen Van
2002-10-01
In this paper we shall be concerned with the Lavrentiev regularization method for reconstructing solutions x_0 of nonlinear ill-posed problems F(x) = y_0, where instead of y_0 only noisy data y_δ ∈ X with ‖y_δ - y_0‖ ≤ δ are given, and F: X → X is an accretive nonlinear operator from a real reflexive Banach space X into itself. In this regularization method, regularized solutions x_α^δ are obtained by solving the singularly perturbed nonlinear operator equation F(x) + α(x - x*) = y_δ with some initial guess x*. Assuming certain conditions on the operator F and the smoothness of the element x* - x_0, we derive stability estimates which show that the accuracy of the regularized solutions is order optimal provided that the regularization parameter α is chosen properly. (author)
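A one-dimensional sketch of the scheme, with the monotone map F(x) = x³ standing in for the accretive operator (all names and values below are illustrative): the regularized equation F(x) + α(x − x*) = y_δ is strictly increasing in x, so it can be solved by bisection.

```python
def lavrentiev_solve(F, y_delta, alpha, x_star, lo=-10.0, hi=10.0, tol=1e-12):
    """Solve F(x) + alpha*(x - x_star) = y_delta by bisection.
    Assumes F is monotone increasing on [lo, hi], so the left-hand
    side is strictly increasing and has a unique root."""
    g = lambda x: F(x) + alpha * (x - x_star) - y_delta
    a, b = lo, hi
    while b - a > tol:
        m = 0.5 * (a + b)
        if g(m) > 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

# Toy problem: F(x) = x**3 with exact data y0 = 8, exact solution x0 = 2.
F = lambda x: x ** 3
delta = 0.05                       # noise level
y_delta = 8.0 + delta              # noisy data
x_reg = lavrentiev_solve(F, y_delta, alpha=0.01, x_star=0.0)
```

For α small relative to the noise level, the regularized solution stays close to the exact solution x₀ = 2; the stability estimates in the paper quantify how α should be coupled to δ.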
Directory of Open Access Journals (Sweden)
Elizabeth S. Burnside MD, MPH, MS
2017-07-01
Background: There are no publicly available tools designed specifically to assist policy makers in making informed decisions about the optimal ages of breast cancer screening initiation for different populations of US women. Objective: To use three established simulation models to develop a web-based tool called Mammo OUTPuT. Methods: The simulation models use the 1970 US birth cohort and common parameters for incidence, digital screening performance, and treatment effects. Outcomes include breast cancers diagnosed, breast cancer deaths averted, breast cancer mortality reduction, false-positive mammograms, benign biopsies, and overdiagnosis. The Mammo OUTPuT tool displays these outcomes for combinations of age at screening initiation (every year from 40 to 49), annual versus biennial interval, lifetime versus 10-year horizon, and breast density, compared to waiting to start biennial screening at age 50 and continuing to 74. The tool was piloted by decision makers (n = 16) who completed surveys. Results: The tool demonstrates that benefits in the 40s increase linearly with earlier initiation age, without a specific threshold age. Likewise, the harms of screening increase monotonically with earlier ages of initiation in the 40s. The tool also shows users how the balance of benefits and harms varies with breast density. Surveys revealed that 100% of users (16/16) liked the appearance of the site; 94% (15/16) found the tool helpful; and 94% (15/16) would recommend the tool to a colleague. Conclusions: This tool synthesizes a representative subset of the most current CISNET (Cancer Intervention and Surveillance Modeling Network) simulation model outcomes to provide policy makers with quantitative data on the benefits and harms of screening women in the 40s. Ultimate decisions will depend on program goals, the population served, and informed judgments about the weight of benefits and harms.
Gettman, Matthew T; Pereira, Claudio W; Lipsky, Katja; Wilson, Torrence; Arnold, Jacqueline J; Leibovich, Bradley C; Karnes, R Jeffrey; Dong, Yue
2009-03-01
Structured opportunities for learning communication, teamwork and laparoscopic principles are limited for urology residents. We evaluated and taught teamwork, communication and laparoscopic skills to urology residents in a simulated operating room. Scenarios related to laparoscopy (insufflator failure, carbon dioxide embolism) were developed using mannequins, urology residents and nurses. These scenarios were developed based on Accreditation Council for Graduate Medical Education core competencies and performed in a simulation center. Between the pretest scenario (insufflator failure) and the posttest scenario (carbon dioxide embolism) instruction was given on teamwork, communication and laparoscopic skills. A total of 19 urology residents participated in the training that involved participation in at least 2 scenarios. Performance was evaluated using validated teamwork instruments, questionnaires and videotape analysis. Significant improvement was noted on validated teamwork instruments between scenarios based on resident (pretest 24, posttest 27, p = 0.01) and expert (pretest 16, posttest 25, p = 0.008) evaluation. Increased teamwork and team performance were also noted between scenarios on videotape analysis with significant improvement for adherence to best practice (p = 0.01) and maintenance of positive rapport among team members (p = 0.02). Significant improvement in the setup of the laparoscopic procedure was observed (p = 0.01). Favorable face and content validity was noted for both scenarios. Teamwork, intraoperative communication and laparoscopic skills of urology residents improved during the high fidelity simulation course. Face and content validity of the individual sessions was favorable. In this study high fidelity simulation was effective for assessing and teaching Accreditation Council for Graduate Medical Education core competencies related to intraoperative communication, teamwork and laparoscopic skills.
DEFF Research Database (Denmark)
Weschler, Charles J.; Wisthaler, Armin; Tamás, Gyöngyi
2006-01-01
Proton-transfer-reaction mass spectrometry (PTR-MS) was used to examine organic compounds in the air of a simulated aircraft cabin under four conditions: low ozone, low air exchange rate; low ozone, high air exchange rate; high ozone, low air exchange rate; high ozone, high air exchange rate. … The results showed large differences in the chemical composition of the cabin air between the low and high ozone conditions. These differences were more pronounced at the low air exchange condition.
Mononen, Mika E.; Tanska, Petri; Isaksson, Hanna; Korhonen, Rami K.
2016-02-01
We present a novel algorithm combined with computational modeling to simulate the development of knee osteoarthritis. The degeneration algorithm was based on excessive, cumulatively accumulated stresses within knee joint cartilage during physiological gait loading. In the algorithm, the collagen network stiffness of cartilage was reduced iteratively wherever excessive maximum principal stresses were observed. The developed algorithm was tested and validated against experimental baseline and 4-year follow-up Kellgren-Lawrence grades, indicating different levels of cartilage degeneration at the tibiofemoral contact region. Test groups consisted of normal-weight and obese subjects of the same gender and similar age and height, without osteoarthritic changes. The algorithm accurately simulated cartilage degeneration, as compared to the Kellgren-Lawrence findings, in the subject group with excess weight, while the healthy subject group's joint remained intact. Furthermore, the developed algorithm followed the experimentally found trend of cartilage degeneration in the obese group (R2 = 0.95, p < 0.05), including the onset and progression of osteoarthritis (0-2 years, p < 0.05), whereas the normal-weight group's joint remained intact (p > 0.05). The proposed algorithm reveals great potential to objectively simulate the progression of knee osteoarthritis.
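The stress-driven degeneration loop described above can be caricatured in a few lines (a deliberately toy load-sharing rule and made-up numbers, not the paper's finite-element model; `reduction`, `threshold`, and the stiffness values are illustrative):

```python
import numpy as np

def degenerate(stiffness, load, threshold, reduction=0.9, cycles=50):
    """Iteratively reduce element stiffness wherever the (toy) stress
    exceeds a threshold, mimicking a stress-driven degeneration loop."""
    E = stiffness.astype(float).copy()
    for _ in range(cycles):
        # Toy load sharing: stiffer elements attract proportionally more stress.
        stress = load * E / E.sum()
        overloaded = stress > threshold
        if not overloaded.any():
            break                      # stresses relaxed below threshold
        E[overloaded] *= reduction     # degrade only overloaded elements
    return E

E0 = np.array([10.0, 10.0, 10.0, 14.0])   # one stiff, overloaded element
E_end = degenerate(E0, load=100.0, threshold=30.0)
```

In this toy run only the overloaded element loses stiffness, and the loop halts once all element stresses fall back under the threshold, which is the qualitative behavior of the iterative algorithm in the abstract.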
DEFF Research Database (Denmark)
Gould, Derek A; Chalmers, Nicholas; Johnson, Sheena J
2012-01-01
Recognition of the many limitations of traditional apprenticeship training is driving new approaches to learning medical procedural skills. Among simulation technologies and methods available today, computer-based systems are topical and bring the benefits of automated, repeatable, and reliable performance assessments. Human factors research is central to simulator model development that is relevant to real-world imaging-guided interventional tasks and to the credentialing programs in which it would be used.
Efficient multidimensional regularization for Volterra series estimation
Birpoutsoukis, Georgios; Csurcsia, Péter Zoltán; Schoukens, Johan
2018-05-01
This paper presents an efficient nonparametric time-domain nonlinear system identification method. It is shown how truncated Volterra series models can be efficiently estimated without the need for long, transient-free measurements. The method is a novel extension of the regularization methods that have been developed for impulse response estimates of linear time-invariant systems. To avoid the excessive memory needs in the case of long measurements or a large number of estimated parameters, a practical gradient-based estimation method is also provided, leading to the same numerical results as the proposed Volterra estimation method. Moreover, the transient effects in the simulated output are removed by a special regularization method based on the novel ideas of transient removal for Linear Time-Varying (LTV) systems. Combining the proposed methodologies, the nonparametric Volterra models of the cascaded water tanks benchmark are presented in this paper. The results for different scenarios, varying from a simple Finite Impulse Response (FIR) model to a 3rd-degree Volterra series with and without transient removal, are compared and studied. It is clear that the obtained models capture the system dynamics when tested on a validation dataset, and their performance is comparable with the white-box (physical) models.
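The linear special case that this method generalizes, regularized FIR impulse-response estimation under a smoothness/decay prior, can be sketched directly. The TC ("tuned/correlated") kernel below is a standard prior from the linear regularization literature the paper builds on; all parameter values are illustrative:

```python
import numpy as np

def tc_kernel(n, lam=0.8):
    """TC prior covariance P[i,j] = lam**max(i,j), encoding smooth,
    exponentially decaying impulse responses."""
    idx = np.arange(n)
    return lam ** np.maximum.outer(idx, idx)

def regularized_fir(u, y, n_taps, sigma2=1e-4, lam=0.8):
    """Regularized estimate: theta = (Phi'Phi + sigma2 * P^-1)^-1 Phi' y."""
    N = len(y)
    Phi = np.zeros((N, n_taps))
    for k in range(n_taps):            # Toeplitz regressor of past inputs
        Phi[k:, k] = u[: N - k]
    P = tc_kernel(n_taps, lam)
    A = Phi.T @ Phi + sigma2 * np.linalg.inv(P)
    return np.linalg.solve(A, Phi.T @ y)

rng = np.random.default_rng(1)
h_true = 0.8 ** np.arange(20)          # true (exponentially decaying) FIR
u = rng.standard_normal(400)
y = np.convolve(u, h_true)[:400] + 0.01 * rng.standard_normal(400)
h_hat = regularized_fir(u, y, n_taps=20)
```

The Volterra extension in the paper applies the same penalized least-squares structure to multidimensional kernels, which is where the efficient multidimensional regularization matters.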
Accelerating Large Data Analysis By Exploiting Regularities
Moran, Patrick J.; Ellsworth, David
2003-01-01
We present techniques for discovering and exploiting regularity in large curvilinear data sets. The data can be based on a single mesh or a mesh composed of multiple submeshes (also known as zones). Multi-zone data are typical of Computational Fluid Dynamics (CFD) simulations. Regularities include axis-aligned rectilinear and cylindrical meshes, as well as cases where one zone is equivalent to a rigid-body transformation of another. Our algorithms can also discover rigid-body motion of meshes in time-series data. Next, we describe a data model where we can utilize the results from the discovery process in order to accelerate large data visualizations. Where possible, we replace general curvilinear zones with rectilinear or cylindrical zones. In rigid-body motion cases we replace a time-series of meshes with a transformed mesh object, where a reference mesh is dynamically transformed based on a given time value in order to satisfy geometry requests on demand. The data model enables us to make these substitutions and dynamic transformations transparently with respect to the visualization algorithms. We present results with large data sets where we combine our mesh replacement and transformation techniques with out-of-core paging in order to achieve significant speed-ups in analysis.
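The simplest regularity in this catalogue, a curvilinear zone that is secretly an axis-aligned rectilinear mesh, can be detected by checking that each coordinate array is constant along one index direction. A minimal 2-D sketch (the paper's discovery process also covers cylindrical meshes and rigid-body equivalences):

```python
import numpy as np

def is_rectilinear(X, Y, tol=1e-12):
    """Detect whether a 2-D curvilinear grid (X, Y) is an axis-aligned
    rectilinear mesh: one coordinate constant along rows, the other
    constant along columns (in either index order)."""
    x_cols = np.allclose(X, X[:1, :], atol=tol) and np.allclose(Y, Y[:, :1], atol=tol)
    x_rows = np.allclose(X, X[:, :1], atol=tol) and np.allclose(Y, Y[:1, :], atol=tol)
    return x_cols or x_rows

# A rectilinear grid stored in curvilinear (2-D coordinate array) form:
xs, ys = np.linspace(0.0, 1.0, 5), np.linspace(0.0, 2.0, 4)
X, Y = np.meshgrid(xs, ys)
# The same grid rotated by 30 degrees is genuinely curvilinear in this sense:
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
Xr, Yr = c * X - s * Y, s * X + c * Y
```

Once such a zone is detected, the full 2-D coordinate arrays can be replaced by the two 1-D axis vectors, which is the kind of substitution that makes the accelerated data model pay off.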
Emotion regulation deficits in regular marijuana users.
Zimmermann, Kaeli; Walz, Christina; Derckx, Raissa T; Kendrick, Keith M; Weber, Bernd; Dore, Bruce; Ochsner, Kevin N; Hurlemann, René; Becker, Benjamin
2017-08-01
Effective regulation of negative affective states has been associated with mental health. Impaired regulation of negative affect represents a risk factor for dysfunctional coping mechanisms such as drug use and thus could contribute to the initiation and development of problematic substance use. This study investigated behavioral and neural indices of emotion regulation in regular marijuana users (n = 23) and demographically matched nonusing controls (n = 20) by means of an fMRI cognitive emotion regulation (reappraisal) paradigm. Relative to nonusing controls, marijuana users demonstrated increased neural activity in a bilateral frontal network comprising precentral, middle cingulate, and supplementary motor regions during reappraisal of negative affect (P < 0.05, corrected), together with reduced amygdala-prefrontal functional coupling in marijuana users relative to controls. Together, the present findings could reflect an unsuccessful attempt at compensatory recruitment of additional neural resources in the context of disrupted amygdala-prefrontal interaction during volitional emotion regulation in marijuana users. As such, impaired volitional regulation of negative affect might represent a consequence of, or risk factor for, regular marijuana use. Hum Brain Mapp 38:4270-4279, 2017. © 2017 Wiley Periodicals, Inc.
Regularized Statistical Analysis of Anatomy
DEFF Research Database (Denmark)
Sjöstrand, Karl
2007-01-01
This thesis presents the application and development of regularized methods for the statistical analysis of anatomical structures. Focus is on structure-function relationships in the human brain, such as the connection between early onset of Alzheimer's disease and shape changes of the corpus … and mind. Statistics represents a quintessential part of such investigations as they are preluded by a clinical hypothesis that must be verified based on observed data. The massive amounts of image data produced in each examination pose an important and interesting statistical challenge … efficient algorithms which make the analysis of large data sets feasible, and gives examples of applications…
Regularization methods in Banach spaces
Schuster, Thomas; Hofmann, Bernd; Kazimierski, Kamil S
2012-01-01
Regularization methods aimed at finding stable approximate solutions are a necessary tool to tackle inverse and ill-posed problems. Usually the mathematical model of an inverse problem consists of an operator equation of the first kind, and often the associated forward operator acts between Hilbert spaces. However, for numerous problems the reasons for using a Hilbert space setting seem to be based on convention rather than on an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, sparsity constraints using general Lp-norms or the B…
Academic Training Lecture - Regular Programme
PH Department
2011-01-01
Regular Lecture Programme 9 May 2011 ACT Lectures on Detectors - Inner Tracking Detectors by Pippa Wells (CERN) 10 May 2011 ACT Lectures on Detectors - Calorimeters (2/5) by Philippe Bloch (CERN) 11 May 2011 ACT Lectures on Detectors - Muon systems (3/5) by Kerstin Hoepfner (RWTH Aachen) 12 May 2011 ACT Lectures on Detectors - Particle Identification and Forward Detectors by Peter Krizan (University of Ljubljana and J. Stefan Institute, Ljubljana, Slovenia) 13 May 2011 ACT Lectures on Detectors - Trigger and Data Acquisition (5/5) by Dr. Brian Petersen (CERN) from 11:00 to 12:00 at CERN ( Bldg. 222-R-001 - Filtration Plant )
Singular tachyon kinks from regular profiles
International Nuclear Information System (INIS)
Copeland, E.J.; Saffin, P.M.; Steer, D.A.
2003-01-01
We demonstrate how Sen's singular kink solution of the Born-Infeld tachyon action can be constructed by taking the appropriate limit of initially regular profiles. It is shown that the order in which different limits are taken plays an important role in determining whether or not such a solution is obtained for a wide class of potentials. Indeed, by introducing a small parameter into the action, we are able to circumvent the results of a recent paper which derived two conditions on the asymptotic tachyon potential such that the singular kink could be recovered in the large-amplitude limit of periodic solutions. We show that this is explained by the non-commuting nature of two limits, and that Sen's solution is recovered if the order of the limits is chosen appropriately.
Energy Technology Data Exchange (ETDEWEB)
Yanagawa, T. [Department of Physics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi 464-8602 (Japan); Sakagami, H. [Fundamental Physics Simulation Division, National Institute for Fusion Science, Oroshi-cho, Toki, Gifu 509-5292 (Japan); Nagatomo, H. [Institute of Laser Engineering, Osaka University, Suita, Osaka 565-0871 (Japan)
2013-10-15
In inertial confinement fusion, the implosion process is important in forming a high-density plasma core. In the case of a fast ignition scheme using a cone-guided target, the fuel target is imploded with a cone inserted. This scheme is advantageous for efficiently heating the imploded fuel core; however, asymmetric implosion is essentially inevitable. Moreover, the effect of cone position and opening angle on implosion also becomes critical. Focusing on these problems, the effect of the asymmetric implosion, the initial position, and the opening angle on the compression rate of the fuel is investigated using a three-dimensional pure hydrodynamic code.
Oriol, Nancy E; Hayden, Emily M; Joyal-Mowschenson, Julie; Muret-Wagstaff, Sharon; Faux, Russell; Gordon, James A
2011-09-01
In the natural world, learning emerges from the joy of play, experimentation, and inquiry as part of everyday life. However, this kind of informal learning is often difficult to integrate within structured educational curricula. This report describes an educational program that embeds naturalistic learning into formal high school, college, and graduate school science class work. Our experience is based on work with hundreds of high school, college, and graduate students enrolled in traditional science classes in which mannequin simulators were used to teach physiological principles. Specific case scenarios were integrated into the curriculum as problem-solving exercises chosen to accentuate the basic science objectives of the course. This report also highlights the historic and theoretical basis for the use of mannequin simulators as an important physiology education tool and outlines how the authors' experience in healthcare education has been effectively translated to nonclinical student populations. Particular areas of focus include critical-thinking and problem-solving behaviors and student reflections on the impact of the teaching approach.
RES: Regularized Stochastic BFGS Algorithm
Mokhtari, Aryan; Ribeiro, Alejandro
2014-12-01
RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high-dimensional problems. Application of second-order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
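The interplay described in the abstract (stochastic gradients driving both the descent direction and the curvature estimate, with regularization keeping the Hessian approximation well conditioned) can be sketched as follows. This is a simplified illustration of the idea, not the authors' exact RES algorithm; the fixed step size, the delta and gamma constants, and the minibatch scheme are all illustrative assumptions.

```python
import numpy as np

def res_sketch(A, b, x0, steps=500, eps=0.05, delta=1e-3, gamma=1e-4, batch=10, seed=0):
    """Simplified regularized stochastic BFGS (a sketch of the RES idea):
    stochastic gradients drive both the descent direction and the curvature
    estimate; delta*I and gamma*I corrections keep the updates well behaved.
    Minimizes the least-squares objective mean_i 0.5*(a_i . x - b_i)^2."""
    rng = np.random.default_rng(seed)
    n = len(x0)
    x = x0.astype(float).copy()
    Binv = np.eye(n)                      # inverse Hessian approximation

    def sgrad(x, idx):                    # stochastic gradient on a minibatch
        Ai, bi = A[idx], b[idx]
        return Ai.T @ (Ai @ x - bi) / len(idx)

    for _ in range(steps):
        idx = rng.integers(0, len(b), size=batch)
        g = sgrad(x, idx)
        x_new = x - eps * (Binv @ g + gamma * g)   # regularized descent direction
        # curvature pair evaluated on the SAME minibatch
        s = x_new - x
        r = sgrad(x_new, idx) - g - delta * s      # modified gradient difference
        if s @ r > 1e-10 * (s @ s):                # damped BFGS inverse update
            rho = 1.0 / (s @ r)
            I = np.eye(n)
            Binv = (I - rho * np.outer(s, r)) @ Binv @ (I - rho * np.outer(r, s)) \
                   + rho * np.outer(s, s)
        x = x_new
    return x
```

The full RES algorithm uses diminishing step sizes and comes with the eigenvalue-bound convergence guarantees mentioned in the abstract; the sketch only shows the mechanics of the update.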
Regularized Label Relaxation Linear Regression.
Fang, Xiaozhao; Xu, Yong; Li, Xuelong; Lai, Zhihui; Wong, Wai Keung; Fang, Bingwu
2018-04-01
Linear regression (LR) and some of its variants have been widely used for classification problems. Most of these methods assume that, during the learning phase, the training samples can be exactly transformed into a strict binary label matrix, which has too little freedom to fit the labels adequately. To address this problem, we propose a novel regularized label relaxation LR method with the following notable characteristics. First, the proposed method relaxes the strict binary label matrix into a slack variable matrix by introducing a nonnegative label relaxation matrix into LR, which provides more freedom to fit the labels and simultaneously enlarges the margins between different classes as much as possible. Second, the proposed method constructs a class compactness graph based on manifold learning and uses it as a regularization term to avoid overfitting. The class compactness graph ensures that samples sharing the same labels remain close after they are transformed. Two different algorithms, each based on a different norm loss function, are devised. These two algorithms have compact closed-form solutions in each iteration, so they are easily implemented. Extensive experiments show that these two algorithms outperform state-of-the-art algorithms in terms of classification accuracy and running time.
Infants Learn Phonotactic Regularities from Brief Auditory Experience.
Chambers, Kyle E.; Onishi, Kristine H.; Fisher, Cynthia
2003-01-01
Two experiments investigated whether novel phonotactic regularities, not present in English, could be acquired by 16.5-month-olds from brief auditory experience. Subjects listened to consonant-vowel-consonant syllables in which particular consonants were artificially restricted to either initial or final position. Findings in a subsequent…
Regularizing properties of complex Monge-Ampère flows
Tô, Tat Dat
2016-01-01
We study the regularizing properties of complex Monge-Ampère flows on a Kähler manifold (X, ω) when the initial data are ω-psh functions with zero Lelong number at all points. We prove that the general Monge-Ampère flow has a solution which is immediately smooth. We also prove the uniqueness and stability of the solution.
Energy Technology Data Exchange (ETDEWEB)
Ud-Din Khan, Salah [Chinese Academy of Sciences, Hefei (China). Inst. of Plasma Physics; King Saud Univ., Riyadh (Saudi Arabia). Sustainable Energy Technologies Center; Peng, Minjun [Harbin Engineering Univ. (China). College of Nuclear Science and Technology; Yuntao, Song; Ud-Din Khan, Shahab [Chinese Academy of Sciences, Hefei (China). Inst. of Plasma Physics; Haider, Sajjad [King Saud Univ., Riyadh (Saudi Arabia). Sustainable Energy Technologies Center
2017-02-15
The objective is to analyze the safety of small modular nuclear reactors of 220 MWe power. Reactivity-initiated accidents (RIA) were investigated by a neutron-kinetics/thermal-hydraulics (NK/TH) coupling approach and by the thermal-hydraulics code RELAP5. The results obtained by these approaches were compared for validation and accuracy of the simulation. In the NK/TH coupling technique, three codes (HELIOS, REMARK, THEATRe) were used. These codes calculate different parameters of the reactor core (fission power, reactivity, fuel temperature, and inlet/outlet temperatures). The data exchanges between the codes were assessed by running the codes simultaneously. The results obtained from the NK/TH coupling and RELAP5 analyses complement each other, confirming the accuracy of the simulation.
Nam, M. H.; Winters, J. M.; Stark, L.
1981-01-01
Voluntary active head rotations produced vestibulo-ocular reflex eye movements (VOR) with the subject viewing a fixation target. When this target jumped, the size of the refixation saccades was a function of the ongoing initial velocity of the eye. Saccades made against the VOR were larger in magnitude. Simulation of a reciprocally innervated model of eye movement provided results comparable to the experimental data. Most of the experimental effect appeared to be due to linear summation for saccades of 5 and 10 degrees in magnitude. For small saccades of 2.5 degrees, peripheral nonlinear interaction of state variables in the neuromuscular plant also played a role, as shown by comparable behavior in the simulated model with known controller signals.
Ross, Sheldon
2006-01-01
Ross's Simulation, Fourth Edition introduces aspiring and practicing actuaries, engineers, computer scientists and others to the practical aspects of constructing computerized simulation studies to analyze and interpret real phenomena. Readers learn to apply results of these analyses to problems in a wide variety of fields to obtain effective, accurate solutions and make predictions about future outcomes. This text explains how a computer can be used to generate random numbers, and how to use these random numbers to generate the behavior of a stochastic model over time. It presents the statist…
A Class of Manifold Regularized Multiplicative Update Algorithms for Image Clustering.
Yang, Shangming; Yi, Zhang; He, Xiaofei; Li, Xuelong
2015-12-01
Multiplicative update algorithms are important tools for information retrieval, image processing, and pattern recognition. However, when graph regularization is added to the cost function, different classes of sample data may be mapped to the same subspace, which increases the data clustering error rate. In this paper, an improved nonnegative matrix factorization (NMF) cost function is introduced. Based on this cost function, a class of novel graph-regularized NMF algorithms is developed, which results in a class of extended multiplicative update algorithms with manifold structure regularization. Analysis shows that during learning the proposed algorithms can efficiently minimize the rank of the data representation matrix. Theoretical results presented in this paper are confirmed by simulations. For different initializations and data sets, variation curves of cost functions and decomposition data are presented to show the convergence features of the proposed update rules. Basis images, reconstructed images, and clustering results are used to demonstrate the efficiency of the new algorithms. Lastly, the clustering accuracies of different algorithms are also investigated, showing that the proposed algorithms can achieve state-of-the-art performance in applications of image clustering.
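The kind of update rules the abstract refers to can be illustrated with the standard graph-regularized NMF multiplicative updates (the paper's improved cost function differs in detail; the lambda value and graph construction below are illustrative assumptions, not the authors' choices):

```python
import numpy as np

def gnmf(V, A, k, lam=0.1, iters=200, eps=1e-9, seed=0):
    """Standard graph-regularized NMF via multiplicative updates.
    V: (m, n) nonnegative data, one sample per column.
    A: (n, n) symmetric nonnegative affinity graph over samples.
    Minimizes ||V - W H||_F^2 + lam * Tr(H L H^T) with L = D - A."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    D = np.diag(A.sum(axis=1))            # degree matrix of the graph
    for _ in range(iters):
        # multiplicative updates preserve nonnegativity of W and H
        W *= (V @ H.T) / (W @ (H @ H.T) + eps)
        H *= (W.T @ V + lam * (H @ A)) / (W.T @ (W @ H) + lam * (H @ D) + eps)
    return W, H
```

For clustering, each sample (column of V) is typically assigned to the row of H with the largest coefficient; the graph term pulls the representations of neighboring samples together.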
International Nuclear Information System (INIS)
Akopov, N.; Grigoryan, A.; Karyan, G.
2017-01-01
The aim of this paper is to investigate GEANT4 simulations of electromagnetic showers initiated by 30 MeV photons entering the atmosphere at different altitudes (h). Charged and neutral components of the shower have been studied in various radial slices (R), with the detecting level corresponding to the altitude of Mount Aragats, where the experimental setups of the Cosmic Ray Division (CRD) of the Yerevan Physics Institute (YerPhI) are operating. Qualitative observations of the energy spectra, as well as the tabulated parameters describing the fluxes at different values of h and R, are used for comparison with the experimental data. The experimental data on particle fluxes are considered to be correlated with atmospheric conditions such as pressure, temperature, and the presence of charged clouds initiating lightning. (author)
International Nuclear Information System (INIS)
Ugachi, Hirokazu; Tsukada, Takashi; Kaji, Yoshiyuki; Nagata, Nobuaki; Dozaki, Koji; Takiguchi, Hideki
2003-01-01
Irradiation assisted stress corrosion cracking (IASCC) is caused by the synergistic effects of neutron irradiation, stress, and corrosion by high-temperature water. It is therefore essential to perform in-pile SCC tests (material tests under conditions simulating actual LWR operation) in order to clarify the precise mechanism of the phenomenon, though mainly out-of-pile SCC tests on irradiated materials have been carried out in this research field. There are, however, many difficulties in performing in-pile SCC tests, and essential key techniques must be developed. Hence, as part of the development of key techniques for in-pile SCC tests, we have embarked on the development of a test technique which enables us to obtain information concerning the effects of parameters such as applied stress level, water chemistry, and irradiation conditions on crack initiation behavior. Although it is difficult to detect crack initiation in in-pile SCC tests, crack initiation can be evaluated by detecting specimen rupture if the cross-sectional area of the specimen is small enough. Therefore, we adopted the uniaxial constant loading (UCL) test with small tensile specimens. This paper describes the current status of the development of several techniques for in-pile SCC initiation tests in the JMTR and the results of performance tests of the designed testing unit using the out-of-pile loop facility. (author)
International Nuclear Information System (INIS)
Guyot, Maxime
2014-01-01
This project is dedicated to the analysis and quantification of the bias associated with the computational methodology for simulating the initiating phase of severe accidents in Sodium Fast Reactors. A deterministic approach is carried out to assess the consequences of a severe accident by adopting best-estimate design evaluations. An objective of this deterministic approach is to provide guidance to mitigate severe accident developments and re-criticalities through the implementation of adequate design measures. These studies are generally based on modern simulation techniques to test and verify a given design. The new approach developed in this project aims to improve the safety assessment of Sodium Fast Reactors by decreasing the bias related to the deterministic analysis of severe accident scenarios. During the initiating phase, the subassembly wrapper tubes keep their mechanical integrity, and material disruption and dispersal is primarily one-dimensional. For this reason, the evaluation methodology for the initiating phase relies on a multiple-channel approach, where a channel typically represents an average pin in a subassembly or a group of similar subassemblies. In the multiple-channel approach, the core thermal-hydraulics model is composed of 1- or 2-D channels. The thermal-hydraulics model is coupled to a neutronics module to provide an estimate of the reactor power level. In this project, a new computational model has been developed to extend the initiating-phase modeling. This new model is based on multi-physics coupling and has been applied to obtain information unavailable up to now with regard to neutronics and thermal-hydraulics models and their coupling. (author) [fr]
Energy Technology Data Exchange (ETDEWEB)
Zhang, Xuesong; Izaurralde, Roberto C.; Arnold, Jeffrey; Williams, Jimmy R.; Srinivasan, Raghavan
2013-10-01
Climate change is one of the most compelling modern issues and has important implications for almost every aspect of natural and human systems. The Soil and Water Assessment Tool (SWAT) model has been applied worldwide to support sustainable land and water management in a changing climate. However, the inadequacies of the existing carbon algorithm in SWAT limit its application in assessing the impacts of human activities on CO2 emission, an important greenhouse gas (GHG) that traps heat in the earth system and drives global warming. In this research, we incorporate a revised version of the CENTURY carbon model into SWAT to describe the dynamics of soil organic matter (SOM) and residue, and to simulate land-atmosphere carbon exchange.
Kim, Steven C; Fisher, Jeremy G; Delman, Keith A; Hinman, Johanna M; Srinivasan, Jahnavi K
Surgical simulation is an important adjunct in surgical education. The majority of operative procedures can be simplified to core components. This study aimed to quantify the utility of a cadaver-based simulation course in improving exposure to fundamental maneuvers and resident and attending confidence in trainee capability, and to determine whether this led to earlier operative independence. A list of fundamental surgical procedures was established by a faculty panel. Residents were assigned to a group led by a chief resident. Residents performed skills on cadavers appropriate for PGY level. A video-recorded examination where they narrated and demonstrated a task independently was then graded by attendings using standardized rubrics. Participants completed surveys regarding improvements in knowledge and confidence. The course was conducted at the Emory University School of Medicine and the T3 Laboratories in Atlanta, GA. A total of 133 residents and 41 attendings participated in the course. 133 (100%) participating residents and 32 (78%) attendings completed surveys. Resident confidence in completing the assigned skill independently increased from 3 (2-3) to 4 (3-4), p 80%), p < 0.04. Attendings were more likely to grant autonomy in the operating room after this exercise (4 [3-5]). A cadaveric skills course focused on fundamental maneuvers with objective confirmation of success is a viable adjunct to clinical operative experience. Residents were formally exposed to fundamental surgical maneuvers earlier as a result of this course. This activity improved both resident and attending confidence in trainee operative skill, resulting in increased attending willingness to grant a higher level of autonomy in the operating room. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Lyu, Peisheng; Wang, Wanlin; Long, Xukai; Zhang, Kaixuan; Gao, Erzhuo; Qin, Rongshan
2018-02-01
The chamfered mold with a typical corner shape (45 deg angle between the chamfered face and the hot face) was applied in this mold simulator study, and the results were compared with previous results from a well-developed right-angle mold simulator system. The results suggested that the designed chamfered structure increases the thermal resistance and weakens the two-dimensional heat transfer around the mold corner, leading to more homogeneous mold surface temperatures and heat fluxes. In addition, the chamfered structure can decrease the fluctuation of the steel level and the liquid slag flow around the meniscus at the mold corner. The cooling intensities at different longitudinal sections of the shell are close to each other due to similar time-averaged solidification factors, which are 2.392 mm/s^1/2 (section A-A: chamfered center), 2.372 mm/s^1/2 (section B-B: 135 deg corner), and 2.380 mm/s^1/2 (section D-D: face). For the same oscillation mark (OM), the heights of the OM roots at different positions (profile L1 (face), profile L2 (135 deg corner), and profile L3 (chamfered center)) are very close to each other. The average height difference (HD) between two OM roots is 0.22 mm for L1 and L2, and 0.38 mm for L2 and L3. Finally, with the help of metallographic examination, the shapes of different hooks are also discussed.
From inactive to regular jogger
DEFF Research Database (Denmark)
Lund-Cramer, Pernille; Brinkmann Løite, Vibeke; Bredahl, Thomas Viskum Gjelstrup
Title: From inactive to regular jogger - a qualitative study of achieved behavioral change among recreational joggers. Authors: Pernille Lund-Cramer & Vibeke Brinkmann Løite. Purpose: Despite extensive knowledge of barriers to physical activity, most interventions promoting physical activity have proven … study was conducted using individual semi-structured interviews on how a successful long-term behavior change had been achieved. Ten informants were purposely selected from participants in the DANO-RUN research project (7 men, 3 women, average age 41.5). Interviews were performed on the basis of the Theory of Planned Behavior (TPB) and the Transtheoretical Model (TTM). Coding and analysis of interviews were performed using NVivo 10 software. Results (TPB): During the behavior change process, the intention to jog shifted from a focus on weight loss and improved fitness to both physical health, psychological …
Tessellating the Sphere with Regular Polygons
Soto-Johnson, Hortensia; Bechthold, Dawn
2004-01-01
Tessellations in the Euclidean plane and regular polygons that tessellate the sphere are reviewed. The regular polygons that can possibly tessellate the sphere are spherical triangles, squares, and pentagons.
On the equivalence of different regularization methods
International Nuclear Information System (INIS)
Brzezowski, S.
1985-01-01
The R̂-operation preceded by the regularization procedure is discussed. Some arguments are given according to which the results may depend on the method of regularization introduced in order to avoid divergences in perturbation calculations. 10 refs. (author)
The uniqueness of the regularization procedure
International Nuclear Information System (INIS)
Brzezowski, S.
1981-01-01
On the grounds of the BPHZ procedure, the criteria of correct regularization in perturbation calculations of QFT are given, together with the prescription for dividing the regularized formulas into the finite and infinite parts. (author)
Application of Turchin's method of statistical regularization
Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey
2018-04-01
During analysis of experimental data, one usually needs to restore a signal after it has been convolved with some kind of apparatus function. According to Hadamard's definition this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of Turchin's method of statistical regularization, based on the Bayesian approach to the regularization strategy.
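In its simplest Gaussian form, Turchin's approach reduces to a Tikhonov-type estimator: a smoothness prior on the unknown signal combined with a Gaussian noise model yields a closed-form posterior mean. A minimal sketch follows; the second-difference prior, the fixed alpha, and the noise level sigma are illustrative assumptions (the full method treats the prior strength itself probabilistically):

```python
import numpy as np

def tikhonov_deconvolve(K, y, alpha, sigma=1.0):
    """Bayesian (Turchin-style) regularized unfolding, Gaussian case.
    K: (m, n) apparatus/convolution matrix, y: (m,) measured data.
    A Gaussian prior on the second derivative of the signal gives the
    Tikhonov-type posterior mean (K^T K / sigma^2 + alpha*Omega)^-1 K^T y / sigma^2."""
    n = K.shape[1]
    # Omega: squared second-difference operator encoding the smoothness prior
    D2 = np.diff(np.eye(n), n=2, axis=0)
    Omega = D2.T @ D2
    A = K.T @ K / sigma**2 + alpha * Omega
    return np.linalg.solve(A, K.T @ y / sigma**2)
```

Without the alpha*Omega term the system is ill-posed in Hadamard's sense: small noise in y is amplified by the near-zero singular values of K.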
Regular extensions of some classes of grammars
Nijholt, Antinus
Culik and Cohen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this report we consider the analogous extension of the LL(k) grammars, called the LL-regular grammars. The relations of this class of grammars to other classes of grammars are shown. Every LL-regular
International Nuclear Information System (INIS)
Giannantonio, Tommaso; Porciani, Cristiano
2010-01-01
We study structure formation in the presence of primordial non-Gaussianity of the local type with parameters f_NL and g_NL. We show that the distribution of dark-matter halos is naturally described by a multivariate bias scheme where the halo overdensity depends not only on the underlying matter density fluctuation δ but also on the Gaussian part of the primordial gravitational potential φ. This corresponds to a non-local bias scheme in terms of δ only. We derive the coefficients of the bias expansion as a function of the halo mass by applying the peak-background split to common parametrizations for the halo mass function in the non-Gaussian scenario. We then compute the halo power spectrum and halo-matter cross spectrum in the framework of Eulerian perturbation theory up to third order. Comparing our results against N-body simulations, we find that our model accurately describes the numerical data for wave numbers k ≤ 0.1-0.3 h Mpc^-1 depending on redshift and halo mass. In our multivariate approach, perturbations in the halo counts trace φ on large scales, and this explains why the halo and matter power spectra show different asymptotic trends for k → 0. This strongly scale-dependent bias originates from terms at leading order in our expansion. This is different from what happens using the standard univariate local bias, where the scale-dependent terms come from badly behaved higher-order corrections. On the other hand, our biasing scheme reduces to the usual local bias on smaller scales, where |φ| is typically much smaller than the density perturbations. We finally discuss the halo bispectrum in the context of multivariate biasing and show that, due to its strong scale and shape dependence, it is a powerful tool for the detection of primordial non-Gaussianity from future galaxy surveys.
Kulper, Sloan A; Fang, Christian X; Ren, Xiaodan; Guo, Margaret; Sze, Kam Y; Leung, Frankie K L; Lu, William W
2018-04-01
A novel computational model of implant migration in trabecular bone was developed using smoothed-particle hydrodynamics (SPH), and an initial validation was performed via correlation with experimental data. Six fresh-frozen human cadaveric specimens measuring 10 × 10 × 20 mm were extracted from the proximal femurs of female donors (mean age of 82 years, range 75-90, BV/TV ratios between 17.88% and 30.49%). These specimens were then penetrated under axial loading to a depth of 10 mm with 5 mm diameter cylindrical indenters bearing either flat or sharp/conical tip designs similar to blunt and self-tapping cancellous screws, assigned in a random manner. SPH models were constructed based on microCT scans (17.33 µm) of the cadaveric specimens. Two initial specimens were used for calibration of material model parameters. The remaining four specimens were then simulated in silico using identical material model parameters. Peak forces varied between 92.0 and 365.0 N in the experiments, and 115.5-352.2 N in the SPH simulations. The concordance correlation coefficient between experimental and simulated pairs was 0.888, with a 95%CI of 0.8832-0.8926, a Pearson ρ (precision) value of 0.9396, and a bias correction factor Cb (accuracy) value of 0.945. Patterns of bone compaction were qualitatively similar; both experimental and simulated flat-tipped indenters produced dense regions of compacted material adjacent to the advancing face of the indenter, while sharp-tipped indenters deposited compacted material along their peripheries. Simulations based on SPH can produce accurate predictions of trabecular bone penetration that are useful for characterizing implant performance under high-strain loading conditions. © 2017 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 36:1114-1123, 2018. © 2017 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.
Gamma regularization based reconstruction for low dose CT
International Nuclear Information System (INIS)
Zhang, Junfeng; Chen, Yang; Hu, Yining; Luo, Limin; Shu, Huazhong; Li, Bicao; Liu, Jin; Coatrieux, Jean-Louis
2015-01-01
Reducing the radiation in computerized tomography is today a major concern in radiology. Low dose computerized tomography (LDCT) offers a sound way to deal with this problem. However, more severe noise in the reconstructed CT images is observed under low dose scan protocols (e.g. lowered tube current or voltage values). In this paper we propose a Gamma regularization based algorithm for LDCT image reconstruction. This solution is flexible and provides a good balance between the regularizations based on the ℓ0-norm and the ℓ1-norm. We evaluate the proposed approach using projection data from simulated phantoms and scanned Catphan phantoms. Qualitative and quantitative results show that the Gamma regularization based reconstruction performs better in both edge preservation and noise suppression when compared with other norms. (paper)
Structural characterization of the packings of granular regular polygons.
Wang, Chuncheng; Dong, Kejun; Yu, Aibing
2015-12-01
By using a recently developed method for discrete modeling of nonspherical particles, we simulate the random packings of granular regular polygons with three to 11 edges under gravity. The effects of shape and friction on the packing structures are investigated by various structural parameters, including packing fraction, the radial distribution function, coordination number, Voronoi tessellation, and bond-orientational order. We find that packing fraction is generally higher for geometrically nonfrustrated regular polygons, and can be increased by the increase of edge number and decrease of friction. The changes of packing fraction are linked with those of the microstructures, such as the variations of the translational and orientational orders and local configurations. In particular, the free areas of Voronoi tessellations (which are related to local packing fractions) can be described by log-normal distributions for all polygons. The quantitative analyses establish a clearer picture for the packings of regular polygons.
Selecting protein families for environmental features based on manifold regularization.
Jiang, Xingpeng; Xu, Weiwei; Park, E K; Li, Guangrong
2014-06-01
Recently, statistical and machine learning methods have been developed to identify functional or taxonomic features associated with environmental conditions or physiological status. Proteins (or other functional and taxonomic entities) that are important to environmental features can potentially be used as biosensors. A major challenge is understanding how the distribution of protein and gene functions embodies the adaptation of microbial communities across environments and host habitats. In this paper, we propose a novel regularization method for linear regression to address this challenge. The approach is inspired by locally linear embedding (LLE) and we call it manifold-constrained regularization for linear regression (McRe). The novel regularization procedure also has the potential to be used in solving other linear systems. We demonstrate the efficiency and performance of the approach on both simulated and real data.
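A generic form of such a manifold-constrained penalty is Laplacian-regularized least squares, where predictions are encouraged to vary smoothly over a neighborhood graph of the samples. The sketch below is a stand-in for the spirit of McRe, not the authors' algorithm; the penalty weights and the chain-graph Laplacian in the test are illustrative assumptions.

```python
import numpy as np

def manifold_reg_lr(X, y, L, lam_ridge=1e-3, lam_manifold=0.1):
    """Laplacian-regularized least squares: minimizes
    ||y - X w||^2 + lam_ridge * ||w||^2 + lam_manifold * (X w)^T L (X w),
    where L is the graph Laplacian of a sample-neighborhood graph.
    The closed-form solution follows from setting the gradient to zero."""
    n, d = X.shape
    A = X.T @ X + lam_ridge * np.eye(d) + lam_manifold * (X.T @ L @ X)
    return np.linalg.solve(A, X.T @ y)
```

The manifold term penalizes predictions that differ between graph-adjacent samples, which is the sense in which the regression is constrained to respect the data manifold.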
Consistent Partial Least Squares Path Modeling via Regularization
Directory of Open Access Journals (Sweden)
Sunho Jung
2018-02-01
Partial least squares (PLS) path modeling is a component-based structural equation modeling approach that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.
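The ridge device being grafted onto PLSc can be seen in its simplest form in ordinary regression: adding a small multiple of the identity to the (nearly singular) cross-product matrix restores a stable, unique solution under multicollinearity. A minimal sketch (the lambda value and the collinear test data are illustrative; regularized PLSc applies the same device to correlations among latent variables):

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge estimator w = (X^T X + lam*I)^-1 X^T y. With collinear columns,
    X^T X is near-singular and OLS is unstable; lam*I restores conditioning
    at the cost of a small shrinkage bias."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```

With two nearly identical predictors, ridge splits the shared effect between them instead of producing the wildly offsetting coefficients that unregularized estimation can yield.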
Energy Technology Data Exchange (ETDEWEB)
Signor, L., E-mail: loic.signor@ensma.fr [Institut Pprime (UPR3346) CNRS/ISAE-ENSMA/Poitiers University (France); Villechaise, P.; Ghidossi, T.; Lacoste, E.; Gueguen, M. [Institut Pprime (UPR3346) CNRS/ISAE-ENSMA/Poitiers University (France); Courtin, S. [AREVA NP (France)
2016-01-01
Local crystallographic configurations (also referred to as local micro-texture) which promote transgranular micro-crack initiation in 316LN stainless steel in low cycle fatigue are studied. Specimens were subjected to tension-compression with constant plastic strain amplitude, in air, at room temperature, for 5000 cycles (i.e. about 20% of the fatigue life). The first part of this work is devoted to a statistical analysis of slip marks and cracks observed at the surface of one fatigued specimen using a scanning electron microscope (SEM), in a region composed of about 1000 grains. 95 micro-cracks initiated along persistent slip markings detected in this region are analyzed with respect to different characteristics of the grains, especially crystallographic orientation, measured using electron backscatter diffraction (EBSD). From the detailed analysis of the numerous data derived from these observations and measurements performed only at the surface, the two main factors found to favour crack formation are the grain size and the orientation of the activated slip system with respect to the surface. Indeed, the mean size of grains which contain cracks is almost twice that of the remaining grains. Moreover, for most grains in which cracks are observed, the angle between the normal to the surface and the activated Burgers vector (resp. the normal to the activated slip plane) lies in the range [30°, 50°] (resp. [55°, 70°]). No other characteristic was found to provide significant and direct information for identifying initiation sites. Thus, in the second part of this work, the analysis of initiation sites is performed using additional information on three-dimensional (3D) aspects of the microstructure. 3D characterisation of the polycrystalline microstructure and some cracks in one fatigued specimen was achieved using a serial-sectioning technique combined with SEM and EBSD. As an example, the study of one specific crack and its surrounding…
Surface-based prostate registration with biomechanical regularization
van de Ven, Wendy J. M.; Hu, Yipeng; Barentsz, Jelle O.; Karssemeijer, Nico; Barratt, Dean; Huisman, Henkjan J.
2013-03-01
Adding MR-derived information to standard transrectal ultrasound (TRUS) images for guiding prostate biopsy is of substantial clinical interest. A tumor visible on MR images can be projected on ultrasound by using MRUS registration. A common approach is to use surface-based registration. We hypothesize that biomechanical modeling will better control deformation inside the prostate than a regular surface-based registration method. We developed a novel method by extending a surface-based registration with finite element (FE) simulation to better predict internal deformation of the prostate. For each of six patients, a tetrahedral mesh was constructed from the manual prostate segmentation. Next, the internal prostate deformation was simulated using the derived radial surface displacement as boundary condition. The deformation field within the gland was calculated using the predicted FE node displacements and thin-plate spline interpolation. We tested our method on MR guided MR biopsy imaging data, as landmarks can easily be identified on MR images. For evaluation of the registration accuracy we used 45 anatomical landmarks located in all regions of the prostate. Our results show that the median target registration error of a surface-based registration with biomechanical regularization is 1.88 mm, which is significantly different from 2.61 mm without biomechanical regularization. We can conclude that biomechanical FE modeling has the potential to improve the accuracy of multimodal prostate registration when comparing it to regular surface-based registration.
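The interpolation step described above, spreading predicted FE node displacements into a dense deformation field, can be sketched with an off-the-shelf thin-plate spline interpolator. This is an illustrative stand-in with random synthetic points, not the authors' pipeline or data:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Hypothetical stand-in for FE output: surface node positions (mm) and
# the displacements the FE simulation predicted for those nodes.
surface_nodes = rng.uniform(-25, 25, size=(200, 3))
node_displacements = 0.05 * surface_nodes + rng.normal(0, 0.1, size=(200, 3))

# Thin-plate spline interpolation of the displacement field, as used to
# propagate node displacements to arbitrary points inside the gland.
tps = RBFInterpolator(surface_nodes, node_displacements,
                      kernel='thin_plate_spline')

# Evaluate the deformation at interior target points (e.g. voxel centres)
# and warp them accordingly.
interior_points = rng.uniform(-20, 20, size=(500, 3))
interior_displacements = tps(interior_points)
warped = interior_points + interior_displacements
```

`RBFInterpolator` handles the scattered 3D interpolation directly; in the paper's setting the inputs would come from the tetrahedral mesh rather than random samples.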
Iterative regularization in intensity-modulated radiation therapy optimization
International Nuclear Information System (INIS)
Carlsson, Fredrik; Forsgren, Anders
2006-01-01
A common way to solve intensity-modulated radiation therapy (IMRT) optimization problems is to use a beamlet-based approach. The approach is usually employed in a three-step manner: first a beamlet-weight optimization problem is solved, then the fluence profiles are converted into step-and-shoot segments, and finally postoptimization of the segment weights is performed. A drawback of beamlet-based approaches is that beamlet-weight optimization problems are ill-conditioned and have to be regularized in order to produce smooth fluence profiles that are suitable for conversion. The purpose of this paper is twofold: first, to explain the suitability of solving beamlet-based IMRT problems by a BFGS quasi-Newton sequential quadratic programming method with diagonal initial Hessian estimate, and second, to empirically show that beamlet-weight optimization problems should be solved in relatively few iterations when using this optimization method. The explanation of the suitability is based on viewing the optimization method as an iterative regularization method. In iterative regularization, the optimization problem is solved approximately by iterating long enough to obtain a solution close to the optimal one, but terminating before too much noise occurs. Iterative regularization requires an optimization method that initially proceeds in smooth directions and makes rapid initial progress. Solving ten beamlet-based IMRT problems with dose-volume objectives and bounds on the beamlet-weights, we find that the considered optimization method fulfills the requirements for performing iterative regularization. After segment-weight optimization, the treatments obtained using 35 beamlet-weight iterations outperform the treatments obtained using 100 beamlet-weight iterations, both in terms of objective value and of target uniformity. We conclude that iterating too long may in fact deteriorate the quality of the deliverable plan
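The semiconvergence behavior that iterative regularization exploits, where iterating "long enough but not too long" acts as the regularizer, can be demonstrated on a toy ill-posed problem. Landweber iteration stands in here for the paper's BFGS quasi-Newton method, and the smoothing operator is a generic stand-in for a beamlet dose matrix; all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ill-conditioned forward operator: a Gaussian smoothing matrix.
n = 64
x_grid = np.linspace(0, 1, n)
A = np.exp(-((x_grid[:, None] - x_grid[None, :]) ** 2) / (2 * 0.03 ** 2))
A /= A.sum(axis=1, keepdims=True)

x_true = np.sin(2 * np.pi * x_grid) + 1.5          # smooth "fluence profile"
b = A @ x_true + rng.normal(0, 1e-3, n)            # noisy measurements

# Landweber iteration: a basic iterative regularization method in which
# early termination limits noise amplification (semiconvergence).
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
errors = []
for k in range(2000):
    x = x + step * A.T @ (b - A @ x)
    errors.append(np.linalg.norm(x - x_true))

best_iter = int(np.argmin(errors))
# The reconstruction error typically reaches a minimum at some moderate
# iteration count; iterating further lets noise contaminate the solution.
```

The requirement stated in the abstract, an optimization method that makes rapid initial progress in smooth directions, is exactly what makes such early termination effective.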
Directory of Open Access Journals (Sweden)
Kyosuke Hiyama
2015-01-01
Full Text Available Applying data mining techniques to a database of BIM models could provide valuable insights into key design patterns implicitly present in these BIM models. The architectural designer would then be able to use data from existing building projects as default values in building performance simulation software for the early phases of building design. The author has proposed a method to minimize the magnitude of the variation in these default values in subsequent design stages. This approach maintains the accuracy of the simulation results in the initial stages of building design. In this study, a more convincing argument is presented to demonstrate the significance of the new method. The variation in the ideal default values for different building design conditions is assessed first. Next, the influence of each condition on these variations is investigated. The space depth is found to have a large impact on the ideal default value of the window-to-wall ratio. In addition, the presence or absence of lighting control and natural ventilation has a significant influence on the ideal default value. These effects can be used to identify the types of building conditions that should be considered when determining the ideal default values.
International Nuclear Information System (INIS)
Calligari, Paolo
2008-01-01
The protein Initiation Factor 6 (IF6) takes part in the regulation of protein synthesis in several organisms. It is also found in archaea such as Methanococcus jannaschii, which lives in the deep sea near hydrothermal vents, where the temperature reaches 80 °C and the pressure is between 250 bar and 500 bar. The aim of this work was to study for the first time the dynamical and structural properties of IF6 produced by M. jannaschii and to compare them with those of the IF6 homologue present in Saccharomyces cerevisiae, which lives under 'normal' environmental conditions (27 °C and 1 bar). Molecular simulation gave new insights into the adaptation of these two proteins to their respective physiological conditions and showed that these conditions induce similar dynamical and structural properties: in their respective 'natural' conditions, the IF6s show very similar structural fluctuations, and the characteristic relaxation times which define their dynamical properties show similar changes when comparing unfavorable conditions to physiological ones. The existence of these corresponding states between the two homologues has been interpreted using the fractional Brownian dynamics model and a novel method for the characterization of protein secondary structures. The latter is presented here in detail together with some examples of other applications. Experimental data obtained from quasi-elastic neutron scattering seem to support the results obtained by molecular simulations. (author)
Energy Technology Data Exchange (ETDEWEB)
Schilling, O; Latini, M
2010-01-12
The dynamics of the reshocked multi-mode Richtmyer-Meshkov instability is investigated using 513 × 257² three-dimensional ninth-order weighted essentially nonoscillatory shock-capturing simulations. A two-mode initial perturbation with superposed random noise is used to model the Mach 1.5 air/SF₆ Vetter-Sturtevant shock tube experiment. The mass fraction and enstrophy isosurfaces, and density cross-sections, are utilized to show the detailed flow structure before, during, and after reshock. It is shown that the mixing layer growth agrees well with the experimentally measured growth rate before and after reshock. The post-reshock growth rate is also in good agreement with the prediction of the Mikaelian model. A parametric study of the sensitivity of the layer growth to the choice of amplitudes of the short and long wavelength initial interfacial perturbation is also presented. Finally, the amplification effects of reshock are quantified using the evolution of the turbulent kinetic energy and turbulent enstrophy spectra, as well as the evolution of the baroclinic enstrophy production, buoyancy production, and shear production terms in the enstrophy and turbulent kinetic energy transport equations.
Lapenta, William M.; Crosson, William; Dembek, Scott; Lakhtakia, Mercedes
1998-01-01
It is well known that soil moisture is a characteristic of the land surface that strongly affects the partitioning of outgoing radiation into sensible and latent heat, which significantly impacts both weather and climate. Detailed land surface schemes are now being coupled to mesoscale atmospheric models in order to represent the effect of soil moisture upon atmospheric simulations. However, there is little direct soil moisture data available to initialize these models on regional to continental scales. As a result, a Soil Hydrology Model (SHM) has been used to generate an indirect estimate of the soil moisture conditions over the continental United States at a grid resolution of 36 km on a daily basis since 8 May 1995. The SHM is forced by analyses of atmospheric observations, including precipitation, and contains detailed information on slope, soil, and landcover characteristics. The purpose of this paper is to evaluate the utility of initializing a detailed coupled model with the soil moisture data produced by SHM.
Higher derivative regularization and chiral anomaly
International Nuclear Information System (INIS)
Nagahama, Yoshinori.
1985-02-01
A higher derivative regularization which automatically leads to the consistent chiral anomaly is analyzed in detail. It explicitly breaks all the local gauge symmetry but preserves global chiral symmetry and leads to the chirally symmetric consistent anomaly. This regularization thus clarifies the physics content contained in the consistent anomaly. We also briefly comment on the application of this higher derivative regularization to massless QED. (author)
Regularity effect in prospective memory during aging
Directory of Open Access Journals (Sweden)
Geoffrey Blondelle
2016-10-01
Full Text Available Background: Regularity effect can affect performance in prospective memory (PM), but little is known about the cognitive processes linked to this effect. Moreover, its impact with regard to aging remains unknown. To our knowledge, this study is the first to examine the regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 16 intermediate adults (40–55), and 25 older adults (65–80). The task, adapted from the Virtual Week, was designed to manipulate the regularity of the various activities of daily life that were to be recalled (regular repeated activities vs. irregular non-repeated activities). We examined the role of several cognitive functions, including certain dimensions of executive functions (planning, inhibition, shifting), and binding, short-term memory, and retrospective episodic memory, to identify those involved in PM, according to regularity and age. Results: A mixed-design ANOVA showed a main effect of task regularity and an interaction between age and regularity: an age-related difference in PM performance was found for irregular activities (older < young), but not for regular activities. All participants recalled more regular activities than irregular ones, with no age effect. It appeared that recalling regular activities only involved planning for both intermediate and older adults, while recalling irregular ones was linked to planning, inhibition, short-term memory, binding, and retrospective episodic memory. Conclusion: Taken together, our data suggest that planning capacities seem to play a major role in remembering to perform intended actions with advancing age. Furthermore, the age-PM-paradox may be attenuated when the experimental design is adapted by implementing a familiar context through the use of activities of daily living. The clinical
Regularity effect in prospective memory during aging
Blondelle, Geoffrey; Hainselin, Mathieu; Gounden, Yannick; Heurley, Laurent; Voisin, Hélène; Megalakaki, Olga; Bressous, Estelle; Quaglino, Véronique
2016-01-01
Background: Regularity effect can affect performance in prospective memory (PM), but little is known about the cognitive processes linked to this effect. Moreover, its impact with regard to aging remains unknown. To our knowledge, this study is the first to examine the regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 1...
Takasuka, Daisuke; Satoh, Masaki; Miyakawa, Tomoki; Miura, Hiroaki
2018-04-01
To understand the intrinsic onset mechanism of the Madden-Julian Oscillation (MJO), we simulated a set of initiation processes of MJO-like disturbances in 10 year aqua-planet experiments using a global atmospheric model with a 56 km horizontal mesh and an explicit cloud scheme. Under a condition with a zonally nonuniform sea surface temperature (SST) in the tropics, we reproduced MJO-like disturbances over the western warm pool region. The lagged-composite analysis of detected MJO-like disturbances clarifies the time sequence of three-dimensional dynamic and moisture fields prior to the onset. We found that midtropospheric moistening, a condition that is favorable for deep convection, is particularly obvious in the initiation region 5-9 days before onset. The moistening is caused by two-dimensional horizontal advection due to cross-equatorial shallow circulations associated with mixed Rossby-gravity waves, as well as anomalous poleward flows of a negative Rossby response to suppressed convection. When the midtroposphere is sufficiently moistened, lower tropospheric signals of circumnavigating Kelvin waves trigger active convection. The surface latent heat flux (LHF) feedback contributes to the initial stages of convective organization, while the cloud-radiation feedback contributes to later stages. Sensitivity experiments suggest that circumnavigating Kelvin waves regulate the period of MJO-like disturbances because of efficient convective triggering and that the LHF feedback contributes to rapid convective organization. However, the experiments also reveal that both conditions are not necessary for the existence of MJO-like disturbances. Implications for the relevance of these mechanisms for MJO onset are also discussed.
International Nuclear Information System (INIS)
Boyle, Michael; Brown, Duncan A; Pekowsky, Larne
2009-01-01
We study the effectiveness of stationary-phase approximated post-Newtonian waveforms currently used by ground-based gravitational-wave detectors to search for the coalescence of binary black holes by comparing them to an accurate waveform obtained from a numerical simulation of an equal-mass non-spinning binary black hole inspiral, merger and ringdown. We perform this study for the initial- and advanced-LIGO detectors. We find that overlaps between the templates and the signal can be improved by integrating the matched filter to higher frequencies than used currently. We propose simple analytic frequency cutoffs for both initial and advanced LIGO, which achieve nearly optimal matches and can easily be extended to unequal-mass, spinning systems. We also find that templates that include terms in the phase evolution up to 3.5 post-Newtonian (pN) order are nearly always better, and rarely significantly worse, than the 2.0 pN templates currently in use. For initial LIGO we recommend a strategy using templates that include a recently introduced pseudo-4.0 pN term in the low-mass (M ≤ 35 M⊙) region, and 3.5 pN templates allowing unphysical values of the symmetric reduced mass η above this. This strategy always achieves overlaps within 0.3% of the optimum for the data used here. For advanced LIGO we recommend a strategy using 3.5 pN templates up to M = 12 M⊙, 2.0 pN templates up to M = 21 M⊙, pseudo-4.0 pN templates up to 65 M⊙, and 3.5 pN templates with unphysical η for higher masses. This strategy always achieves overlaps within 0.7% of the optimum for advanced LIGO.
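The overlap quoted above is, in essence, a normalized inner product between template and signal maximized over relative time shift. A minimal white-noise sketch follows (the real match also weights by the detector noise spectrum and maximizes over a phase offset; the chirp below is a toy signal, not a post-Newtonian waveform):

```python
import numpy as np

def overlap(h1, h2):
    """Normalized overlap of two real waveforms, maximized over a
    circular time shift (white-noise approximation of the match)."""
    n = len(h1)
    # Cross-correlation via FFT evaluates the inner product at all shifts.
    corr = np.fft.irfft(np.fft.rfft(h1) * np.conj(np.fft.rfft(h2)), n)
    norm = np.sqrt(np.dot(h1, h1) * np.dot(h2, h2))
    return np.max(np.abs(corr)) / norm

t = np.linspace(0, 1, 4096, endpoint=False)
signal = np.sin(2 * np.pi * 60 * t ** 2)   # toy chirp standing in for a signal
template = np.roll(signal, 200)            # time-shifted copy of the signal
print(overlap(signal, template))           # maximization recovers the shift: ~1.0
```

Integrating "to higher frequencies" corresponds, in this picture, to extending the band over which the inner product is accumulated before normalization.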
Directory of Open Access Journals (Sweden)
Isabelle Guénot-Delahaie
2018-03-01
Full Text Available The ALCYONE multidimensional fuel performance code, codeveloped by the CEA, EDF, and AREVA NP within the PLEIADES software environment, models the behavior of fuel rods during irradiation in commercial pressurized water reactors (PWRs), power ramps in experimental reactors, or accidental conditions such as loss-of-coolant accidents or reactivity-initiated accidents (RIAs). As regards the latter case of transient in particular, ALCYONE is intended to predictively simulate the response of a fuel rod by taking account of mechanisms in a way that models the physics as closely as possible, encompassing all possible stages of the transient as well as the various fuel/cladding material types and irradiation conditions of interest. On the way to complying with these objectives, ALCYONE development and validation shall include tests on PWR-UO2 fuel rods with advanced claddings such as M5® under "low pressure-low temperature" or "high pressure-high temperature" water coolant conditions. This article first presents ALCYONE V1.4 RIA-related features and modeling. It especially focuses on recent developments dedicated on the one hand to nonsteady water heat and mass transport and on the other hand to the modeling of grain boundary cracking-induced fission gas release and swelling. This article then compares some simulations of RIA transients performed on UO2-M5® fuel rods in flowing sodium or stagnant water coolant conditions to the relevant experimental results gained from tests performed in either the French CABRI or the Japanese NSRR nuclear transient reactor facilities. It shows in particular to what extent ALCYONE—starting from base irradiation conditions it itself computes—is currently able to handle both the first stage of the transient, namely the pellet-cladding mechanical interaction phase, and the second stage of the transient, should a boiling crisis occur. Areas of improvement are finally discussed with a view to simulating and
A regularization method for extrapolation of solar potential magnetic fields
Gary, G. A.; Musielak, Z. E.
1992-01-01
The mathematical basis of a Tikhonov regularization method for extrapolating the chromospheric-coronal magnetic field using photospheric vector magnetograms is discussed. The basic techniques show that the Cauchy initial value problem can be formulated for potential magnetic fields. The potential field analysis considers a set of linear, elliptic partial differential equations. It is found that, by introducing an appropriate smoothing of the initial data of the Cauchy potential problem, an approximate Fourier integral solution is found, and an upper bound to the error in the solution is derived. This specific regularization technique, which is a function of magnetograph measurement sensitivities, provides a method to extrapolate the potential magnetic field above an active region into the chromosphere and low corona.
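The instability the regularization addresses can be illustrated in a one-dimensional toy problem: harmonic continuation away from the boundary amplifies each Fourier mode by exp(|k|z), so unfiltered measurement noise explodes, while Gaussian smoothing of the initial data keeps the extrapolation bounded. All parameters below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# 1-D toy Cauchy extrapolation: Fourier modes of a harmonic field grow
# like exp(|k| z) when continued upward from the measurement surface z = 0.
n, L, z = 256, 100.0, 2.0                 # grid points, domain size, height
x = np.linspace(0, L, n, endpoint=False)
k = 2 * np.pi * np.fft.rfftfreq(n, d=L / n)   # angular wavenumbers

b0 = np.exp(-((x - 50) / 8) ** 2)         # smooth boundary data ("magnetogram")
b_noisy = b0 + rng.normal(0, 1e-3, n)     # add measurement noise

def extrapolate(b, z, sigma=0.0):
    """Continue boundary data to height z; sigma > 0 applies Gaussian
    (regularizing) smoothing of the initial data in Fourier space."""
    B = np.fft.rfft(b)
    B = B * np.exp(-(sigma * k) ** 2 / 2)      # low-pass filter of the data
    return np.fft.irfft(B * np.exp(k * z), n)  # unstable upward continuation

unregularized = extrapolate(b_noisy, z)            # noise amplified by e^{kz}
regularized = extrapolate(b_noisy, z, sigma=1.0)
reference = extrapolate(b0, z, sigma=1.0)          # noise-free comparison
```

The smoothing width plays the role of the regularization parameter tied, in the paper, to the magnetograph measurement sensitivity.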
Regularization and error assignment to unfolded distributions
Zech, Gunter
2011-01-01
The commonly used approach of presenting unfolded data only in graphical form, with the diagonal error depending on the regularization strength, is unsatisfactory. It does not permit the adjustment of parameters of theories or the exclusion of theories that are admitted by the observed data, and it does not allow the combination of data from different experiments. We propose fixing the regularization strength by a p-value criterion, indicating the experimental uncertainties independent of the regularization, and publishing the unfolded data in addition without regularization. These considerations are illustrated with three different unfolding and smoothing approaches applied to a toy example.
Iterative Regularization with Minimum-Residual Methods
DEFF Research Database (Denmark)
Jensen, Toke Koldborg; Hansen, Per Christian
2007-01-01
We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.
Iterative regularization with minimum-residual methods
DEFF Research Database (Denmark)
Jensen, Toke Koldborg; Hansen, Per Christian
2006-01-01
We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.
Tanioka, Yuichiro
2017-04-01
After the tsunami disaster due to the 2011 Tohoku-oki great earthquake, improvement of tsunami forecasting has been an urgent issue in Japan. The National Institute of Disaster Prevention is installing a cable network system for earthquake and tsunami observation (S-NET) at the ocean bottom along the Japan and Kurile trenches. This cable system includes 125 pressure sensors (tsunami meters) which are separated by 30 km. Along the Nankai trough, JAMSTEC has already installed and operated cable network systems of seismometers and pressure sensors (DONET and DONET2). Those systems are the densest observation network systems on top of the source areas of great underthrust earthquakes in the world. Real-time tsunami forecasting has depended on the estimation of earthquake parameters, such as the epicenter, depth, and magnitude of earthquakes. Recently, a tsunami forecast method has been developed using the estimation of the tsunami source from tsunami waveforms observed at ocean bottom pressure sensors. However, when we have many pressure sensors separated by 30 km on top of the source area, we do not need to estimate the tsunami source or earthquake source to compute the tsunami. Instead, we can initiate a tsunami simulation from those dense tsunami observation data. Observed tsunami height differences over a time interval at the ocean bottom pressure sensors separated by 30 km were used to estimate the tsunami height distribution at a particular time. In our new method, a tsunami numerical simulation was initiated from the estimated tsunami height distribution. In this paper, the above method is improved and applied to the tsunami generated by the 2011 Tohoku-oki great earthquake. The tsunami source model of the 2011 Tohoku-oki great earthquake estimated using observed tsunami waveforms, coseismic deformation observed by GPS, and ocean bottom sensors by Gusman et al. (2012) is used in this study. The ocean surface deformation is computed from the source model and used as an initial condition of tsunami
Ma, Qian; Xia, Houping; Xu, Qiang; Zhao, Lei
2018-05-01
A new method combining Tikhonov regularization and kernel matrix optimization by multi-wavelength incidence is proposed for retrieving particle size distribution (PSD) in an independent model with improved accuracy and stability. In comparison to individual regularization or multi-wavelength least squares, the proposed method exhibited better anti-noise capability, higher accuracy and stability. While standard regularization typically makes use of the unit matrix, it is not universal for different PSDs, particularly for Junge distributions. Thus, a suitable regularization matrix was chosen by numerical simulation, with the second-order differential matrix found to be appropriate for most PSD types.
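The choice of regularization matrix discussed above can be sketched as follows. The forward kernel here is a generic smoothing matrix standing in for the Mie-theory extinction kernel, and every parameter is illustrative; the point is only the mechanics of Tikhonov inversion with an identity versus a second-order differential matrix:

```python
import numpy as np

def tikhonov(A, b, lam, L):
    """Solve min ||A x - b||^2 + lam^2 ||L x||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + lam ** 2 * (L.T @ L), A.T @ b)

def second_diff(n):
    """Second-order difference regularization matrix, shape (n-2, n)."""
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    return L

rng = np.random.default_rng(3)
n = 80
r = np.linspace(0.1, 10, n)                  # particle radii (illustrative units)

# Toy smoothing kernel standing in for the multi-wavelength extinction matrix.
A = np.exp(-((r[:, None] - r[None, :]) ** 2) / 2.0)

x_true = np.exp(-((r - 3) / 1.0) ** 2)       # unimodal "PSD"
b = A @ x_true + rng.normal(0, 1e-3, n)      # noisy multi-wavelength data

# Same Tikhonov machinery, two choices of regularization matrix.
x_identity = tikhonov(A, b, 0.1, np.eye(n))      # standard unit matrix
x_smooth = tikhonov(A, b, 0.1, second_diff(n))   # second-order differential
```

The unit matrix penalizes the solution's size, while the second-order difference matrix penalizes its curvature, which is why the latter suits smooth, broad distributions such as Junge-type PSDs.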
Low regularity solutions of the Chern-Simons-Higgs equations in the Lorentz gauge
Directory of Open Access Journals (Sweden)
Nikolaos Bournaveas
2009-09-01
Full Text Available We prove local well-posedness for the 2+1-dimensional Chern-Simons-Higgs equations in the Lorentz gauge with initial data of low regularity. Our result improves earlier results by Huh [10, 11].
Material parameters characterization for arbitrary N-sided regular polygonal invisible cloak
International Nuclear Information System (INIS)
Wu Qun; Zhang Kuang; Meng Fanyi; Li Lewei
2009-01-01
Arbitrary N-sided regular polygonal cylindrical cloaks are proposed and designed based on coordinate transformation theory. First, general expressions for the constitutive tensors of the N-sided regular polygonal cylindrical cloaks are derived; then full-wave simulations are performed of cloaks composed of inhomogeneous and anisotropic metamaterials, which bend incoming electromagnetic waves and guide them to propagate around the inner region; such electromagnetic waves return to their original propagation directions without distortion of the waves outside the polygonal cloak. The results of the full-wave simulations validate the general expressions for the constitutive tensors of the N-sided regular polygonal cylindrical cloaks we derived.
A regularized stationary mean-field game
Yang, Xianjin
2016-01-01
In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.
A regularized stationary mean-field game
Yang, Xianjin
2016-04-19
In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.
On infinite regular and chiral maps
Arredondo, John A.; Valdez, Camilo Ramírez y Ferrán
2015-01-01
We prove that infinite regular and chiral maps take place on surfaces with at most one end. Moreover, we prove that an infinite regular or chiral map on an orientable surface with genus can only be realized on the Loch Ness monster, that is, the topological surface of infinite genus with one end.
From recreational to regular drug use
DEFF Research Database (Denmark)
Järvinen, Margaretha; Ravn, Signe
2011-01-01
This article analyses the process of going from recreational use to regular and problematic use of illegal drugs. We present a model containing six career contingencies relevant for young people’s progress from recreational to regular drug use: the closing of social networks, changes in forms...
Automating InDesign with Regular Expressions
Kahrel, Peter
2006-01-01
If you need to make automated changes to InDesign documents beyond what basic search and replace can handle, you need regular expressions, and a bit of scripting to make them work. This Short Cut explains both how to write regular expressions, so you can find and replace the right things, and how to use them in InDesign specifically.
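InDesign's GREP flavor differs in details from other engines, but the kind of find-and-change the book automates can be illustrated with standard regular expressions, shown here in Python; the patterns are illustrative examples, not taken from the book:

```python
import re

# A GREP-style typographic cleanup: collapse runs of spaces, then
# convert straight double quotes around a phrase to curly quotes.
text = 'He said  "hello"   to the   typesetter.'

text = re.sub(r' {2,}', ' ', text)                    # collapse multiple spaces
text = re.sub(r'"([^"]*)"', '\u201C\\1\u201D', text)  # curly double quotes

print(text)  # He said “hello” to the typesetter.
```

In InDesign itself the same patterns would be supplied to a find/change script or the GREP tab of the Find/Change dialog rather than to `re`.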
2010-07-01
... employee under subsection (a) or in excess of the employee's normal working hours or regular working hours... Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR STATEMENTS OF GENERAL POLICY OR... not less than one and one-half times their regular rates of pay. Section 7(e) of the Act defines...
John A. D. Appleby
2010-01-01
We consider the rate of convergence to equilibrium of Volterra integrodifferential equations with infinite memory. We show that if the kernel of the Volterra operator is regularly varying at infinity, and the initial history is regularly varying at minus infinity, then the rate of convergence to the equilibrium is regularly varying at infinity, and the exact pointwise rate of convergence can be determined in terms of the rate of decay of the kernel and the rate of growth of the initial history. ...
Altschuler, Bruce R.; Monson, Keith L.
1998-03-01
Representation of crime scenes as virtual reality 3D computer displays promises to become a useful and important tool for law enforcement evaluation and analysis, forensic identification and pathological study, and archival presentation during court proceedings. Use of these methods for assessment of evidentiary materials demands complete accuracy of reproduction of the original scene, both in data collection and in its eventual virtual reality representation. The recording of spatially accurate information as soon as possible after the first arrival of law enforcement personnel is advantageous for unstable or hazardous crime scenes and reduces the possibility that either inadvertent measurement error or deliberate falsification may occur, or be alleged, concerning the processing of a scene. Detailed measurements and multimedia archiving of critical surface topographical details in a calibrated, uniform, consistent and standardized quantitative 3D coordinate method are needed. These methods would afford professional personnel in initial contact with a crime scene the means for remote, non-contacting, immediate, thorough and unequivocal documentation of the contents of the scene. Measurements of the relative and absolute global positions of objects and victims, and their dispositions within the scene before their relocation and detailed examination, could be made. Resolution must be sufficient to map both small and large objects. Equipment must be able to map regions at varied resolution as collected from different perspectives. Progress is presented in devising methods for collecting and archiving 3D spatial numerical data from crime scenes, sufficient for law enforcement needs, by remote laser structured light and video imagery. Two types of simulation studies were done. One study evaluated the potential of 3D topographic mapping and 3D telepresence using a robotic platform for explosive ordnance disassembly. The second study involved using the laser mapping system on a
Energy Technology Data Exchange (ETDEWEB)
Zhong, Y; Sun, X; Lu, W; Jia, X; Wang, J; Shao, Y [The University of Texas Southwestern Medical Ctr., Dallas, TX (United States)
2016-06-15
Purpose: To investigate the feasibility and requirements for intra-fraction on-line multiple scanning particle beam range verifications (BRVs) with in-situ PET imaging, which goes beyond current single-beam BRV in that additional factors affect the BR measurement accuracy, such as beam diameter, separation between beams, and different image counts at different BRV positions. Methods: We simulated a 110-MeV proton beam with 5-mm diameter irradiating a uniform PMMA phantom with GATE, which generated nuclear interaction-induced positrons. In this preliminary study, we simply duplicated these positrons and placed them next to the initial protons to approximately mimic the two spatially separated positron distributions produced by two beams parallel to each other but with different beam ranges. These positrons were then imaged by a PET system (∼2-mm resolution, 10% sensitivity, 320×320×128 mm³ FOV) with different acquisition times. We calculated the positron activity ranges (ARs) from reconstructed PET images and compared them with the corresponding ARs of the original positron distributions. Results: Without further image data processing and correction, the preliminary study shows that the errors between the measured and original ARs varied from 0.2 mm to 2.3 mm as center-to-center separations and range differences varied over 8–12 mm and 2–8 mm respectively, indicating that the accuracy of AR measurement strongly depends on the beam separations and range differences. In addition, it is feasible to achieve ≤ 1.0-mm accuracy for both beams with 1-min PET acquisition and 12-mm beam separation. Conclusion: This study shows that the overlap between the positron distributions from multiple scanning beams can significantly impact the accuracy of BRVs of distributed particle beams and needs to be addressed beyond the established method of single-beam BRV, but it also indicates the feasibility of achieving accurate on-line multi-beam BRV with further improved
Manifold regularization for sparse unmixing of hyperspectral images.
Liu, Junmin; Zhang, Chunxia; Zhang, Jiangshe; Li, Huirong; Gao, Yuelin
2016-01-01
Recently, sparse unmixing has been successfully applied to spectral mixture analysis of remotely sensed hyperspectral images. Based on the assumption that the observed image signatures can be expressed as linear combinations of a number of pure spectral signatures known in advance, unmixing each mixed pixel in the scene amounts to finding an optimal subset of signatures in a very large spectral library, which is cast into the framework of sparse regression. However, traditional sparse regression models, such as collaborative sparse regression, ignore the intrinsic geometric structure of the hyperspectral data. In this paper, we propose a novel model, called manifold regularized collaborative sparse regression, by introducing a manifold regularization term into the collaborative sparse regression model. The manifold regularization utilizes a graph Laplacian to incorporate the locally geometrical structure of the hyperspectral data. An algorithm based on the alternating direction method of multipliers (ADMM) has been developed for the manifold regularized collaborative sparse regression model. Experimental results on both simulated and real hyperspectral data sets demonstrate the effectiveness of the proposed model.
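The graph-Laplacian term can be illustrated with a minimal, self-contained sketch: plain least squares plus Laplacian smoothing, not the paper's full collaborative-sparse ADMM model. The matrix sizes, the affinity matrix W, and the weight lam are all illustrative assumptions.

```python
import numpy as np

def laplacian(W):
    # Graph Laplacian L = D - W from a symmetric, nonnegative affinity matrix W.
    return np.diag(W.sum(axis=1)) - W

def manifold_regularized_ls(A, Y, W, lam=0.1):
    """Solve min_X ||A X - Y||_F^2 + lam * tr(X L X^T) in closed form.
    A: (bands x endmembers) spectral library, Y: (bands x pixels) data,
    W: (pixels x pixels) affinity graph between pixels (hypothetical).
    The sparsity-promoting term of the paper's model is omitted here."""
    L = laplacian(W)
    # Stationarity gives the Sylvester-type equation (A^T A) X + lam * X L = A^T Y.
    # Diagonalize L = V diag(w) V^T and solve one small system per eigen-direction.
    G = A.T @ A
    w_vals, V = np.linalg.eigh(L)
    B = (A.T @ Y) @ V
    X_tilde = np.empty_like(B)
    for k, wk in enumerate(w_vals):
        X_tilde[:, k] = np.linalg.solve(G + lam * wk * np.eye(G.shape[0]), B[:, k])
    return X_tilde @ V.T
```

The eigen-decomposition route turns the matrix equation into independent linear solves, one per graph eigenmode; an ADMM solver would be preferred once the sparsity term is added.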
Mourot, Laurent; Fabre, Nicolas; Andersson, Erik; Willis, Sarah J; Hébert-Losier, Kim; Holmberg, Hans-Christer
2014-08-01
The aim of this study was to assess potential changes in the performance and cardiorespiratory responses of elite cross-country skiers following transition from the classic (CL) to the skating (SK) technique during a simulated skiathlon. Eight elite male skiers performed two 6 km (2 × 3 km) roller-skiing time trials on a treadmill at racing speed: one starting with the classic and switching to the skating technique (CL1-SK2) and another employing the skating technique throughout (SK1-SK2), with continuous monitoring of gas exchanges, heart rates, and kinematics (video). The overall performance times in the CL1-SK2 (21:12 ± 1:24) and SK1-SK2 (20:48 ± 2:00) trials were similar, and during the second section of each trial, performance times and overall cardiopulmonary responses were also comparable. However, in comparison with SK1-SK2, the CL1-SK2 trial involved significantly higher increases in minute ventilation (V̇E, 89.8 ± 26.8 vs. 106.8 ± 17.6 L·min(-1)) and oxygen uptake (V̇O2; 3.1 ± 0.8 vs 3.5 ± 0.5 L·min(-1)) 2 min after the transition, as well as longer time constants for V̇E, V̇O2, and heart rate during the first 3 min after the transition. This higher cardiopulmonary exertion was associated with ∼3% faster cycle rates. In conclusion, overall performance during the 2 time trials did not differ. The similar performance times during the second sections were achieved with comparable mean cardiopulmonary responses. However, the observation that cardiopulmonary responses and cycle rates were slightly higher during the initial 3 min post-transition following classic skiing supports the conclusion that an initial section of classic skiing exerts an impact on performance during a subsequent section of skate skiing.
Regularization Techniques for Linear Least-Squares Problems
Suliman, Mohamed
2016-04-01
method deals with discrete ill-posed problems when the singular values of the linear transformation matrix decay very fast to a significantly small value. For both proposed algorithms, the regularization parameter is obtained as a solution of a non-linear characteristic equation. We provide a detailed study of the general properties of these functions and address the existence and uniqueness of the root. To demonstrate the performance of the derivations, the first proposed COPRA method is applied to estimate different signals with various characteristics, while the second proposed COPRA method is applied to a large set of different real-world discrete ill-posed problems. Simulation results demonstrate that the two proposed methods outperform a set of benchmark regularization algorithms in most cases. In addition, the algorithms are also shown to have the lowest run time.
An iterative method for Tikhonov regularization with a general linear regularization operator
Hochstenbach, M.E.; Reichel, L.
2010-01-01
Tikhonov regularization is one of the most popular approaches to solve discrete ill-posed problems with error-contaminated data. A regularization operator and a suitable value of a regularization parameter have to be chosen. This paper describes an iterative method, based on Golub-Kahan
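For small dense problems, the Tikhonov solution with a general operator L can be written down directly from the normal equations. The sketch below is this direct solve, standing in for the paper's iterative Golub-Kahan-based method, and the first-difference matrix is just one common choice of general regularization operator.

```python
import numpy as np

def tikhonov(A, b, L, lam):
    """Direct solution of  min_x ||A x - b||^2 + lam^2 ||L x||^2
    via the normal equations  (A^T A + lam^2 L^T L) x = A^T b.
    A dense stand-in for large-scale iterative (Golub-Kahan) methods."""
    return np.linalg.solve(A.T @ A + lam**2 * (L.T @ L), A.T @ b)

def first_difference(n):
    # (n-1) x n forward-difference operator, penalizing roughness of x.
    return np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
```

For lam = 0 this reduces to ordinary least squares; increasing lam trades data fit for smoothness as measured by ||L x||.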
Hierarchical regular small-world networks
International Nuclear Information System (INIS)
Boettcher, Stefan; Goncalves, Bruno; Guclu, Hasan
2008-01-01
Two new networks are introduced that resemble small-world properties. These networks are recursively constructed but retain a fixed, regular degree. They possess a unique one-dimensional lattice backbone overlaid by a hierarchical sequence of long-distance links, mixing real-space and small-world features. Both networks, one 3-regular and the other 4-regular, lead to distinct behaviors, as revealed by renormalization group studies. The 3-regular network is planar, has a diameter growing as √N with system size N, and leads to super-diffusion with an exact, anomalous exponent d_w = 1.306..., but possesses only a trivial fixed point T_c = 0 for the Ising ferromagnet. In turn, the 4-regular network is non-planar, has a diameter growing as ∼2^√(log₂ N²), exhibits 'ballistic' diffusion (d_w = 1), and a non-trivial ferromagnetic transition, T_c > 0. This suggests that the 3-regular network is still quite 'geometric', while the 4-regular network qualifies as a true small world with mean-field properties. As an engineering application we discuss synchronization of processors on these networks. (fast track communication)
Coupling regularizes individual units in noisy populations
International Nuclear Information System (INIS)
Ly Cheng; Ermentrout, G. Bard
2010-01-01
The regularity of a noisy system can be modulated in various ways. It is well known that coupling in a population can lower the variability of the entire network; the collective activity is more regular. Here, we show that diffusive (reciprocal) coupling of two simple Ornstein-Uhlenbeck (O-U) processes can regularize the individual unit, even when it is coupled to a noisier process. In cellular networks, the regularity of individual cells is important when a select few play a significant role. The regularizing effect of coupling surprisingly also applies to general nonlinear noisy oscillators. However, unlike with the O-U process, coupling-induced regularity is robust to different kinds of coupling. With two coupled noisy oscillators, we derive an asymptotic formula, assuming weak noise and coupling, for the variance of the period (i.e., spike times) that accurately captures this effect. Moreover, we find that reciprocal coupling can regularize the individual period of higher dimensional oscillators such as the Morris-Lecar and Brusselator models, even when coupled to noisier oscillators. Coupling can have a counterintuitive and beneficial effect on noisy systems. These results have implications for the role of connectivity with noisy oscillators and the modulation of variability of individual oscillators.
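The core O-U observation can be reproduced with a few lines of Euler-Maruyama simulation. This is a hedged sketch with illustrative parameters (σ₁ = 1 for the quiet unit, σ₂ = 2 for the noisy one, diffusive coupling strength c); the values are not taken from the paper.

```python
import numpy as np

def simulate_ou_pair(c, sigma1=1.0, sigma2=2.0, dt=0.01, steps=500_000, seed=0):
    """Euler-Maruyama simulation of two diffusively coupled
    Ornstein-Uhlenbeck processes,
        dx = [-x + c*(y - x)] dt + sigma1 dW1
        dy = [-y + c*(x - y)] dt + sigma2 dW2,
    returning the empirical stationary variance of x (the quieter unit)."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((steps, 2)) * np.sqrt(dt)
    x = y = 0.0
    xs = np.empty(steps)
    for i in range(steps):
        x, y = (x + (-x + c * (y - x)) * dt + sigma1 * noise[i, 0],
                y + (-y + c * (x - y)) * dt + sigma2 * noise[i, 1])
        xs[i] = x
    return xs[steps // 10:].var()  # discard the initial transient
```

For an uncoupled O-U process dx = -x dt + σ dW the stationary variance is σ²/2, so the quiet unit sits near 0.5; the linear theory for this pair gives 7/16 ≈ 0.44 at c = 0.5, i.e. coupling to the noisier partner still lowers the quiet unit's variance.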
Energy Technology Data Exchange (ETDEWEB)
Briffod, Fabien, E-mail: briffod@rme.mm.t.u-tokyo.ac.jp; Shiraiwa, Takayuki; Enoki, Manabu
2017-05-17
In this study, fatigue crack initiation in pure α-iron is investigated through a microstructure-sensitive framework. First, synthetic microstructures are modeled based on an anisotropic tessellation that incorporates grain-morphology information extracted from electron backscatter diffraction (EBSD) analysis. Low-cycle fatigue experiments under strain-controlled conditions are conducted in order to calibrate a crystal plasticity model and a J₂ model including isotropic and kinematic hardening. A critical-plane fatigue indicator parameter (FIP) based on the Tanaka-Mura model is then presented to evaluate the location of, and quantify the driving force for, the formation of a crack. The FIP is averaged over several potential crack paths within each grain, defined by the intersection between a given slip plane and the plane of the model, thus accounting for both the lattice orientation and the morphology of the grain. Several fatigue simulations at various stress amplitudes are conducted using a sub-modeling technique for the attribution of boundary conditions on the polycrystalline aggregate models including an elliptic defect. The influence of the microstructure attributes and stress level on the location and amplitude of the FIP is then quantified and discussed.
He, Zhili; Feng, Gang; Yang, Bin; Yang, Lijiang; Liu, Cheng-Wen; Xu, Hong-Guang; Xu, Xi-Ling; Zheng, Wei-Jun; Gao, Yi Qin
2018-06-14
To understand the initial hydration processes of CaCl₂, we performed molecular simulations employing a force field based on the theory of electronic continuum correction with rescaling. Integrated tempering sampling molecular dynamics was combined with ab initio calculations to overcome the sampling challenge in cluster structure search and refinement. The calculated vertical detachment energies of CaCl₂(H₂O)ₙ⁻ (n = 0-8) were compared with the values obtained from photoelectron spectra, and consistency was found between experiment and computation. Separation of the Cl-Ca ion pair is investigated in CaCl₂(H₂O)ₙ⁻ anions, where breaking the first Ca-Cl ionic bond requires 4 water molecules, and both Ca-Cl bonds are broken when the number of water molecules is larger than 7. For neutral CaCl₂(H₂O)ₙ clusters, breaking of the first Ca-Cl bond starts at n = 5, and 8 water molecules are not enough to separate the two ion pairs. Comparison with observations on magnesium chloride shows that separating one ion pair in CaCl₂(H₂O)ₙ requires fewer water molecules than in MgCl₂(H₂O)ₙ. Coincidentally, the solubility of calcium chloride is higher than that of magnesium chloride in bulk solutions.
Stochastic dynamic modeling of regular and slow earthquakes
Aso, N.; Ando, R.; Ide, S.
2017-12-01
Both regular and slow earthquakes are slip phenomena on plate boundaries and are simulated by (quasi-)dynamic modeling [Liu and Rice, 2005]. In these numerical simulations, spatial heterogeneity is usually considered not only for explaining real physical properties but also for evaluating the stability of the calculations or the sensitivity of the results to the conditions. However, even if we discretize the model space with small grids, heterogeneity at scales smaller than the grid size is not considered in models with deterministic governing equations. To evaluate the effect of heterogeneity at the smaller scales, we need to consider stochastic interactions between slip and stress in a dynamic modeling. Tidal stress is known to trigger or affect both regular and slow earthquakes [Yabe et al., 2015; Ide et al., 2016], and such an external force with fluctuation can also be considered as a stochastic external force. A healing process of faults may also be stochastic, so we introduce a stochastic friction law. In the present study, we propose a stochastic dynamic model to explain both regular and slow earthquakes. We solve the mode III problem, which corresponds to rupture propagation along the strike direction. We use a BIEM (boundary integral equation method) scheme to simulate slip evolution, but we add stochastic perturbations to the governing equations, which are usually written in a deterministic manner. As the simplest type of perturbation, we adopt Gaussian deviations in the formulation of the slip-stress kernel, the external force, and the friction. By increasing the amplitude of perturbations of the slip-stress kernel, we reproduce the complicated rupture process of regular earthquakes, including unilateral and bilateral ruptures. By perturbing the external force, we reproduce slow rupture propagation at a scale of km/day. The slow propagation generated by a combination of fast interaction at S-wave velocity is analogous to the kinetic theory of gases: thermal
Diagrammatic methods in phase-space regularization
International Nuclear Information System (INIS)
Bern, Z.; Halpern, M.B.; California Univ., Berkeley
1987-11-01
Using the scalar prototype and gauge theory as the simplest possible examples, diagrammatic methods are developed for the recently proposed phase-space form of continuum regularization. A number of one-loop and all-order applications are given, including general diagrammatic discussions of the no-growth theorem and the uniqueness of the phase-space stochastic calculus. The approach also generates an alternate derivation of the equivalence of the large-β phase-space regularization to the more conventional coordinate-space regularization. (orig.)
J-regular rings with injectivities
Shen, Liang
2010-01-01
A ring $R$ is called a J-regular ring if $R/J(R)$ is von Neumann regular, where $J(R)$ is the Jacobson radical of $R$. It is proved that if $R$ is J-regular, then (i) $R$ is right $n$-injective if and only if every homomorphism from an $n$-generated small right ideal of $R$ to $R_{R}$ can be extended to one from $R_{R}$ to $R_{R}$; (ii) $R$ is right FP-injective if and only if $R$ is right (J, R)-FP-injective. Some known results are improved.
Global regularization method for planar restricted three-body problem
Directory of Open Access Journals (Sweden)
Sharaf M.A.
2015-01-01
In this paper, a global regularization method for the planar restricted three-body problem is proposed by using the transformation z = x+iy = ν cosⁿ(u+iv), where i = √−1, 0 < ν ≤ 1 and n is a positive integer. The method is developed analytically and computationally. For the analytical developments, analytical solutions in power series of the pseudotime τ are obtained for positions and velocities (u, v, u′, v′) and (x, y, ẋ, ẏ) in the regularized and physical planes respectively; the physical time t is also obtained as a power series in τ. Moreover, relations between the coefficients of the power series are obtained for two consecutive values of n. Also, we develop analytical solutions in power series form for the inverse problem of finding τ in terms of t. As typical examples, three symbolic expressions for the coefficients of the power series are developed in terms of initial values. As to the computational developments, the global regularized equations of motion are developed together with their initial values in forms suitable for digital computation using any differential equations solver. On the other hand, for numerical evaluation of the power series, an efficient method based on continued fraction theory is provided.
Regular shock refraction in planar ideal MHD
International Nuclear Information System (INIS)
Delmont, P; Keppens, R
2010-01-01
We study the classical problem of planar shock refraction at an oblique density discontinuity, separating two gases at rest, in planar ideal (magneto)hydrodynamics. In the hydrodynamical case, 3 signals arise and the interface becomes Richtmyer-Meshkov unstable due to vorticity deposition on the shocked contact. In the magnetohydrodynamical case, on the other hand, when the normal component of the magnetic field does not vanish, 5 signals will arise. The interface then typically remains stable, since the Rankine-Hugoniot jump conditions in ideal MHD do not allow for vorticity deposition on a contact discontinuity. We present an exact Riemann solver based solution strategy to describe the initial self similar refraction phase. Using grid-adaptive MHD simulations, we show that after reflection from the top wall, the interface remains stable.
Generalized regular genus for manifolds with boundary
Directory of Open Access Journals (Sweden)
Paola Cristofori
2003-05-01
We introduce a generalization of the regular genus, a combinatorial invariant of PL manifolds ([10]), which is proved to be strictly related, in dimension three, to generalized Heegaard splittings defined in [12].
Geometric regularizations and dual conifold transitions
International Nuclear Information System (INIS)
Landsteiner, Karl; Lazaroiu, Calin I.
2003-01-01
We consider a geometric regularization for the class of conifold transitions relating D-brane systems on noncompact Calabi-Yau spaces to certain flux backgrounds. This regularization respects the SL(2,Z) invariance of the flux superpotential, and allows for computation of the relevant periods through the method of Picard-Fuchs equations. The regularized geometry is a noncompact Calabi-Yau which can be viewed as a monodromic fibration, with the nontrivial monodromy being induced by the regulator. It reduces to the original, non-monodromic background when the regulator is removed. Using this regularization, we discuss the simple case of the local conifold, and show how the relevant field-theoretic information can be extracted in this approach. (author)
Fast and compact regular expression matching
DEFF Research Database (Denmark)
Bille, Philip; Farach-Colton, Martin
2008-01-01
We study 4 problems in string matching, namely, regular expression matching, approximate regular expression matching, string edit distance, and subsequence indexing, on a standard word RAM model of computation that allows logarithmic-sized words to be manipulated in constant time. We show how to improve the space and/or remove a dependency on the alphabet size for each problem, using either an improved tabulation technique of an existing algorithm or by combining known algorithms in a new way.
Regular-fat dairy and human health
DEFF Research Database (Denmark)
Astrup, Arne; Bradley, Beth H Rice; Brenna, J Thomas
2016-01-01
In recent history, some dietary recommendations have treated dairy fat as an unnecessary source of calories and saturated fat in the human diet. These assumptions, however, have recently been brought into question by current research on regular-fat dairy products and human health. In an effort to…, cheese and yogurt can be important components of an overall healthy dietary pattern. Systematic examination of the effects of dietary patterns that include regular-fat milk, cheese and yogurt on human health is warranted.
Deterministic automata for extended regular expressions
Directory of Open Access Journals (Sweden)
Syzdykov Mirzakhmet
2017-12-01
In this work we present algorithms to produce a deterministic finite automaton (DFA) for extended operators in regular expressions, such as intersection, subtraction and complement. A method of "overriding" the source NFA (an NFA not defined by the subset-construction rules) is used. Past work described only the algorithm for the AND-operator (intersection of regular languages); in this paper the construction for the MINUS-operator (and complement) is shown.
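The classical product construction underlying such extended operators is easy to sketch. The dict-based DFA encoding below is an illustrative choice, not the representation used in the paper; only the acceptance condition changes between AND (intersection) and MINUS (difference).

```python
def product_dfa(d1, d2, mode="and"):
    """Product construction for extended regular operators on complete DFAs.
    A DFA here is a dict {'start': q0, 'accept': set, 'delta': {(q, a): q'}}
    over a shared alphabet. mode 'and' yields L(d1) ∩ L(d2);
    mode 'minus' yields L(d1) \\ L(d2)."""
    alphabet = {a for (_, a) in d1['delta']}
    start = (d1['start'], d2['start'])
    delta, accept, todo, seen = {}, set(), [start], {start}
    while todo:
        q1, q2 = q = todo.pop()
        ok1, ok2 = q1 in d1['accept'], q2 in d2['accept']
        # Acceptance in the product state is a boolean combination of the parts.
        if (ok1 and ok2) if mode == "and" else (ok1 and not ok2):
            accept.add(q)
        for a in alphabet:
            r = (d1['delta'][(q1, a)], d2['delta'][(q2, a)])
            delta[(q, a)] = r
            if r not in seen:
                seen.add(r)
                todo.append(r)
    return {'start': start, 'accept': accept, 'delta': delta}

def accepts(dfa, word):
    q = dfa['start']
    for a in word:
        q = dfa['delta'][(q, a)]
    return q in dfa['accept']
```

Complement of a complete DFA is obtained by swapping accepting and non-accepting states, so MINUS can equivalently be built as AND with the complemented second automaton.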
Regularities of intermediate adsorption complex relaxation
International Nuclear Information System (INIS)
Manukova, L.A.
1982-01-01
Experimental data characterizing the regularities of intermediate adsorption complex relaxation in the polycrystalline Mo-N₂ system at 77 K are given. The molecular beam method has been used in the investigation. Analytical expressions are obtained for the regular changes, during relaxation, of the full and specific rates of transition from the intermediate state into the ''non-reversible'' state, of desorption into the gas phase, and of accumulation of particles in the intermediate state.
Online Manifold Regularization by Dual Ascending Procedure
Sun, Boliang; Li, Guohui; Jia, Li; Zhang, Hui
2013-01-01
We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of the hinge function is key to transferring manifold regularization from the offline to the online setting in this paper. Our algorithms are derived by gradient ascent on the dual function. For practical purposes, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approaches.
Scott, Laura A; Roxburgh, Amanda; Bruno, Raimondo; Matthews, Allison; Burns, Lucy
2012-09-01
Residual effects of ecstasy use induce neurotransmitter changes that make it biologically plausible that extended use of the drug may induce psychological distress. However, there has been only mixed support for this in the literature. The presence of polysubstance use is a confounding factor. The aim of this study was to investigate whether regular cannabis and/or regular methamphetamine use confers additional risk of poor mental health and high levels of psychological distress, beyond regular ecstasy use alone. Three years of data from a yearly, cross-sectional, quantitative survey of Australian regular ecstasy users were examined. Participants were divided into four groups according to whether they regularly (at least monthly) used ecstasy only (n=936), ecstasy and weekly cannabis (n=697), ecstasy and weekly methamphetamine (n=108) or ecstasy, weekly cannabis and weekly methamphetamine (n=180). Self-reported mental health problems and the Kessler Psychological Distress Scale (K10) were examined. Approximately one-fifth of participants self-reported at least one mental health problem, most commonly depression and anxiety. The addition of regular cannabis and/or methamphetamine use substantially increases the likelihood of self-reported mental health problems, particularly with regard to paranoia, over regular ecstasy use alone. Regular cannabis use remained significantly associated with self-reported mental health problems even when other differences between groups were accounted for. Regular cannabis and methamphetamine use was also associated with earlier initiation into ecstasy use. These findings suggest that patterns of drug use can help identify at-risk groups that could benefit from targeted approaches in education and interventions. Given that early initiation into substance use was more common in those with regular cannabis and methamphetamine use, and given that this group had a higher likelihood of mental health problems, work around delaying onset of initiation
Real time simulation techniques in Taiwan - Maanshan compact simulator
International Nuclear Information System (INIS)
Liang, K.-S.; Chuang, Y.-M.; Ko, H.-T.
2004-01-01
Recognizing the demand and potential market for simulators in various industries, a special project for real-time simulation technology transfer was initiated in Taiwan in 1991. In this technology transfer program, the most advanced real-time dynamic modules for nuclear power simulation were introduced. Those modules can be divided into two categories: one is modeling related, to capture the dynamic response of each system; the other is computer related, to provide a special real-time computing environment and man-machine interface. The modeling-related modules consist of the thermodynamic module, the three-dimensional core neutronics module and the advanced balance-of-plant module. As planned in the project, the technology transfer team was to build a compact simulator for the Maanshan power plant before the end of the project to demonstrate the success of the technology transfer program. The compact simulator was designed to complement training on the regular full-scope simulator already installed at the Maanshan plant. This compact simulator focuses on providing know-why training through enhanced graphic displays. The intended users are senior operators, instructors and nuclear engineers. In total, about 13 important systems were covered by the compact simulator, and multiple graphic displays on three color monitors mounted on the 10-foot compact panel help the user visualize detailed phenomena under scenarios of interest. (author)
Energy Technology Data Exchange (ETDEWEB)
Gurevich, S A; Ekimov, A I; Kudryavtsev, I A [AN SSSR, Leningrad (Russian Federation). Fiziko-Tekhnicheskij Inst.
1994-05-01
Regularities of CdS semiconductor nanocrystal growth in amorphous media (silicate glasses and SiO₂ thin films) are investigated. Dependences of the mean crystal dimension on annealing time show that, in accordance with the theory of phase decomposition, crystal growth proceeds through successive stages of nucleus formation and diffusion-limited growth. From the dependence of the mean nucleus radius on annealing temperature, the solubility temperatures of CdS in the matrix material are determined. The effect of the annealing-atmosphere composition on the growth and optical properties of CdS nanocrystals is shown.
Directory of Open Access Journals (Sweden)
S. Unterstrasser
2010-02-01
Simulations of contrail-to-cirrus transition were performed with an LES model. In Part 1 the impact of relative humidity, temperature and vertical wind shear was explored in a detailed parametric study. Here, we study atmospheric parameters like stratification and the depth of the supersaturated layer, and processes which may affect the contrail evolution. We consider contrails in various radiation scenarios, herein defined by the season, time of day and the presence of lower-level cloudiness which controls the radiance incident on the contrail layer. Under suitable conditions, controlled by the radiation scenario and stratification, radiative heating lifts the contrail-cirrus and prolongs its lifetime. The potential of contrail-driven secondary nucleation is investigated. We consider homogeneous nucleation and heterogeneous nucleation of preactivated soot cores released from sublimated contrail ice crystals. In our model the contrail dynamics triggered by radiative heating does not suffice to force homogeneous freezing of ambient liquid aerosol particles. Furthermore, our model results suggest that heterogeneous nucleation of preactivated soot cores is unimportant. Contrail evolution is not controlled by the depth of the supersaturated layer as long as it exceeds roughly 500 m; deep fallstreaks, however, need thicker layers. A variation of the initial ice crystal number is effective during the whole evolution of a contrail. Cutting soot particle emissions by two orders of magnitude can reduce the contrail timescale by one hour and the optical thickness by a factor of 5. Hence future engines with lower soot particle emissions could potentially reduce the climate impact of aviation.
Improvements in GRACE Gravity Fields Using Regularization
Save, H.; Bettadpur, S.; Tapley, B. D.
2008-12-01
The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on the global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products, and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time-series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or no systematic observation residuals, which is a frequent consequence of signal suppression from regularization. Up to degree 14, the signal in regularized solution shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude and small-spatial extent events - such as the Great Sumatra Andaman Earthquake of 2004 - are visible in the global solutions without using special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in the small river basins, like Indus and Nile for example, are clearly evident, in contrast to noisy estimates from RL04. The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or
Novel Harmonic Regularization Approach for Variable Selection in Cox’s Proportional Hazards Model
Directory of Open Access Journals (Sweden)
Ge-Jin Chu
2014-01-01
Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex L_q (1/2 < q < 1) regularizations, to select key risk factors in the Cox proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods.
Li, Huiyan; Sun, Xiaojuan; Xiao, Jinghua
2015-01-01
In this paper, we investigate how clustering factors influence the spiking regularity of a neuronal network composed of subnetworks. To do so, we fix the averaged coupling probability and the averaged coupling strength, and take the cluster number M, the ratio R of intra-connection probability to inter-connection probability, and the ratio S of intra-coupling strength to inter-coupling strength as control parameters. From the simulation results, we find that the spiking regularity of the neuronal networks varies little with R and S when M is fixed. However, the cluster number M can reduce the spiking regularity to a low level when the uniform neuronal network's spiking regularity is at a high level. Taken together, these results show that clustering factors have little influence on the spiking regularity when the total energy is fixed, which is controlled by the averaged coupling strength and the averaged connection probability.
Multi-task feature learning by using trace norm regularization
Directory of Open Access Journals (Sweden)
Jiangmei Zhang
2017-11-01
Full Text Available Multi-task learning can extract the correlation of multiple related machine learning problems to improve performance. This paper considers applying the multi-task learning method to learn a single task. We propose a new learning approach, which employs the mixture of expert model to divide a learning task into several related sub-tasks, and then uses the trace norm regularization to extract common feature representation of these sub-tasks. A nonlinear extension of this approach by using kernel is also provided. Experiments conducted on both simulated and real data sets demonstrate the advantage of the proposed approach.
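The shared-feature mechanism in the abstract above rests on the proximal operator of the trace (nuclear) norm, which soft-thresholds the singular values of the stacked task-weight matrix. A minimal NumPy sketch of that step (our own illustration, not the authors' code; `trace_norm_prox` and the toy matrix are hypothetical names):

```python
import numpy as np

def trace_norm_prox(W, tau):
    # Proximal step of the trace (nuclear) norm: soft-threshold
    # the singular values of the task-weight matrix W by tau.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Toy task-weight matrix: 10 features x 5 sub-tasks, true rank 3
rng = np.random.default_rng(0)
W = rng.standard_normal((10, 3)) @ rng.standard_normal((3, 5))
W_reg = trace_norm_prox(W, tau=1.0)

nuc = lambda M: np.linalg.svd(M, compute_uv=False).sum()
assert nuc(W_reg) <= nuc(W)   # shrinkage lowers the nuclear norm
```

Iterating this step inside a gradient method drives the sub-task weight vectors toward a common low-dimensional subspace, which is the "common feature representation" the abstract refers to.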
Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory
Suliman, Mohamed Abdalla Elhag
2016-10-06
In this work, we propose a new regularization approach for linear least-squares problems with random matrices. In the proposed constrained perturbation regularization approach, an artificial perturbation matrix with a bounded norm is forced into the system model matrix. This perturbation is introduced to improve the singular-value structure of the model matrix and, hence, the solution of the estimation problem. Relying on the randomness of the model matrix, a number of deterministic equivalents from random matrix theory are applied to derive the near-optimum regularizer that minimizes the mean-squared error of the estimator. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods for various estimated signal characteristics. In addition, simulations show that our approach is robust in the presence of model uncertainty.
32nm 1-D regular pitch SRAM bitcell design for interference-assisted lithography
Greenway, Robert T.; Jeong, Kwangok; Kahng, Andrew B.; Park, Chul-Hong; Petersen, John S.
2008-10-01
As optical lithography advances into the 45nm technology node and beyond, new manufacturing-aware design requirements have emerged. We address layout design for interference-assisted lithography (IAL), a double exposure method that combines maskless interference lithography (IL) and projection lithography (PL); cf. hybrid optical maskless lithography (HOMA) in [2] and [3]. Since IL can generate dense but regular pitch patterns, a key challenge to deployment of IAL is the conversion of existing designs to regular-linewidth, regular-pitch layouts. In this paper, we propose new 1-D regular pitch SRAM bitcell layouts which are amenable to IAL. We evaluate the feasibility of our bitcell designs via lithography simulations and circuit simulations, and confirm that the proposed bitcells can be successfully printed by IAL and that their electrical characteristics are comparable to those of existing bitcells.
Reconstruction of bremsstrahlung spectra from attenuation data using generalized simulated annealing
International Nuclear Information System (INIS)
Menin, O.H.; Martinez, A.S.; Costa, A.M.
2016-01-01
A generalized simulated annealing algorithm, combined with a suitable smoothing regularization function, is used to solve the inverse problem of X-ray spectrum reconstruction from attenuation data. The approach sets the initial acceptance and visitation temperatures and standardizes the terms of the objective function so as to automate the algorithm and accommodate different spectral ranges. Experiments with both numerical and measured attenuation data are presented. Results show that the algorithm reconstructs spectrum shapes accurately. It should be noted that the regularization function was formulated to guarantee a smooth spectrum; thus, the presented technique does not apply to X-ray spectra where characteristic radiation is present. - Highlights: • X-ray spectra reconstruction from attenuation data using generalized simulated annealing. • The algorithm employs a smoothing regularization function and sets the initial acceptance and visitation temperatures. • The algorithm is automated by standardizing the terms of the objective function. • The algorithm is compared with classical methods.
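The structure of such a reconstruction can be sketched in a few lines: anneal a non-negative spectrum against a data-misfit term plus a second-difference smoothing penalty. This is a classical Metropolis annealer, not the paper's generalized (Tsallis) scheme with separate acceptance and visitation temperatures; all names and parameter values below are our own illustrative choices:

```python
import numpy as np

def anneal_spectrum(A, y, lam=1.0, n_iter=20000, t0=1.0, seed=0):
    # Minimize ||A x - y||^2 + lam * (second-difference smoothness)
    # by Metropolis annealing with a simple 1/(1+k) cooling schedule.
    rng = np.random.default_rng(seed)
    cost = lambda x: np.sum((A @ x - y) ** 2) + lam * np.sum(np.diff(x, 2) ** 2)
    x = np.full(A.shape[1], max(y.mean(), 1e-9) / A.shape[1])
    c = cost(x)
    best, c_best = x.copy(), c
    for k in range(n_iter):
        t = t0 / (1.0 + k)                       # cooling schedule
        cand = x.copy()
        i = rng.integers(len(x))
        cand[i] = max(cand[i] + rng.normal(scale=0.1), 0.0)  # spectra are non-negative
        cc = cost(cand)
        if cc < c or rng.random() < np.exp(-(cc - c) / t):
            x, c = cand, cc
            if c < c_best:
                best, c_best = x.copy(), c
    return best

# Toy problem: smooth "spectrum" observed through a random attenuation matrix
rng = np.random.default_rng(1)
x_true = np.exp(-((np.arange(20) - 8.0) ** 2) / 20.0)
A = rng.uniform(0.5, 1.5, size=(40, 20))
y = A @ x_true
x_rec = anneal_spectrum(A, y, lam=0.5)
```

The smoothing term is exactly why the method cannot recover characteristic (line) radiation: sharp peaks are penalized by construction.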
Regular Expression Matching and Operational Semantics
Directory of Open Access Journals (Sweden)
Asiri Rathnayake
2011-08-01
Full Text Available Many programming languages and tools, ranging from grep to the Java String library, contain regular expression matchers. Rather than first translating a regular expression into a deterministic finite automaton, such implementations typically match the regular expression on the fly. Thus they can be seen as virtual machines interpreting the regular expression much as if it were a program with some non-deterministic constructs such as the Kleene star. We formalize this implementation technique for regular expression matching using operational semantics. Specifically, we derive a series of abstract machines, moving from the abstract definition of matching to increasingly realistic machines. First a continuation is added to the operational semantics to describe what remains to be matched after the current expression. Next, we represent the expression as a data structure using pointers, which enables redundant searches to be eliminated via testing for pointer equality. From there, we arrive both at Thompson's lockstep construction and a machine that performs some operations in parallel, suitable for implementation on a large number of cores, such as a GPU. We formalize the parallel machine using process algebra and report some preliminary experiments with an implementation on a graphics processor using CUDA.
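The first abstract machine described above, where "a continuation is added to the operational semantics to describe what remains to be matched", can be sketched directly as a continuation-passing matcher over a small regex AST (a toy illustration, not the paper's formalization; the tuple encoding is our own):

```python
# Continuation-passing matcher: the continuation k encodes
# "what remains to be matched after the current expression".
def match(r, s):
    def m(r, i, k):
        tag = r[0]
        if tag == 'eps':                  # empty expression
            return k(i)
        if tag == 'chr':                  # single literal character
            return i < len(s) and s[i] == r[1] and k(i + 1)
        if tag == 'cat':                  # concatenation r1 r2
            return m(r[1], i, lambda j: m(r[2], j, k))
        if tag == 'alt':                  # alternation r1 | r2
            return m(r[1], i, k) or m(r[2], i, k)
        if tag == 'star':                 # Kleene star r1*
            # require progress (j > i) so nullable bodies cannot loop
            return k(i) or m(r[1], i, lambda j: j > i and m(r, j, k))
    return bool(m(r, 0, lambda i: i == len(s)))

# (a|b)* a b b
ab = ('alt', ('chr', 'a'), ('chr', 'b'))
re_abb = ('cat', ('star', ab),
          ('cat', ('chr', 'a'), ('cat', ('chr', 'b'), ('chr', 'b'))))
assert match(re_abb, 'aabb')
assert not match(re_abb, 'aab')
```

This backtracking machine is the starting point; the paper's subsequent refinements (pointer representation, lockstep state sets) trade its exponential worst case for Thompson-style linear behaviour.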
Regularities, Natural Patterns and Laws of Nature
Directory of Open Access Journals (Sweden)
Stathis Psillos
2014-02-01
Full Text Available The goal of this paper is to sketch an empiricist metaphysics of laws of nature. The key idea is that there are regularities without regularity-enforcers. Differently put, there are natural laws without law-makers of a distinct metaphysical kind. This sketch will rely on the concept of a natural pattern and more significantly on the existence of a network of natural patterns in nature. The relation between a regularity and a pattern will be analysed in terms of mereology. Here is the road map. In section 2, I will briefly discuss the relation between empiricism and metaphysics, aiming to show that an empiricist metaphysics is possible. In section 3, I will offer arguments against stronger metaphysical views of laws. Then, in section 4 I will motivate nomic objectivism. In section 5, I will address the question ‘what is a regularity?’ and will develop a novel answer to it, based on the notion of a natural pattern. In section 6, I will raise the question: ‘what is a law of nature?’, the answer to which will be: a law of nature is a regularity that is characterised by the unity of a natural pattern.
Robust regularized least-squares beamforming approach to signal estimation
Suliman, Mohamed Abdalla Elhag
2017-05-12
In this paper, we address the problem of robust adaptive beamforming of signals received by a linear array. The challenge associated with the beamforming problem is twofold. Firstly, the process requires the inversion of the usually ill-conditioned covariance matrix of the received signals. Secondly, the steering vector pertaining to the direction of arrival of the signal of interest is not known precisely. To tackle these two challenges, the standard Capon beamformer is manipulated into a form where the beamformer output is obtained as a scaled version of the inner product of two vectors. The two vectors are linearly related to the steering vector and the received signal snapshot, respectively. The linear operator, in both cases, is the square root of the covariance matrix. A regularized least-squares (RLS) approach is proposed to estimate these two vectors and to provide robustness without exploiting prior information. Simulation results show that the RLS beamformer using the proposed regularization algorithm outperforms state-of-the-art beamforming algorithms, as well as other RLS beamformers that use standard regularization approaches.
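The textbook regularized fix for an ill-conditioned sample covariance is diagonal loading of the Capon beamformer. The sketch below shows that baseline (our own illustration with hypothetical names; the paper's contribution is precisely in choosing the regularizer more cleverly than a fixed load):

```python
import numpy as np

def loaded_capon(R, a, gamma):
    # Diagonally loaded Capon weights: w = (R + gamma I)^{-1} a,
    # scaled to satisfy the distortionless constraint w^H a = 1.
    Rinv_a = np.linalg.solve(R + gamma * np.eye(R.shape[0]), a)
    return Rinv_a / (a.conj() @ Rinv_a)

# Toy array: 8 sensors, broadside steering vector, only 5 snapshots,
# so the sample covariance is rank-deficient and needs regularization.
n = 8
a = np.ones(n, dtype=complex)
rng = np.random.default_rng(1)
snaps = rng.standard_normal((n, 5)) + 1j * rng.standard_normal((n, 5))
R = snaps @ snaps.conj().T / 5
w = loaded_capon(R, a, gamma=1e-1)
assert np.isclose(w.conj() @ a, 1.0)   # distortionless response preserved
```

With too few snapshots the unloaded solve is numerically unusable; the loading term gamma plays the role that the RLS regularization parameter plays in the paper.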
EIT Imaging Regularization Based on Spectral Graph Wavelets.
Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Vauhkonen, Marko; Wolf, Gerhard; Mueller-Lisse, Ullrich; Moeller, Knut
2017-09-01
The objective of electrical impedance tomographic reconstruction is to identify the distribution of tissue conductivity from electrical boundary conditions. This is an ill-posed inverse problem usually solved under the finite-element method framework. In previous studies, standard sparse regularization was used for difference electrical impedance tomography to achieve a sparse solution. However, regarding elementwise sparsity, standard sparse regularization interferes with the smoothness of the conductivity distribution between neighboring elements and is sensitive to noise. As a result, the reconstructed images are spiky and lack smoothness. Such unexpected artifacts are not realistic and may lead to misinterpretation in clinical applications. To eliminate such artifacts, we present a novel sparse regularization method that uses spectral graph wavelet transforms. Single-scale or multiscale graph wavelet transforms are employed to introduce local smoothness on different scales into the reconstructed images. The proposed approach relies on viewing finite-element meshes as undirected graphs and applying wavelet transforms derived from spectral graph theory. Reconstruction results from simulations, a phantom experiment, and patient data suggest that our algorithm is more robust to noise and produces more reliable images.
Main regularities of radiolytic transformations of bifunctional organic compounds
International Nuclear Information System (INIS)
Petryaev, E.P.; Shadyro, O.I.
1985-01-01
General regularities of the radiolysis of bifunctional organic compounds (α-diols, ethers of α-diols, amino alcohols, hydroxy aldehydes and hydroxy acids) in aqueous solutions are traced from the early stages of the process to the formation of final products. It is pointed out that the most characteristic course of the radiation-chemical transformation of bifunctional compounds in aqueous solutions is the fragmentation process, with monomolecular decomposition of the primary radicals of the initial substances and simultaneous scission of two bonds vicinal to the radical centre via a five-membered cyclic transition state. The data obtained are of importance for molecular radiobiology.
Adaptive L1/2 Shooting Regularization Method for Survival Analysis Using Gene Expression Data
Directory of Open Access Journals (Sweden)
Xiao-Ying Liu
2013-01-01
Full Text Available A new adaptive L1/2 shooting regularization method for variable selection based on the Cox proportional hazards model is proposed. This adaptive L1/2 shooting algorithm can be easily obtained by optimizing a reweighted iterative series of L1 penalties with a shooting strategy for the L1/2 penalty. Simulation results based on high-dimensional artificial data show that the adaptive L1/2 shooting regularization method can be more accurate for variable selection than the Lasso and adaptive Lasso methods. The results from a real gene expression dataset (DLBCL) also indicate that the L1/2 regularization method performs competitively.
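The "shooting" algorithm referred to above is cyclic coordinate descent with soft-thresholding for the Lasso; the adaptive L1/2 scheme wraps it in an outer loop that reweights the penalty per coordinate. Below is the inner Lasso shooting step only (a sketch with our own names and data; the Cox partial-likelihood version in the paper replaces the squared loss):

```python
import numpy as np

def shooting_lasso(X, y, lam, n_sweeps=100):
    # Cyclic coordinate descent ("shooting") for
    # 0.5 * ||y - X beta||^2 + lam * ||beta||_1.
    # The adaptive L1/2 variant reweights lam per coordinate in an
    # outer loop, roughly lam_j ~ lam / (|beta_j|**0.5 + eps).
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(p):
            resid = y - X @ beta + X[:, j] * beta[j]   # leave-one-out residual
            rho = X[:, j] @ resid
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

rng = np.random.default_rng(2)
X = rng.standard_normal((50, 10))
beta_true = np.zeros(10)
beta_true[0], beta_true[3] = 3.0, -2.0
y = X @ beta_true
beta_hat = shooting_lasso(X, y, lam=5.0)
```

Each coordinate update solves its one-dimensional subproblem exactly, so the objective decreases monotonically from the zero start.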
Regularizations of two-fold bifurcations in planar piecewise smooth systems using blowup
DEFF Research Database (Denmark)
Kristiansen, Kristian Uldall; Hogan, S. J.
2015-01-01
type of limit cycle that does not appear to be present in the original PWS system. For both types of limit cycle, we show that the criticality of the Hopf bifurcation that gives rise to periodic orbits is strongly dependent on the precise form of the regularization. Finally, we analyse the limit cycles...... as locally unique families of periodic orbits of the regularization and connect them, when possible, to limit cycles of the PWS system. We illustrate our analysis with numerical simulations and show how the regularized system can undergo a canard explosion phenomenon...
Manifold regularized multitask feature learning for multimodality disease classification.
Jie, Biao; Zhang, Daoqiang; Cheng, Bo; Shen, Dinggang
2015-02-01
Multimodality based methods have shown great advantages in classification of Alzheimer's disease (AD) and its prodromal stage, that is, mild cognitive impairment (MCI). Recently, multitask feature selection methods are typically used for joint selection of common features across multiple modalities. However, one disadvantage of existing multimodality based methods is that they ignore the useful data distribution information in each modality, which is essential for subsequent classification. Accordingly, in this paper we propose a manifold regularized multitask feature learning method to preserve both the intrinsic relatedness among multiple modalities of data and the data distribution information in each modality. Specifically, we denote the feature learning on each modality as a single task, and use a group-sparsity regularizer to capture the intrinsic relatedness among multiple tasks (i.e., modalities) and jointly select the common features from multiple tasks. Furthermore, we introduce a new manifold-based Laplacian regularizer to preserve the data distribution information from each task. Finally, we use the multikernel support vector machine method to fuse multimodality data for eventual classification. Moreover, we also extend our method to the semisupervised setting, where only partial data are labeled. We evaluate our method using the baseline magnetic resonance imaging (MRI), fluorodeoxyglucose positron emission tomography (FDG-PET), and cerebrospinal fluid (CSF) data of subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The experimental results demonstrate that our proposed method can not only achieve improved classification performance, but also help to discover the disease-related brain regions useful for disease diagnosis. © 2014 Wiley Periodicals, Inc.
Fractional Regularization Term for Variational Image Registration
Directory of Open Access Journals (Sweden)
Rafael Verdú-Monedero
2009-01-01
Full Text Available Image registration is a widely used task of image analysis with applications in many fields. Its classical formulation and current improvements are given in the spatial domain. In this paper a regularization term based on fractional order derivatives is formulated. This term is defined and implemented in the frequency domain by translating the energy functional into the frequency domain and obtaining the Euler-Lagrange equations which minimize it. The new regularization term leads to a simple formulation and design, being applicable to higher dimensions by using the corresponding multidimensional Fourier transform. The proposed regularization term allows for a real gradual transition from a diffusion registration to a curvature registration, which is better suited to some applications and is not possible in the spatial domain. Results with actual 3D images show the validity of this approach.
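The frequency-domain construction behind such a regularizer is the fractional derivative computed via the Fourier symbol (iω)^α; sweeping α from 1 to 2 interpolates between first-order (diffusion-like) and second-order (curvature-like) smoothing. A minimal sketch (our own illustration; `frac_deriv` is a hypothetical name):

```python
import numpy as np

def frac_deriv(u, alpha, dx=1.0):
    # Fractional derivative of order alpha on a periodic grid,
    # implemented by multiplying the spectrum by (i*omega)**alpha.
    n = len(u)
    omega = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    symbol = (1j * omega) ** alpha
    return np.fft.ifft(symbol * np.fft.fft(u)).real

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
u = np.sin(x)
# alpha = 1 recovers the ordinary derivative: d/dx sin = cos
assert np.allclose(frac_deriv(u, 1.0, dx=x[1] - x[0]), np.cos(x), atol=1e-6)
```

Non-integer α (e.g. 1.5) has no simple spatial-domain stencil, which is why the paper argues the gradual diffusion-to-curvature transition is only available in the frequency domain.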
International Nuclear Information System (INIS)
Obregon, Octavio; Quevedo, Hernando; Ryan, Michael P.
2004-01-01
We construct a family of time and angular dependent, regular S-brane solutions which corresponds to a simple analytical continuation of the Zipoy-Voorhees 4-dimensional vacuum spacetime. The solutions are asymptotically flat and turn out to be free of singularities without requiring a twist in space. They can be considered as the simplest non-singular generalization of the singular S0-brane solution. We analyze the properties of a representative of this family of solutions and show that it resembles to some extent the asymptotic properties of the regular Kerr S-brane. The R-symmetry corresponds, however, to the general Lorentzian symmetry. Several generalizations of this regular solution are derived, which include a charged S-brane and an additional dilatonic field. (author)
Online Manifold Regularization by Dual Ascending Procedure
Directory of Open Access Journals (Sweden)
Boliang Sun
2013-01-01
Full Text Available We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of the hinge function is key to transferring manifold regularization from the offline to the online setting. Our algorithms are derived by gradient ascent on the dual function. For practical purposes, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approaches. An important conclusion is that our online MR algorithms can handle settings where the target hypothesis is not fixed but drifts with the sequence of examples. We also recap and draw connections to earlier works. This paper paves the way for the design and analysis of online manifold regularization algorithms.
Regular transport dynamics produce chaotic travel times.
Villalobos, Jorge; Muñoz, Víctor; Rogan, José; Zarama, Roberto; Johnson, Neil F; Toledo, Benjamín; Valdivia, Juan Alejandro
2014-06-01
In the hope of making passenger travel times shorter and more reliable, many cities are introducing dedicated bus lanes (e.g., Bogota, London, Miami). Here we show that chaotic travel times are actually a natural consequence of individual bus function, and hence of public transport systems more generally, i.e., chaotic dynamics emerge even when the route is empty and straight, stops and lights are equidistant and regular, and loading times are negligible. More generally, our findings provide a novel example of chaotic dynamics emerging from a single object following Newton's laws of motion in a regularized one-dimensional system.
Regularity of difference equations on Banach spaces
Agarwal, Ravi P; Lizama, Carlos
2014-01-01
This work introduces readers to the topic of maximal regularity for difference equations. The authors systematically present the method of maximal regularity, outlining basic linear difference equations along with relevant results. They address recent advances in the field, as well as basic semigroup and cosine operator theories in the discrete setting. The authors also identify some open problems that readers may wish to take up for further research. This book is intended for graduate students and researchers in the area of difference equations, particularly those with advanced knowledge of and interest in functional analysis.
PET regularization by envelope guided conjugate gradients
International Nuclear Information System (INIS)
Kaufman, L.; Neumaier, A.
1996-01-01
The authors propose a new way to iteratively solve large-scale ill-posed problems, and in particular the image reconstruction problem in positron emission tomography, by exploiting the relation between Tikhonov regularization and multiobjective optimization to iteratively obtain approximations to the Tikhonov L-curve and its corner. Monitoring the change of the approximate L-curves allows the regularization parameter to be adjusted adaptively during a preconditioned conjugate gradient iteration, so that the desired solution can be reconstructed with a small number of iterations.
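The L-curve the abstract refers to plots residual norm against solution norm as the Tikhonov parameter varies; its corner balances data fit against regularization. A direct SVD-based sketch of the curve (our own illustration; the paper builds these approximations inside the CG iteration rather than from an explicit SVD):

```python
import numpy as np

def l_curve_points(A, b, lambdas):
    # For each lam, solve x(lam) = argmin ||Ax - b||^2 + lam^2 ||x||^2
    # via the SVD and record (residual norm, solution norm).
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    pts = []
    for lam in lambdas:
        x = Vt.T @ (s * beta / (s ** 2 + lam ** 2))   # filtered inverse
        pts.append((np.linalg.norm(A @ x - b), np.linalg.norm(x)))
    return pts

# Mildly ill-posed toy problem: geometrically decaying singular values
rng = np.random.default_rng(3)
n = 20
A = rng.standard_normal((n, n)) * 0.01 + np.diag(0.8 ** np.arange(n))
b = A @ np.ones(n) + 1e-3 * rng.standard_normal(n)
lams = [1e-4, 1e-3, 1e-2, 1e-1, 1.0]
pts = l_curve_points(A, b, lams)
```

As lam grows the residual norm is non-decreasing and the solution norm non-increasing, which is what gives the curve its characteristic L shape on log-log axes.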
Matrix regularization of embedded 4-manifolds
International Nuclear Information System (INIS)
Trzetrzelewski, Maciej
2012-01-01
We consider products of two 2-manifolds, such as S²×S², embedded in Euclidean space, and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)⊗SU(N), i.e. functions on the manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N²×N² matrix representations of the 4-algebra (and, as a byproduct, of the 3-algebra, which makes the regularization of S³ possible as well).
On a correspondence between regular and non-regular operator monotone functions
DEFF Research Database (Denmark)
Gibilisco, P.; Hansen, Frank; Isola, T.
2009-01-01
We prove the existence of a bijection between the regular and the non-regular operator monotone functions satisfying a certain functional equation. As an application we give a new proof of the operator monotonicity of certain functions related to the Wigner-Yanase-Dyson skew information.
Recursive regularization step for high-order lattice Boltzmann methods
Coreixas, Christophe; Wissocq, Gauthier; Puigt, Guillaume; Boussuge, Jean-François; Sagaut, Pierre
2017-09-01
A lattice Boltzmann method (LBM) with enhanced stability and accuracy is presented for various Hermite tensor-based lattice structures. The collision operator relies on a regularization step, which is here improved through a recursive computation of nonequilibrium Hermite polynomial coefficients. In addition to the reduced computational cost of this procedure with respect to the standard one, the recursive step allows one to considerably enhance the stability and accuracy of the numerical scheme by properly filtering out second- (and higher-) order nonhydrodynamic contributions in under-resolved conditions. This is first shown in the isothermal case, where the simulation of the doubly periodic shear layer is performed with a Reynolds number ranging from 10⁴ to 10⁶, and where a thorough analysis of the case at Re = 3×10⁴ is conducted. In the latter, results obtained using both regularization steps are compared against the Bhatnagar-Gross-Krook LBM for standard (D2Q9) and high-order (D2V17 and D2V37) lattice structures, confirming the tremendous increase of the stability range of the proposed approach. Further comparisons on thermal and fully compressible flows, using the general extension of this procedure, are then conducted through the numerical simulation of Sod shock tubes with the D2V37 lattice. They confirm the stability increase induced by the recursive approach as compared with the standard one.
Regularity and irreversibility of weekly travel behavior
Kitamura, R.; van der Hoorn, A.I.J.M.
1987-01-01
Dynamic characteristics of travel behavior are analyzed in this paper using weekly travel diaries from two waves of panel surveys conducted six months apart. An analysis of activity engagement indicates the presence of significant regularity in weekly activity participation between the two waves.
Regular and context-free nominal traces
DEFF Research Database (Denmark)
Degano, Pierpaolo; Ferrari, Gian-Luigi; Mezzetti, Gianluca
2017-01-01
Two kinds of automata are presented, for recognising new classes of regular and context-free nominal languages. We compare their expressive power with analogous proposals in the literature, showing that they express novel classes of languages. Although many properties of classical languages hold ...
Faster 2-regular information-set decoding
Bernstein, D.J.; Lange, T.; Peters, C.P.; Schwabe, P.; Chee, Y.M.
2011-01-01
Fix positive integers B and w. Let C be a linear code over F 2 of length Bw. The 2-regular-decoding problem is to find a nonzero codeword consisting of w length-B blocks, each of which has Hamming weight 0 or 2. This problem appears in attacks on the FSB (fast syndrome-based) hash function and
Complexity in union-free regular languages
Czech Academy of Sciences Publication Activity Database
Jirásková, G.; Masopust, Tomáš
2011-01-01
Roč. 22, č. 7 (2011), s. 1639-1653 ISSN 0129-0541 Institutional research plan: CEZ:AV0Z10190503 Keywords : Union-free regular language * one-cycle-free-path automaton * descriptional complexity Subject RIV: BA - General Mathematics Impact factor: 0.379, year: 2011 http://www.worldscinet.com/ijfcs/22/2207/S0129054111008933.html
Regular Gleason Measures and Generalized Effect Algebras
Dvurečenskij, Anatolij; Janda, Jiří
2015-12-01
We study measures, finitely additive measures, regular measures, and σ-additive measures that can attain even infinite values on the quantum logic of a Hilbert space. We show when particular classes of non-negative measures can be studied in the frame of generalized effect algebras.
Regularization of finite temperature string theories
International Nuclear Information System (INIS)
Leblanc, Y.; Knecht, M.; Wallet, J.C.
1990-01-01
The tachyonic divergences occurring in the free energy of various string theories at finite temperature are eliminated through the use of regularization schemes and analytic continuations. For closed strings, we obtain finite expressions which, however, develop an imaginary part above the Hagedorn temperature, whereas open string theories are still plagued with dilatonic divergences. (orig.)
A Sim(2) invariant dimensional regularization
Directory of Open Access Journals (Sweden)
J. Alfaro
2017-09-01
Full Text Available We introduce a Sim(2) invariant dimensional regularization of loop integrals. We then compute the one-loop quantum corrections to the photon self-energy, electron self-energy and vertex in the electrodynamics sector of the Very Special Relativity Standard Model (VSRSM).
Continuum regularized Yang-Mills theory
International Nuclear Information System (INIS)
Sadun, L.A.
1987-01-01
Using the machinery of stochastic quantization, Z. Bern, M. B. Halpern, C. Taubes and I recently proposed a continuum regularization technique for quantum field theory. This regularization may be implemented by applying a regulator to either the (d + 1)-dimensional Parisi-Wu Langevin equation or, equivalently, to the d-dimensional second order Schwinger-Dyson (SD) equations. This technique is non-perturbative, respects all gauge and Lorentz symmetries, and is consistent with a ghost-free gauge fixing (Zwanziger's). This thesis is a detailed study of this regulator, and of regularized Yang-Mills theory, using both perturbative and non-perturbative techniques. The perturbative analysis comes first. The mechanism of stochastic quantization is reviewed, and a perturbative expansion based on second-order SD equations is developed. A diagrammatic method (SD diagrams) for evaluating terms of this expansion is developed. We apply the continuum regulator to a scalar field theory. Using SD diagrams, we show that all Green functions can be rendered finite to all orders in perturbation theory. Even non-renormalizable theories can be regularized. The continuum regulator is then applied to Yang-Mills theory, in conjunction with Zwanziger's gauge fixing. A perturbative expansion of the regulator is incorporated into the diagrammatic method. It is hoped that the techniques discussed in this thesis will contribute to the construction of a renormalized Yang-Mills theory in 3 and 4 dimensions.
Gravitational lensing by a regular black hole
International Nuclear Information System (INIS)
Eiroa, Ernesto F; Sendra, Carlos M
2011-01-01
In this paper, we study a regular Bardeen black hole as a gravitational lens. We find the strong deflection limit for the deflection angle, from which we obtain the positions and magnifications of the relativistic images. As an example, we apply the results to the particular case of the supermassive black hole at the center of our galaxy.
Analytic stochastic regularization and gauge invariance
International Nuclear Information System (INIS)
Abdalla, E.; Gomes, M.; Lima-Santos, A.
1986-05-01
A proof that analytic stochastic regularization breaks gauge invariance is presented. This is done by an explicit one-loop calculation of the vacuum polarization tensor in scalar electrodynamics, which turns out not to be transverse. The counterterm structure, Langevin equations, and the construction of composite operators in the general framework of stochastic quantization are also analysed. (Author)
Annotation of regular polysemy and underspecification
DEFF Research Database (Denmark)
Martínez Alonso, Héctor; Pedersen, Bolette Sandford; Bel, Núria
2013-01-01
We present the result of an annotation task on regular polysemy for a series of semantic classes or dot types in English, Danish and Spanish. This article describes the annotation process, the results in terms of inter-encoder agreement, and the sense distributions obtained with two methods...
Stabilization, pole placement, and regular implementability
Belur, MN; Trentelman, HL
In this paper, we study control by interconnection of linear differential systems. We give necessary and sufficient conditions for regular implementability of a given linear differential system. We formulate the problems of stabilization and pole placement as problems of finding a suitable,
12 CFR 725.3 - Regular membership.
2010-01-01
... UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person credit....5(b) of this part, and forwarding with its completed application funds equal to one-half of this... 1, 1979, is not required to forward these funds to the Facility until October 1, 1979. (3...
Supervised scale-regularized linear convolutionary filters
DEFF Research Database (Denmark)
Loog, Marco; Lauze, Francois Bernard
2017-01-01
also be solved relatively efficiently. All in all, the idea is to properly control the scale of a trained filter, which we achieve by introducing a specific regularization term into the overall objective function. We demonstrate, on an artificial filter learning problem, the capabilities of our basic...
On regular riesz operators | Raubenheimer | Quaestiones ...
African Journals Online (AJOL)
The r-asymptotically quasi finite rank operators on Banach lattices are examples of regular Riesz operators. We characterise Riesz elements in a subalgebra of a Banach algebra in terms of Riesz elements in the Banach algebra. This enables us to characterise r-asymptotically quasi finite rank operators in terms of adjoint ...
Regularized Discriminant Analysis: A Large Dimensional Study
Yang, Xiaoke
2018-04-28
In this thesis, we focus on studying the performance of general regularized discriminant analysis (RDA) classifiers. The data used for analysis is assumed to follow a Gaussian mixture model with different means and covariances. RDA offers a rich class of regularization options, covering as special cases the regularized linear discriminant analysis (RLDA) and the regularized quadratic discriminant analysis (RQDA) classifiers. We analyze RDA under the double asymptotic regime where the data dimension and the training size both increase in a proportional way. This double asymptotic regime allows for the application of fundamental results from random matrix theory. Under the double asymptotic regime and some mild assumptions, we show that the asymptotic classification error converges to a deterministic quantity that only depends on the data statistical parameters and dimensions. This result not only reveals mathematical relations between the misclassification error and the class statistics, but can also be leveraged to select the optimal parameters that minimize the classification error, thus yielding the optimal classifier. Validation results on synthetic data show a good accuracy of our theoretical findings. We also construct a general consistent estimator to approximate the true classification error when the underlying statistics are unknown. We benchmark the performance of our proposed consistent estimator against a classical estimator on synthetic data. The observations demonstrate that the general estimator outperforms others in terms of mean squared error (MSE).
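The core RDA ingredient is a sample covariance shrunk toward the identity before inversion. A minimal binary-class sketch (our own illustration with hypothetical names; the thesis selects the shrinkage parameter via its asymptotic error formulas rather than by hand):

```python
import numpy as np

def rda_fit_predict(X0, X1, Xtest, gamma):
    # Binary RDA sketch with a pooled covariance shrunk toward I:
    # gamma = 0 gives plain LDA, gamma = 1 a nearest-means rule.
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Xc = np.vstack([X0 - mu0, X1 - mu1])
    S = Xc.T @ Xc / (len(Xc) - 2)                     # pooled covariance
    S_reg = (1 - gamma) * S + gamma * np.eye(S.shape[0])
    w = np.linalg.solve(S_reg, mu1 - mu0)             # discriminant direction
    thresh = w @ (mu0 + mu1) / 2.0
    return (Xtest @ w > thresh).astype(int)

rng = np.random.default_rng(4)
X0 = rng.standard_normal((100, 5))
X1 = rng.standard_normal((100, 5)) + 3.0              # well-separated classes
labels = rda_fit_predict(X0, X1, np.vstack([X0, X1]), gamma=0.5)
acc = np.mean(labels == np.r_[np.zeros(100), np.ones(100)])
```

In the double asymptotic regime the thesis studies, the error of exactly this kind of classifier concentrates around a deterministic function of (mu0, mu1, S, gamma), which is what makes analytic parameter tuning possible.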
Complexity in union-free regular languages
Czech Academy of Sciences Publication Activity Database
Jirásková, G.; Masopust, Tomáš
2011-01-01
Roč. 22, č. 7 (2011), s. 1639-1653 ISSN 0129-0541 Institutional research plan: CEZ:AV0Z10190503 Keywords : Union-free regular language * one-cycle-free- path automaton * descriptional complexity Subject RIV: BA - General Mathematics Impact factor: 0.379, year: 2011 http://www.worldscinet.com/ijfcs/22/2207/S0129054111008933.html
Bit-coded regular expression parsing
DEFF Research Database (Denmark)
Nielsen, Lasse; Henglein, Fritz
2011-01-01
the DFA-based parsing algorithm due to Dubé and Feeley to emit the bits of the bit representation without explicitly materializing the parse tree itself. We furthermore show that Frisch and Cardelli’s greedy regular expression parsing algorithm can be straightforwardly modified to produce bit codings...
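The core idea of bit-coded parsing can be illustrated with a toy greedy backtracking parser that emits one bit per choice point. This is only a sketch, not the Dubé-Feeley or Frisch-Cardelli algorithms, and the bit conventions are assumptions (0 = left alternative or one more Kleene iteration, 1 = right alternative or stop):

```python
# Regexes as nested tuples: ('chr', c), ('alt', r1, r2),
# ('seq', r1, r2), ('star', r).
def parse(re, s, i=0):
    """Return (bits, next_index) for a greedy parse of s starting at i,
    or None if no match; bits encode the parse tree compactly."""
    kind = re[0]
    if kind == 'chr':
        return ([], i + 1) if i < len(s) and s[i] == re[1] else None
    if kind == 'alt':
        for bit, sub in ((0, re[1]), (1, re[2])):
            r = parse(sub, s, i)
            if r is not None:
                return ([bit] + r[0], r[1])
        return None
    if kind == 'seq':
        r1 = parse(re[1], s, i)
        if r1 is None:
            return None
        r2 = parse(re[2], s, r1[1])
        return ([*r1[0], *r2[0]], r2[1]) if r2 is not None else None
    if kind == 'star':
        r = parse(re[1], s, i)
        if r is not None and r[1] > i:   # greedy: iterate while progress
            rest = parse(re, s, r[1])
            return ([0] + r[0] + rest[0], rest[1])
        return ([1], i)
```

For `(a|b)*` on the input `ab` this emits `[0,0,0,1,1]`: enter an iteration, take the left branch (`a`), enter again, take the right branch (`b`), stop. The papers' point is that such bit sequences can be produced in streaming fashion without building the tree.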
Tetravalent one-regular graphs of order 4p²
DEFF Research Database (Denmark)
Feng, Yan-Quan; Kutnar, Klavdija; Marusic, Dragan
2014-01-01
A graph is one-regular if its automorphism group acts regularly on the set of its arcs. In this paper tetravalent one-regular graphs of order 4p², where p is a prime, are classified.
NPP Krsko simulator training for operations personnel
International Nuclear Information System (INIS)
Pribozic, F.; Krajnc, J.
2000-01-01
Acquisition of a full scope replica simulator represents an important achievement for Nuclear Power Plant Krsko. Operating nuclear power plant systems is a set of demanding and complex tasks. The most important element in assuring the capability to handle such tasks is efficient training of the operations personnel who manipulate controls in the main control room. Use of a simulator during the training process is essential and cannot be substituted by other techniques. This article gives an overview of NPP Krsko licensed personnel training: historical background, current experience and plans for future training activities. Reactor operator initial training lasts approximately two and a half years. Training is divided into several phases, consisting of theoretical and practical segments, including simulator training. In the past, simulator initial training and annual simulator retraining were contracted, thus operators were trained on non-specific full scope simulators. Use of our own plant-specific simulator and associated infrastructure will have a significant effect on the operations personnel training process and, in addition, will also support secondary uses, with the common goal of improving safe and reliable plant operation. A regular annual retraining program has successfully started. Use of the plant-specific simulator assures consistent training and good management oversight, enhances conformity of operational practices and supports optimization of operating procedures. (author)
Commercial application of rainfall simulation
Loch, Rob J.
2010-05-01
to rain; and • Regular calibration of all equipment. In general, typical errors when rainfall simulation is carried out by inexperienced researchers include: • Failure to accurately measure rainfall rates (the most common error); • Inappropriate initial conditions, including wetting treatments; • Use of inappropriately small plots, reflecting our concern that the erosion processes considered should be those of genuine field relevance; • Inappropriate rainfall kinetic energies; and • Failure to observe critical processes operating on the study plots, such as saturation excess or the presence of impeding layers at shallow depths. Landloch regularly uses erodibility data to design stable batter profiles for minesite waste dumps. Subsequent monitoring of designed dumps has confirmed that modelled erosion rates are consistent with those subsequently measured under field conditions.
DWPF SB6 Initial CPC Flowsheet Testing SB6-1 TO SB6-4L Tests Of SB6-A And SB6-B Simulants
International Nuclear Information System (INIS)
Lambert, D.; Pickenheim, B.; Best, D.
2009-01-01
The Defense Waste Processing Facility (DWPF) will transition from Sludge Batch 5 (SB5) processing to Sludge Batch 6 (SB6) processing in late fiscal year 2010. Tests were conducted using non-radioactive simulants of the expected SB6 composition to determine the impact of varying the acid stoichiometry during the Sludge Receipt and Adjustment Tank (SRAT) and Slurry Mix Evaporator (SME) processes. The work was conducted to meet the Technical Task Request (TTR) HLW/DWPF/TTR-2008-0043, Rev. 0 and followed the guidelines of a Task Technical and Quality Assurance Plan (TT and QAP). The flowsheet studies are performed to evaluate the potential chemical processing issues, hydrogen generation rates, and process slurry rheological properties as a function of acid stoichiometry. These studies were conducted with the estimated SB6 composition at the time of the study. This composition assumed a blend of 101,085 kg of Tank 4 insoluble solids and 179,000 kg of Tank 12 insoluble solids. The current plans are to subject Tank 12 sludge to aluminum dissolution. Liquid Waste Operations assumed that 75% of the aluminum would be dissolved during this process. After dissolution and blending of Tank 4 sludge slurry, plans included washing the contents of Tank 51 to ∼1M Na. After the completion of washing, the plan assumes that 40 inches of Tank 40 slurry would remain for blending with the qualified SB6 material. There are several parameters that are noteworthy concerning SB6 sludge: (1) This is the second batch DWPF will be processing that contains sludge that has had a significant fraction of aluminum removed through aluminum dissolution; (2) The sludge is high in mercury, but the projected concentration is lower than SB5; (3) The sludge is high in noble metals, but the projected concentrations are lower than SB5; and (4) The sludge is high in U and Pu - components that are not added in sludge simulants. Six DWPF process simulations were completed in 4-L laboratory-scale equipment using
Save, H.; Bettadpur, S. V.
2013-12-01
It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that have very little residual stripes while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process and uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.
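The CSR two-step destriping procedure described above is specific to GRACE processing, but its building block, Tikhonov regularization, reduces to a damped least-squares solve. A generic sketch (illustrative only; not the CSR implementation, and the function name is ours):

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min_x ||A x - b||^2 + lam * ||x||^2 via the normal equations:
    x = (A^T A + lam I)^{-1} A^T b.  Larger lam damps noise-dominated
    components of the solution at the cost of some bias."""
    p = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(p), A.T @ b)
```

Choosing lam is the crux in practice: too small leaves stripes (noise), too large suppresses signal, which is why the paper's stated goal is capturing all observed signal within the noise level.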
Extreme values, regular variation and point processes
Resnick, Sidney I
1987-01-01
Extreme Values, Regular Variation and Point Processes is a readable and efficient account of the fundamental mathematical and stochastic process techniques needed to study the behavior of extreme values of phenomena based on independent and identically distributed random variables and vectors. It presents a coherent treatment of the distributional and sample path fundamental properties of extremes and records. It emphasizes the core primacy of three topics necessary for understanding extremes: the analytical theory of regularly varying functions; the probabilistic theory of point processes and random measures; and the link to asymptotic distribution approximations provided by the theory of weak convergence of probability measures in metric spaces. The book is self-contained and requires an introductory measure-theoretic course in probability as a prerequisite. Almost all sections have an extensive list of exercises which extend developments in the text, offer alternate approaches, test mastery and provide for enj...
Stream Processing Using Grammars and Regular Expressions
DEFF Research Database (Denmark)
Rasmussen, Ulrik Terp
disambiguation. The first algorithm operates in two passes in a semi-streaming fashion, using a constant amount of working memory and an auxiliary tape storage which is written in the first pass and consumed by the second. The second algorithm is a single-pass and optimally streaming algorithm which outputs...... as much of the parse tree as is semantically possible based on the input prefix read so far, and resorts to buffering as many symbols as is required to resolve the next choice. Optimality is obtained by performing a PSPACE-complete pre-analysis on the regular expression. In the second part we present...... Kleenex, a language for expressing high-performance streaming string processing programs as regular grammars with embedded semantic actions, and its compilation to streaming string transducers with worst-case linear-time performance. Its underlying theory is based on transducer decomposition into oracle...
Describing chaotic attractors: Regular and perpetual points
Dudkowski, Dawid; Prasad, Awadhesh; Kapitaniak, Tomasz
2018-03-01
We study the concepts of regular and perpetual points for describing the behavior of chaotic attractors in dynamical systems. The idea of these points, which have been recently introduced to theoretical investigations, is thoroughly discussed and extended into new types of models. We analyze the correlation between regular and perpetual points, as well as their relation with phase space, showing the potential usefulness of both types of points in the qualitative description of co-existing states. The ability of perpetual points to find attractors is indicated, along with its potential cause. The location of chaotic trajectories and the sets of considered points is investigated, and a study of the stability of the systems is presented. A statistical analysis of observing the desired states is performed. We focus on various types of dynamical systems, i.e., chaotic flows with self-excited and hidden attractors, forced mechanical models, and semiconductor superlattices, exhibiting the universality of appearance of the observed patterns and relations.
Chaos regularization of quantum tunneling rates
International Nuclear Information System (INIS)
Pecora, Louis M.; Wu Dongho; Lee, Hoshik; Antonsen, Thomas; Lee, Ming-Jer; Ott, Edward
2011-01-01
Quantum tunneling rates through a barrier separating two-dimensional, symmetric, double-well potentials are shown to depend on the classical dynamics of the billiard trajectories in each well and, hence, on the shape of the wells. For shapes that lead to regular (integrable) classical dynamics, the tunneling rates fluctuate greatly with the eigenenergies of the states, sometimes by over two orders of magnitude. In contrast, shapes that lead to completely chaotic trajectories yield tunneling rates whose fluctuations are greatly reduced, a phenomenon we call regularization of tunneling rates. We show that a random-plane-wave theory of tunneling accounts for the mean tunneling rates and the small fluctuation variances for the chaotic systems.
Contour Propagation With Riemannian Elasticity Regularization
DEFF Research Database (Denmark)
Bjerre, Troels; Hansen, Mads Fogtmann; Sapru, W.
2011-01-01
Purpose/Objective(s): Adaptive techniques allow for correction of spatial changes during the time course of the fractionated radiotherapy. Spatial changes include tumor shrinkage and weight loss, causing tissue deformation and residual positional errors even after translational and rotational image...... the planning CT onto the rescans and correcting to reflect actual anatomical changes. For deformable registration, a free-form, multi-level, B-spline deformation model with Riemannian elasticity, penalizing non-rigid local deformations and volumetric changes, was used. Regularization parameters were defined...... on the original delineation and tissue deformation in the time course between scans form a better starting point than rigid propagation. There was no significant difference between locally and globally defined regularization. The method used in the present study suggests that deformed contours need to be reviewed...
Thin accretion disk around regular black hole
Directory of Open Access Journals (Sweden)
QIU Tianqi
2014-08-01
Full Text Available Penrose's cosmic censorship conjecture says that naked singularities do not exist in nature. So it seems reasonable to further conjecture that not even a singularity exists in nature. In this paper, a regular black hole without a singularity is studied in detail, especially its thin accretion disk, energy flux, radiation temperature and accretion efficiency. It is found that the interaction of the regular black hole is stronger than that of the Schwarzschild black hole. Furthermore, the thin accretion disk loses energy more efficiently as the mass of the black hole decreases. These particular properties may be used to distinguish between the two types of black holes.
Convex nonnegative matrix factorization with manifold regularization.
Hu, Wenjun; Choi, Kup-Sze; Wang, Peiliang; Jiang, Yunliang; Wang, Shitong
2015-03-01
Nonnegative Matrix Factorization (NMF) has been extensively applied in many areas, including computer vision, pattern recognition, text mining, and signal processing. However, nonnegative entries are usually required for the data matrix in NMF, which limits its application. Besides, while the basis and encoding vectors obtained by NMF can represent the original data in low dimension, the representations do not always reflect the intrinsic geometric structure embedded in the data. Motivated by manifold learning and Convex NMF (CNMF), we propose a novel matrix factorization method called Graph Regularized and Convex Nonnegative Matrix Factorization (GCNMF) by introducing a graph regularized term into CNMF. The proposed matrix factorization technique not only inherits the intrinsic low-dimensional manifold structure, but also allows the processing of mixed-sign data matrix. Clustering experiments on nonnegative and mixed-sign real-world data sets are conducted to demonstrate the effectiveness of the proposed method. Copyright © 2014 Elsevier Ltd. All rights reserved.
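The graph-regularized term described above can be illustrated with standard multiplicative updates in the spirit of graph-regularized NMF; note this sketch factorizes a nonnegative matrix directly and is not the convex (CNMF-based) GCNMF variant the paper proposes, and all names and parameters are ours:

```python
import numpy as np

def gnmf(X, A, k, lam=0.1, iters=200, seed=0):
    """Graph-regularized NMF sketch.
    X: (m, n) nonnegative data; A: (n, n) nonnegative sample-graph adjacency.
    Minimizes ||X - W H||_F^2 + lam * Tr(H L H^T) with L = D - A,
    using multiplicative updates that keep W, H nonnegative."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    D = np.diag(A.sum(axis=1))
    eps = 1e-12                      # guards against division by zero
    for _ in range(iters):
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        H *= (W.T @ X + lam * H @ A) / (W.T @ W @ H + lam * H @ D + eps)
    return W, H
```

The lam * H @ A numerator / lam * H @ D denominator pair is what pulls encodings of graph-adjacent samples toward each other, encoding the manifold assumption.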
Muratore, Sydne; Kim, Michael; Olasky, Jaisa; Campbell, Andre; Acton, Robert
2017-02-01
The ACS/ASE Medical Student Simulation-Based Skills Curriculum was developed to standardize medical student training. This study aims to evaluate the feasibility and validity of implementing the basic airway curriculum. This single-center, prospective study of medical students participating in the basic airway module from 12/2014-3/2016 consisted of didactics, small-group practice, and testing in a simulated clinical scenario. Proficiency was determined by a checklist of skills (1-15), global score (1-5), and letter grade (NR-needs review, PS-proficient in simulation scenario, CP-proficient in clinical scenario). A proportion of students completed pre/post-test surveys regarding experience, satisfaction, comfort, and self-perceived proficiency. Over 16 months, 240 students were enrolled with 98% deemed proficient in a simulated or clinical scenario. Pre/post-test surveys (n = 126) indicated improvement in self-perceived proficiency by 99% of learners. All students felt moderately to very comfortable performing basic airway skills and 94% had moderate to considerable satisfaction after completing the module. The ACS/ASE Surgical Skills Curriculum is a feasible and effective way to teach medical students basic airway skills using simulation. Copyright © 2016 Elsevier Inc. All rights reserved.
A short proof of increased parabolic regularity
Directory of Open Access Journals (Sweden)
Stephen Pankavich
2015-08-01
Full Text Available We present a short proof of the increased regularity obtained by solutions to uniformly parabolic partial differential equations. Though this setting is fairly introductory, our new method of proof, which uses a priori estimates and an inductive method, can be extended to prove analogous results for problems with time-dependent coefficients, advection-diffusion or reaction-diffusion equations, and nonlinear PDEs even when other tools, such as semigroup methods or the use of explicit fundamental solutions, are unavailable.
Regular black hole in three dimensions
Myung, Yun Soo; Yoon, Myungseok
2008-01-01
We find a new black hole in three-dimensional anti-de Sitter space by introducing an anisotropic perfect fluid inspired by the noncommutative black hole. This is a regular black hole with two horizons. We compare the thermodynamics of this black hole with that of the non-rotating BTZ black hole. The first law of thermodynamics is not compatible with the Bekenstein-Hawking entropy.
Sparse regularization for force identification using dictionaries
Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng
2016-04-01
The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct harmonic forces, including sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
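The dictionary-plus-l1 formulation above can be sketched with plain iterative soft thresholding (ISTA) and a cosine dictionary; SpaRSA itself uses adaptive step sizes and continuation, so this is a simplified stand-in, with the identity transfer and all names being our illustrative assumptions:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, iters=200):
    """Minimize 0.5*||A c - y||^2 + lam*||c||_1 by iterative soft
    thresholding with a fixed step 1/L (SpaRSA adapts the step instead)."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    c = np.zeros(A.shape[1])
    for _ in range(iters):
        c = soft(c - A.T @ (A @ c - y) / L, lam / L)
    return c

def cosine_dictionary(N):
    """Orthonormal DCT-II-style dictionary: N cosine atoms as columns."""
    n, k = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
    D = np.cos(np.pi * (n + 0.5) * k / N)
    return D / np.linalg.norm(D, axis=0)
```

A harmonic "force" that is a combination of two cosine atoms is then recovered as a 2-sparse coefficient vector, mirroring the paper's discrete-cosine reconstruction of harmonic forces.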
Analytic stochastic regularization and gauge theories
International Nuclear Information System (INIS)
Abdalla, E.; Gomes, M.; Lima-Santos, A.
1987-04-01
We prove that analytic stochastic regularization breaks gauge invariance. This is done by an explicit one-loop calculation of the two-, three- and four-point vertex functions of the gluon field in scalar chromodynamics, which turn out not to be gauge invariant. We analyse the counterterm structure, Langevin equations and the construction of composite operators in the general framework of stochastic quantization. (author) [pt
Preconditioners for regularized saddle point matrices
Czech Academy of Sciences Publication Activity Database
Axelsson, Owe
2011-01-01
Roč. 19, č. 2 (2011), s. 91-112 ISSN 1570-2820 Institutional research plan: CEZ:AV0Z30860518 Keywords : saddle point matrices * preconditioning * regularization * eigenvalue clustering Subject RIV: BA - General Mathematics Impact factor: 0.533, year: 2011 http://www.degruyter.com/view/j/jnma.2011.19.issue-2/jnum.2011.005/jnum.2011.005. xml
Analytic stochastic regularization: gauge and supersymmetry theories
International Nuclear Information System (INIS)
Abdalla, M.C.B.
1988-01-01
Analytic stochastic regularization for gauge and supersymmetric theories is considered. Gauge invariance in spinor and scalar QCD is verified to break down by an explicit one-loop computation of the two-, three- and four-point vertex functions of the gluon field. As a result, non-gauge-invariant counterterms must be added. However, in the supersymmetric multiplets there is a cancellation rendering the counterterms gauge invariant. The calculation is considered at one-loop order. (author) [pt
Regularized forecasting of chaotic dynamical systems
International Nuclear Information System (INIS)
Bollt, Erik M.
2017-01-01
While local models of dynamical systems have been highly successful in using extensive data sets observing even a chaotic dynamical system to produce useful forecasts, a typical problem arises. Specifically, with the k-nearest neighbors (kNN) method, local observations occur due to recurrences in a chaotic system, and this allows local models to be built by regression to low-dimensional polynomial approximations of the underlying system, estimating a Taylor series. This has been a popular approach, particularly in the context of scalar data observations represented by time-delay embedding methods. However, such local models generally allow for spatial discontinuities of forecasts when considered globally, meaning jumps in predictions, because the collected near neighbors vary from point to point. The source of these discontinuities is that the set of near neighbors varies discontinuously with respect to the position of the sample point, and so therefore does the model built from the near neighbors. It is possible to utilize local information inferred from near neighbors as usual but at the same time to impose a degree of regularity on a global scale. We present here a new global perspective extending the general local modeling concept. In so doing, we show how this perspective allows us to impose prior presumed regularity into the model by invoking Tikhonov regularity theory, since this classic perspective of optimization in ill-posed problems naturally balances fitting an objective with some prior assumed form of the result, such as continuity or derivative regularity. This all reduces to matrix manipulations, which we demonstrate on a simple data set, with the implication that it may find much broader context.
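The local-modeling baseline being extended above can be sketched as a kNN regression with a Tikhonov-regularized affine model per query; this is the classic local approach the paper starts from (with our own names and parameter choices), not the paper's global regularization scheme:

```python
import numpy as np

def knn_ridge_forecast(series, m=3, k=10, lam=1e-2):
    """One-step forecast of a scalar series: time-delay embed with window m,
    take the k nearest past states to the current state, and fit a
    Tikhonov-regularized (ridge) affine model on their successors."""
    n = len(series)
    X = np.array([series[i:i + m] for i in range(n - m)])  # embedded states
    y = series[m:]                                          # their successors
    q = series[-m:]                      # current state; successor unknown
    d = np.linalg.norm(X - q, axis=1)
    idx = np.argsort(d)[:k]              # k nearest recurrences
    Xk = np.hstack([X[idx], np.ones((k, 1))])   # affine local model
    w = np.linalg.solve(Xk.T @ Xk + lam * np.eye(m + 1), Xk.T @ y[idx])
    return np.append(q, 1.0) @ w
```

The discontinuity the paper targets is visible here: as q moves, idx changes abruptly, so w (and the forecast) can jump; the ridge term regularizes each local fit but not the global model.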
Minimal length uncertainty relation and ultraviolet regularization
Kempf, Achim; Mangano, Gianpiero
1997-06-01
Studies in string theory and quantum gravity suggest the existence of a finite lower limit Δx0 to the possible resolution of distances, at the latest on the scale of the Planck length of 10-35 m. Within the framework of the Euclidean path integral we explicitly show ultraviolet regularization in field theory through this short distance structure. Both rotation and translation invariance can be preserved. An example is studied in detail.
Optimal Design of the Adaptive Normalized Matched Filter Detector using Regularized Tyler Estimators
Kammoun, Abla; Couillet, Romain; Pascal, Frederic; Alouini, Mohamed-Slim
2017-01-01
This article addresses improvements on the design of the adaptive normalized matched filter (ANMF) for radar detection. It is well-acknowledged that the estimation of the noise-clutter covariance matrix is a fundamental step in adaptive radar detection. In this paper, we consider regularized estimation methods which force by construction the eigenvalues of the covariance estimates to be greater than a positive regularization parameter ρ. This makes them more suitable for high dimensional problems with a limited number of secondary data samples than traditional sample covariance estimates. The motivation behind this work is to understand the effect of ρ and to properly set its value so as to improve estimate conditioning while maintaining a low estimation bias. More specifically, we consider the design of the ANMF detector for two kinds of regularized estimators, namely the regularized sample covariance matrix (RSCM) and the regularized Tyler estimator (RTE). The rationale behind this choice is that the RTE is efficient in mitigating the degradation caused by the presence of impulsive noises while inducing little loss when the noise is Gaussian. Based on asymptotic results brought by recent tools from random matrix theory, we propose a design for the regularization parameter that maximizes the asymptotic detection probability under constant asymptotic false alarm rates. Simulations support the efficiency of the proposed method, illustrating its gain over conventional settings of the regularization parameter.
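The regularized Tyler estimator (RTE) named above is commonly computed by a shrunk fixed-point iteration. A sketch of one standard form of that iteration (trace-normalized each step; the function name, normalization choice and stopping rule are our assumptions, and the paper's contribution is the asymptotic design of ρ, not this iteration itself):

```python
import numpy as np

def regularized_tyler(X, rho, iters=100):
    """Regularized Tyler fixed point for X of shape (n, p), rows centered:
    S <- (1-rho)*(p/n) * sum_i x_i x_i^T / (x_i^T S^{-1} x_i) + rho*I,
    renormalized to trace p.  The per-sample normalization discards the
    (possibly impulsive) scale of each sample, giving robustness."""
    n, p = X.shape
    S = np.eye(p)
    for _ in range(iters):
        Sinv = np.linalg.inv(S)
        q = np.einsum('ij,jk,ik->i', X, Sinv, X)   # x_i^T S^{-1} x_i
        S = (1 - rho) * (p / n) * (X / q[:, None]).T @ X + rho * np.eye(p)
        S *= p / np.trace(S)
    return S
```

Because each sample is normalized by its own quadratic form, heavy-tailed texture (as in compound-Gaussian clutter) does not corrupt the shape estimate, which is exactly the RTE property the abstract highlights.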
Regularity and chaos in cavity QED
International Nuclear Information System (INIS)
Bastarrachea-Magnani, Miguel Angel; López-del-Carpio, Baldemar; Chávez-Carlos, Jorge; Lerma-Hernández, Sergio; Hirsch, Jorge G
2017-01-01
The interaction of a quantized electromagnetic field in a cavity with a set of two-level atoms inside it can be described with algebraic Hamiltonians of increasing complexity, from the Rabi to the Dicke models. Their algebraic character allows, through the use of coherent states, a semiclassical description in phase space, where the non-integrable Dicke model has regions associated with regular and chaotic motion. The appearance of classical chaos can be quantified calculating the largest Lyapunov exponent over the whole available phase space for a given energy. In the quantum regime, employing efficient diagonalization techniques, we are able to perform a detailed quantitative study of the regular and chaotic regions, where the quantum participation ratio (PR) of coherent states on the eigenenergy basis plays a role equivalent to the Lyapunov exponent. It is noted that, in the thermodynamic limit, dividing the participation ratio by the number of atoms leads to a positive value in chaotic regions, while it tends to zero in the regular ones. (paper)
Solution path for manifold regularized semisupervised classification.
Wang, Gang; Wang, Fei; Chen, Tao; Yeung, Dit-Yan; Lochovsky, Frederick H
2012-04-01
Traditional learning algorithms use only labeled data for training. However, labeled examples are often difficult or time consuming to obtain since they require substantial human labeling efforts. On the other hand, unlabeled data are often relatively easy to collect. Semisupervised learning addresses this problem by using large quantities of unlabeled data with labeled data to build better learning algorithms. In this paper, we use the manifold regularization approach to formulate the semisupervised learning problem where a regularization framework which balances a tradeoff between loss and penalty is established. We investigate different implementations of the loss function and identify the methods which have the least computational expense. The regularization hyperparameter, which determines the balance between loss and penalty, is crucial to model selection. Accordingly, we derive an algorithm that can fit the entire path of solutions for every value of the hyperparameter. Its computational complexity after preprocessing is quadratic only in the number of labeled examples rather than the total number of labeled and unlabeled examples.
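The manifold-regularization framework described above balances a loss on labeled points against a smoothness penalty over a graph built from all points. A minimal linear (primal) sketch of Laplacian-regularized least squares; the paper works with kernels and fits the entire solution path in the hyperparameter, which this fixed-hyperparameter sketch does not attempt (all names and parameter choices are ours):

```python
import numpy as np

def laprls_linear(Xl, yl, Xu, gamma_a=1e-2, gamma_i=1e-1, knn=5):
    """Linear Laplacian-regularized least squares:
    w = argmin sum_labeled (x^T w - y)^2 + gamma_a*||w||^2
        + gamma_i * w^T X^T L X w,
    where L is the Laplacian of a kNN graph over labeled + unlabeled data."""
    X = np.vstack([Xl, Xu])
    n = len(X)
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq dists
    A = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(D2[i])[1:knn + 1]:            # skip self
            A[i, j] = A[j, i] = 1.0
    L = np.diag(A.sum(1)) - A                             # graph Laplacian
    M = Xl.T @ Xl + gamma_a * np.eye(X.shape[1]) + gamma_i * X.T @ L @ X
    return np.linalg.solve(M, Xl.T @ yl)
```

The gamma_a / gamma_i pair is exactly the loss-penalty tradeoff whose path the paper's algorithm traces efficiently.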
Regularizations: different recipes for identical situations
International Nuclear Information System (INIS)
Gambin, E.; Lobo, C.O.; Battistel, O.A.
2004-03-01
We present a discussion where the choice of the regularization procedure and the routing for the internal lines momenta are put at the same level of arbitrariness in the analysis of Ward identities involving simple and well-known problems in QFT. These are the complex self-interacting scalar field and two simple models where the SVV and AVV processes are pertinent. We show that, in all these problems, the conditions for the preservation of symmetry relations are put in terms of the same combination of divergent Feynman integrals, which are evaluated in the context of a very general calculational strategy concerning the manipulations and calculations involving divergences. Within the adopted strategy, all the arbitrariness intrinsic to the problem is still maintained in the final results and, consequently, a perfect map can be obtained with the corresponding results of the traditional regularization techniques. We show that, when we require a universal interpretation for the arbitrariness involved, in order to get consistency with all stated physical constraints, a strong condition is imposed for regularizations which automatically eliminates the ambiguities associated with the routing of the internal lines momenta of loops. The conclusion is clean and sound: the association between ambiguities and unavoidable symmetry violations in Ward identities cannot be maintained if a unique recipe is required for identical situations in the evaluation of divergent physical amplitudes. (author)
Directory of Open Access Journals (Sweden)
Y. Kamae
2012-05-01
Full Text Available The mid-Pliocene (3.3 to 3.0 million yr ago), a globally warm period before the Quaternary, has recently been attracting attention as a new target for paleoclimate modelling and data-model synthesis. This paper reports the set-ups and results of experiments proposed in the Pliocene Model Intercomparison Project (PlioMIP) using a global climate model, MRI-CGCM2.3. We conducted pre-industrial and mid-Pliocene runs by using the coupled atmosphere-ocean general circulation model (AOGCM) and its atmospheric component (AGCM) for the PlioMIP Experiments 2 and 1, respectively. In addition, we conducted two types of integrations in the AOGCM simulation, with and without flux adjustments on the sea surface. General characteristics of the differences in the simulated mid-Pliocene climate relative to the pre-industrial climate in the three integrations are compared, as are the patterns of mid-Pliocene biomes predicted from the three climate simulations. Generally, the difference in simulated surface climate between the AGCM and the AOGCM is larger than that between the two AOGCM runs with and without flux adjustments. The simulated climate shows different patterns between the AGCM and the AOGCM, particularly over low-latitude oceans, subtropical land regions and high-latitude oceans. The AOGCM simulations do not reproduce the wetter environment in the subtropics relative to the present day that is suggested by terrestrial proxy data. The differences between the two types of AOGCM runs are small over land, but evident over the ocean, particularly in the North Atlantic and polar regions.
The persistence of the attentional bias to regularities in a changing environment.
Yu, Ru Qi; Zhao, Jiaying
2015-10-01
The environment is often stable, but some aspects may change over time. The challenge for the visual system is to discover and flexibly adapt to such changes. We examined how attention is shifted in the presence of changes in the underlying structure of the environment. In six experiments, observers viewed four simultaneous streams of objects while performing a visual search task. In the first half of each experiment, the stream in the structured location contained regularities, the shapes in the random location were randomized, and gray squares appeared in two neutral locations. In the second half, the stream in the structured or the random location could change. In the first half of all experiments, visual search was facilitated in the structured location, suggesting that attention was consistently biased toward regularities. In the second half, this bias persisted in the structured location when no change occurred (Experiment 1), when the regularities were removed (Experiment 2), or when new regularities embedded in the original or novel stimuli emerged in the previously random location (Experiments 3 and 6). However, visual search was numerically but no longer reliably faster in the structured location when the initial regularities were removed and new regularities were introduced in the previously random location (Experiment 4), or when novel random stimuli appeared in the random location (Experiment 5). This suggests that the attentional bias had weakened. Overall, the results demonstrate that the attentional bias to regularities was persistent but also sensitive to changes in the environment.
Sparsity regularization for parameter identification problems
International Nuclear Information System (INIS)
Jin, Bangti; Maass, Peter
2012-01-01
The investigation of regularization schemes with sparsity-promoting penalty terms has been one of the dominant topics in the field of inverse problems in recent years, and Tikhonov functionals with ℓp-penalty terms for 1 ⩽ p ⩽ 2 have been studied extensively. The first investigations focused on regularization properties of the minimizers of such functionals with linear operators and on iteration schemes for approximating the minimizers. These results were quickly transferred to nonlinear operator equations, including nonsmooth operators and more general function space settings. The latest results on regularization properties additionally assume a sparse representation of the true solution as well as generalized source conditions, which yield some surprising and optimal convergence rates. The regularization theory with ℓp sparsity constraints is relatively complete in this setting; see the first part of this review. In contrast, the development of efficient numerical schemes for approximating minimizers of Tikhonov functionals with sparsity constraints for nonlinear operators is still ongoing. The basic iterated soft shrinkage approach has been extended in several directions, and semi-smooth Newton methods are becoming applicable in this field. In particular, the extension to more general non-convex, non-differentiable functionals by variational principles leads to a variety of generalized iteration schemes. We focus on such iteration schemes in the second part of this review. A major part of this survey is devoted to applying sparsity-constrained regularization techniques to parameter identification problems for partial differential equations, which we regard as the prototypical setting for nonlinear inverse problems. Parameter identification problems exhibit different levels of complexity and we aim at characterizing a hierarchy of such problems. The operator defining these inverse problems is the parameter-to-state mapping. We first summarize some
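The iterated soft shrinkage approach mentioned in this abstract can be sketched for the linear case as follows; the operator, data, and penalty weight below are illustrative choices, not the review's own implementation:

```python
import numpy as np

def ista(A, b, lam, n_iter=3000):
    """Iterated soft shrinkage for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                     # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L                 # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft shrinkage
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100)
x_true[[3, 40, 77]] = [2.0, -1.5, 1.0]               # a 3-sparse ground truth
b = A @ x_true
x_hat = ista(A, b, lam=0.1)
print(np.flatnonzero(np.abs(x_hat) > 0.5))           # large entries sit on the true support
```

The shrinkage threshold lam/L couples the penalty weight to the step size; this is the fixed-step variant, without the accelerations discussed in the review.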
Group-regularized individual prediction: theory and application to pain.
Lindquist, Martin A; Krishnan, Anjali; López-Solà, Marina; Jepma, Marieke; Woo, Choong-Wan; Koban, Leonie; Roy, Mathieu; Atlas, Lauren Y; Schmidt, Liane; Chang, Luke J; Reynolds Losin, Elizabeth A; Eisenbarth, Hedwig; Ashar, Yoni K; Delk, Elizabeth; Wager, Tor D
2017-01-15
Multivariate pattern analysis (MVPA) has become an important tool for identifying brain representations of psychological processes and clinical outcomes using fMRI and related methods. Such methods can be used to predict or 'decode' psychological states in individual subjects. Single-subject MVPA approaches, however, are limited by the amount and quality of individual-subject data. In spite of higher spatial resolution, predictive accuracy from single-subject data often does not exceed what can be accomplished using coarser, group-level maps, because single-subject patterns are trained on limited amounts of often-noisy data. Here, we present a method that combines population-level priors, in the form of biomarker patterns developed on prior samples, with single-subject MVPA maps to improve single-subject prediction. Theoretical results and simulations motivate a weighting based on the relative variances of biomarker-based prediction (based on population-level predictive maps from prior groups) and individual-subject, cross-validated prediction. Empirical results predicting pain using brain activity on a trial-by-trial basis (single-trial prediction) across 6 studies (N=180 participants) confirm the theoretical predictions. Regularization based on a population-level biomarker, in this case the Neurologic Pain Signature (NPS), improved single-subject prediction accuracy compared with idiographic maps based on the individuals' data alone. The regularization scheme that we propose, which we term group-regularized individual prediction (GRIP), can be applied broadly to within-person MVPA-based prediction. We also show how GRIP can be used to evaluate data quality and provide benchmarks for the appropriateness of population-level maps like the NPS for a given individual or study. Copyright © 2015 Elsevier Inc. All rights reserved.
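The variance-based weighting described above can be illustrated with a minimal precision-weighted sketch; the numbers and the simple inverse-variance rule are assumptions for illustration, not the exact GRIP estimator:

```python
import numpy as np

def grip_combine(pred_group, pred_indiv, var_group, var_indiv):
    """Precision-weighted blend of group-level and individual predictions:
    each source is weighted by the inverse of its estimated error variance,
    so the noisier source contributes less (illustrative sketch)."""
    w_g, w_i = 1.0 / var_group, 1.0 / var_indiv
    return (w_g * pred_group + w_i * pred_indiv) / (w_g + w_i)

# Hypothetical trial-level pain predictions for one subject
group = np.array([3.0, 5.0, 7.0])   # from a population biomarker (an NPS-like map)
indiv = np.array([2.0, 6.0, 9.0])   # from the subject's own cross-validated model
combined = grip_combine(group, indiv, var_group=1.0, var_indiv=3.0)
print(combined)                      # leans toward the lower-variance group map
```

When the individual model is noisy (large var_indiv), the blend falls back toward the population map, which is the regularizing behavior the abstract describes.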
DEFF Research Database (Denmark)
Deroba, J. J.; Butterworth, D. S.; Methot, R. D.
2015-01-01
The World Conference on Stock Assessment Methods (July 2013) included a workshop on testing assessment methods through simulations. The exercise was made up of two steps applied to datasets from 14 representative fish stocks from around the world. Step 1 involved applying stock assessments to dat...
Learning Sparse Visual Representations with Leaky Capped Norm Regularizers
Wangni, Jianqiao; Lin, Dahua
2017-01-01
Sparsity-inducing regularization is an important part of learning over-complete visual representations. Despite the popularity of $\ell_1$ regularization, in this paper we investigate the use of non-convex regularizers for this problem. Our contribution consists of three parts. First, we propose the leaky capped norm regularizer (LCNR), which regularizes model weights below a certain threshold more strongly than those above, thereby imposing strong sparsity and...
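A minimal reading of such a penalty can be sketched as follows; the threshold and rate parameters are hypothetical, and the exact form of the LCNR in the paper may differ:

```python
import numpy as np

def leaky_capped_norm(w, theta=1.0, lam=1.0, beta=0.1):
    """Hypothetical leaky capped norm: entries below the threshold theta are
    penalized at the full rate lam, entries above it only at the smaller
    "leak" rate beta, so large informative weights are shrunk far less."""
    a = np.abs(w)
    return lam * np.minimum(a, theta) + beta * np.maximum(a - theta, 0.0)

w = np.array([0.2, 0.8, 1.0, 3.0])
penalties = leaky_capped_norm(w)
print(penalties)      # marginal penalty drops once |w| exceeds theta
```

With beta = 0 this reduces to the (non-leaky) capped norm; the leak keeps the objective coercive while still relieving large weights of the full penalty.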
Temporal regularity of the environment drives time perception
van Rijn, H; Rhodes, D; Di Luca, M
2016-01-01
It’s reasonable to assume that a regularly paced sequence should be perceived as regular, but here we show that perceived regularity depends on the context in which the sequence is embedded. We presented one group of participants with perceptually regularly paced sequences, and another group of participants with mostly irregularly paced sequences (75% irregular, 25% regular). The timing of the final stimulus in each sequence could be varied. In one experiment, we asked whether the last stim...
Entanglement in coined quantum walks on regular graphs
International Nuclear Information System (INIS)
Carneiro, Ivens; Loo, Meng; Xu, Xibai; Girerd, Mathieu; Kendon, Viv; Knight, Peter L
2005-01-01
Quantum walks, both discrete (coined) and continuous time, form the basis of several recent quantum algorithms. Here we use numerical simulations to study the properties of discrete, coined quantum walks. We investigate the variation in the entanglement between the coin and the position of the particle by calculating the entropy of the reduced density matrix of the coin. We consider both dynamical evolution and asymptotic limits for coins of dimensions from two to eight on regular graphs. For low coin dimensions, quantum walks which spread faster (as measured by the mean square deviation of their distribution from uniform) also exhibit faster convergence towards the asymptotic value of the entanglement between the coin and particle's position. For high-dimensional coins, the DFT coin operator is more efficient at spreading than the Grover coin. We study the entanglement of the coin on regular finite graphs such as cycles, and also show that on complete bipartite graphs, a quantum walk with a Grover coin is always periodic with period four. We generalize the 'glued trees' graph used by Childs et al (2003 Proc. STOC, pp 59-68) to higher branching rate (fan out) and verify that the scaling with branching rate and with tree depth is polynomial
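The central quantities studied here, the reduced density matrix of the coin and its entanglement entropy, can be computed for a small example with a few lines of linear algebra. The sketch below runs a Hadamard-coined walk on a cycle; the cycle size, step count, and initial coin state are illustrative choices:

```python
import numpy as np

def coin_entropy_on_cycle(n_nodes=8, steps=20):
    """Hadamard-coined discrete quantum walk on a cycle; returns the
    entanglement entropy (in bits) of the reduced coin density matrix."""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin operator
    psi = np.zeros((n_nodes, 2), dtype=complex)    # amplitudes psi[position, coin]
    psi[0] = [1 / np.sqrt(2), 1j / np.sqrt(2)]     # symmetric initial coin state
    for _ in range(steps):
        psi = psi @ H.T                            # coin flip
        stepped = np.empty_like(psi)
        stepped[:, 0] = np.roll(psi[:, 0], 1)      # coin 0 steps one way round the cycle
        stepped[:, 1] = np.roll(psi[:, 1], -1)     # coin 1 steps the other way
        psi = stepped
    rho = psi.conj().T @ psi                       # trace out position: 2x2 coin matrix
    evals = np.linalg.eigvalsh(rho).clip(1e-12, 1.0)
    return float(-(evals * np.log2(evals)).sum())

entropy = coin_entropy_on_cycle()
print(entropy)    # between 0 (product state) and 1 bit (maximal entanglement)
```

For higher-dimensional coins (Grover or DFT coins, as compared in the abstract), only the coin matrix and the shift rule change.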
Robust regularized singular value decomposition with application to mortality data
Zhang, Lingsong
2013-09-01
We develop a robust regularized singular value decomposition (RobRSVD) method for analyzing two-way functional data. The research is motivated by the application of modeling human mortality as a smooth two-way function of age group and year. The RobRSVD is formulated as a penalized loss minimization problem where a robust loss function is used to measure the reconstruction error of a low-rank matrix approximation of the data, and an appropriately defined two-way roughness penalty function is used to ensure smoothness along each of the two functional domains. By viewing the minimization problem as two conditional regularized robust regressions, we develop a fast iterative reweighted least squares algorithm to implement the method. Our implementation naturally incorporates missing values. Furthermore, our formulation allows rigorous derivation of leave-one-row/column-out cross-validation and generalized cross-validation criteria, which enable computationally efficient data-driven penalty parameter selection. The advantages of the new robust method over nonrobust ones are shown via extensive simulation studies and the mortality rate application. © Institute of Mathematical Statistics, 2013.
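The inner loop of such a method can be illustrated with a stripped-down iteratively reweighted least squares fit of a rank-1 approximation; the Huber weights and the omission of the roughness penalties are simplifications, so this is a sketch of the idea rather than the RobRSVD algorithm itself:

```python
import numpy as np

def robust_rank1(X, delta=1.0, n_iter=50):
    """IRLS fit of a rank-1 approximation X ~ u v^T with Huber weights that
    downweight outlying cells (roughness penalties of RobRSVD omitted)."""
    u = np.ones(X.shape[0])
    v = np.ones(X.shape[1])
    for _ in range(n_iter):
        r = X - np.outer(u, v)                                     # current residuals
        w = np.minimum(1.0, delta / np.maximum(np.abs(r), 1e-12))  # Huber weights
        u = (w * X) @ v / ((w * v**2).sum(axis=1) + 1e-12)         # weighted LS update of u
        v = (w * X).T @ u / ((w.T * u**2).sum(axis=1) + 1e-12)     # weighted LS update of v
    return u, v

a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
b_vec = np.array([2.0, 1.0, 3.0, 2.0])
X_clean = np.outer(a, b_vec)             # exactly rank-1 data
X = X_clean.copy()
X[0, 0] += 50.0                          # a single gross outlier
u, v = robust_rank1(X)
err = np.abs(np.outer(u, v) - X_clean)
err[0, 0] = 0.0                          # ignore the corrupted cell itself
print(err.max())                         # clean cells are recovered closely
```

A plain SVD would spread the outlier across the whole fit; the reweighting confines its influence to a single downweighted cell.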
Regularized quasinormal modes for plasmonic resonators and open cavities
Kamandar Dezfouli, Mohsen; Hughes, Stephen
2018-03-01
Optical mode theory and analysis of open cavities and plasmonic particles is an essential component of optical resonator physics, offering considerable insight and efficiency for connecting to classical and quantum optical properties such as the Purcell effect. However, obtaining the dissipative modes in normalized form for arbitrarily shaped open-cavity systems is notoriously difficult, often involving complex spatial integrations, even after performing the necessary full space solutions to Maxwell's equations. The formal solutions are termed quasinormal modes, which are known to diverge in space, and additional techniques are frequently required to obtain more accurate field representations in the far field. In this work, we introduce a finite-difference time-domain technique that can be used to obtain normalized quasinormal modes using a simple dipole-excitation source, and an inverse Green function technique, in real frequency space, without having to perform any spatial integrations. Moreover, we show how these modes are naturally regularized to ensure the correct field decay behavior in the far field, and thus can be used at any position within and outside the resonator. We term these modes "regularized quasinormal modes" and show the reliability and generality of the theory by studying the generalized Purcell factor of dipole emitters near metallic nanoresonators, hybrid devices with metal nanoparticles coupled to dielectric waveguides, as well as coupled cavity-waveguides in photonic crystals slabs. We also directly compare our results with full-dipole simulations of Maxwell's equations without any approximations, and show excellent agreement.
International Nuclear Information System (INIS)
Yamaki, M.; Hoki, K.; Kono, H.; Fujimura, Y.
2008-01-01
Rotational mechanisms of a chiral molecular motor driven by femtosecond laser pulses were investigated on the basis of the results of a quantum control simulation. A chiral molecule, (R)-2-methyl-cyclopenta-2,4-dienecarboaldehyde, was treated as a molecular motor within a one-dimensional model. It was assumed that the motor is fixed on a surface and driven in the low-temperature limit. Electric fields of femtosecond laser pulses driving both regular rotation of the molecular motor, with positive angular momentum, and reverse rotation, with negative angular momentum, were designed by using a global control method. The mechanism of the regular rotation is similar to that obtained by a conventional pump-dump pulse method: the direction of rotation is the same as that of the initial wave-packet propagation on the potential surface of the first singlet (nπ*) excited state S1. A new control mechanism has been proposed for the reverse rotation, which cannot be driven by a simple pump-dump pulse method. In this mechanism, a coherent Stokes pulse creates a wave packet localized on the right-hand side of the ground-state potential surface. The wave packet has a negative angular momentum to drive reverse rotation at an early time
Directory of Open Access Journals (Sweden)
Appleby JohnAD
2010-01-01
Full Text Available We consider the rate of convergence to equilibrium of Volterra integrodifferential equations with infinite memory. We show that if the kernel of the Volterra operator is regularly varying at infinity, and the initial history is regularly varying at minus infinity, then the rate of convergence to the equilibrium is regularly varying at infinity, and the exact pointwise rate of convergence can be determined in terms of the rate of decay of the kernel and the rate of growth of the initial history. The result is established both for a linear Volterra integrodifferential equation and for the delay logistic equation from population biology.
GLOBAL OPTIMIZATION METHODS FOR GRAVITATIONAL LENS SYSTEMS WITH REGULARIZED SOURCES
International Nuclear Information System (INIS)
Rogers, Adam; Fiege, Jason D.
2012-01-01
Several approaches exist to model gravitational lens systems. In this study, we apply global optimization methods to find the optimal set of lens parameters using a genetic algorithm. We treat the full optimization procedure as a two-step process: an analytical description of the source plane intensity distribution is used to find an initial approximation to the optimal lens parameters; the second stage of the optimization uses a pixelated source plane with the semilinear method to determine an optimal source. Regularization is handled by means of an iterative method and the generalized cross validation (GCV) and unbiased predictive risk estimator (UPRE) functions that are commonly used in standard image deconvolution problems. This approach simultaneously estimates the optimal regularization parameter and the number of degrees of freedom in the source. Using the GCV and UPRE functions, we are able to justify an estimation of the number of source degrees of freedom found in previous work. We test our approach by applying our code to a subset of the lens systems included in the SLACS survey.
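The GCV criterion used for regularization parameter selection can be sketched for the standard Tikhonov case; the lens-modeling specifics are omitted, and the operator and data below are synthetic:

```python
import numpy as np

def gcv(A, b, lam):
    """Generalized cross validation score for Tikhonov regularization,
    x_lam = argmin ||Ax - b||^2 + lam * ||x||^2."""
    n = A.shape[0]
    H = A @ np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T)  # influence matrix
    r = b - H @ b                          # regularized residual
    return n * (r @ r) / (n - np.trace(H)) ** 2

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
b = A @ rng.standard_normal(20) + 0.1 * rng.standard_normal(50)
lams = np.logspace(-4, 2, 25)
scores = [gcv(A, b, lam) for lam in lams]
best = lams[int(np.argmin(scores))]
print(best)                                # data-driven regularization parameter
```

The trace of the influence matrix plays the role of the effective number of degrees of freedom, which is how the abstract's estimate of source degrees of freedom arises.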
A general framework for regularized, similarity-based image restoration.
Kheradmand, Amin; Milanfar, Peyman
2014-12-01
Any image can be represented as a function defined on a weighted graph, in which the underlying structure of the image is encoded in kernel similarity and associated Laplacian matrices. In this paper, we develop an iterative graph-based framework for image restoration based on a new definition of the normalized graph Laplacian. We propose a cost function, which consists of a new data fidelity term and a regularization term derived from the specific definition of the normalized graph Laplacian. The normalizing coefficients used in the definition of the Laplacian and associated regularization term are obtained using fast symmetry-preserving matrix balancing. This results in some desired spectral properties for the normalized Laplacian, such as being symmetric, positive semidefinite, and returning the zero vector when applied to a constant image. Our algorithm comprises outer and inner iterations, where in each outer iteration the similarity weights are recomputed using the previous estimate and the updated objective function is minimized using inner conjugate gradient iterations. This procedure improves the performance of the algorithm for image deblurring, where we do not have access to a good initial estimate of the underlying image. In addition, the specific form of the cost function allows us to carry out a spectral analysis of the solutions of the corresponding linear equations. Moreover, the proposed approach is general in the sense that we have shown its effectiveness for different restoration problems, including deblurring, denoising, and sharpening. Experimental results verify the effectiveness of the proposed algorithm on both synthetic and real examples.
Bias correction for magnetic resonance images via joint entropy regularization.
Wang, Shanshan; Xia, Yong; Dong, Pei; Luo, Jianhua; Huang, Qiu; Feng, Dagan; Li, Yuanxiang
2014-01-01
Due to imperfections of the radio frequency (RF) coil or object-dependent electrodynamic interactions, magnetic resonance (MR) images often suffer from a smooth and biologically meaningless bias field, which causes severe problems for subsequent processing and quantitative analysis. To effectively restore the original signal, this paper simultaneously exploits the spatial and gradient features of the corrupted MR images for bias correction via joint entropy regularization. With both isotropic and anisotropic total variation (TV) considered, two nonparametric bias correction algorithms are proposed, namely IsoTVBiasC and AniTVBiasC. These two methods have been applied to simulated images under various noise levels and bias field corruption and also tested on real MR data. The test results show that the proposed methods effectively remove the bias field and perform comparably to state-of-the-art methods.
Regularized multivariate regression models with skew-t error distributions
Chen, Lianfu
2014-06-01
We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both the regression coefficient and inverse scale matrices simultaneously. The sparsity is introduced through penalizing the negative log-likelihood by adding L1-penalties on the entries of the two matrices. Taking advantage of the hierarchical representation of skew-t distributions, and using the expectation conditional maximization (ECM) algorithm, we reduce the problem to penalized normal likelihood and develop a procedure to minimize the ensuing objective function. Using a simulation study the performance of the method is assessed, and the methodology is illustrated using a real data set with a 24-dimensional response vector. © 2014 Elsevier B.V.
Convergence and fluctuations of Regularized Tyler estimators
Kammoun, Abla; Couillet, Romain; Pascal, Frederic; Alouini, Mohamed-Slim
2015-01-01
This article studies the behavior of regularized Tyler estimators (RTEs) of scatter matrices. The key advantages of these estimators are twofold. First, they guarantee by construction a good conditioning of the estimate and, second, being derivatives of robust Tyler estimators, they inherit their robustness properties, notably their resilience to the presence of outliers. Nevertheless, one major problem posed by the use of RTEs in practice is the setting of the regularization parameter p. While a high value of p is likely to push all the eigenvalues away from zero, it comes at the cost of a larger bias with respect to the population covariance matrix. A deep understanding of the statistics of RTEs is essential for coming up with appropriate choices of the regularization parameter. This is not an easy task and might be out of reach, unless one considers asymptotic regimes wherein the number of observations n and/or their size N increase together. First asymptotic results have recently been obtained under the assumption that N and n are large and commensurable. Interestingly, no results exist concerning the regime of n going to infinity with N fixed, even though the investigation of this assumption has usually predated the analysis of the more difficult case of N and n large. This motivates our work. In particular, we prove in the present paper that the RTEs converge to a deterministic matrix when n → ∞ with N fixed, which is expressed as a function of the theoretical covariance matrix. We also derive the fluctuations of the RTEs around this deterministic matrix and establish that these fluctuations converge in distribution to a multivariate Gaussian distribution with zero mean and a covariance depending on the population covariance and the parameter p.
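The fixed-point iteration that defines a regularized Tyler estimator can be sketched as follows; the shrinkage form and trace normalization follow the common formulation in the literature and may differ in detail from the estimator analyzed here:

```python
import numpy as np

def regularized_tyler(X, rho, n_iter=100):
    """Fixed-point iteration for a regularized Tyler estimator of scatter.
    X holds n observations of dimension N in its rows; rho in (0, 1]
    shrinks the estimate towards the identity matrix."""
    n, N = X.shape
    C = np.eye(N)
    for _ in range(n_iter):
        Cinv = np.linalg.inv(C)
        q = np.einsum('ij,jk,ik->i', X, Cinv, X)   # quadratic forms x_i^T C^-1 x_i
        S = (X.T / q) @ X / n                      # Tyler-weighted sample scatter
        C = (1.0 - rho) * N * S + rho * np.eye(N)  # shrinkage towards the identity
        C *= N / np.trace(C)                       # fix the trace normalization
    return C

rng = np.random.default_rng(3)
X = rng.standard_t(df=3, size=(200, 5))            # heavy-tailed observations
C = regularized_tyler(X, rho=0.3)
print(np.linalg.eigvalsh(C))                       # well-conditioned, positive spectrum
```

The per-observation weights 1/q make the estimate insensitive to the radial scale of each observation, which is the source of the robustness to outliers mentioned above.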
2014-04-01
Figure 7b: participant performing the simulation while wearing motion-capture sensors. The team traveled to Innovative Sports on 03/03/14 to view... The team is now collaborating with Dr. Michael Zinn at the University of Wisconsin-Madison, who is an expert in haptic and virtual reality systems. A choice was made to purchase one Force Dimension Omega Series haptic device (Figure 6a) as well as two GeoMagic Touch devices (Figure 6b) that will...
The use of regularization in inferential measurements
International Nuclear Information System (INIS)
Hines, J. Wesley; Gribok, Andrei V.; Attieh, Ibrahim; Uhrig, Robert E.
1999-01-01
Inferential sensing is the prediction of a plant variable through the use of correlated plant variables. A correct prediction of the variable can be used to monitor sensors for drift or other failures, making periodic instrument calibrations unnecessary. This move from periodic to condition-based maintenance can reduce costs and increase the reliability of the instrument. Having accurate, reliable measurements is important for signals that may impact safety or profitability. This paper investigates how collinearity adversely affects inferential sensing by making the results inconsistent and unrepeatable, and presents regularization as a potential solution (author)
Regularization ambiguities in loop quantum gravity
International Nuclear Information System (INIS)
Perez, Alejandro
2006-01-01
One of the main achievements of loop quantum gravity is the consistent quantization of the analog of the Wheeler-DeWitt equation, which is free of ultraviolet divergences. However, ambiguities associated with the intermediate regularization procedure lead to an apparently infinite set of possible theories. The absence of a UV problem, i.e. the existence of well-behaved regularizations of the constraints, is intimately linked with the ambiguities arising in the quantum theory. Among these is the ambiguity associated with the SU(2) unitary representation used in the diffeomorphism-covariant 'point-splitting' regularization of the nonlinear functionals of the connection. This ambiguity is labeled by a half-integer m and is referred to here as the m ambiguity. The aim of this paper is to investigate the important implications of this ambiguity. We first study 2+1 gravity (and more generally BF theory) quantized in the canonical formulation of loop quantum gravity. Only when the regularization of the quantum constraints is performed in terms of the fundamental representation of the gauge group does one obtain the usual topological quantum field theory as a result. In all other cases unphysical local degrees of freedom arise at the level of the regulated theory that conspire against the existence of the continuum limit. This shows that there is a clear-cut choice in the quantization of the constraints in 2+1 loop quantum gravity. We then analyze the effects of the ambiguity in 3+1 gravity, exhibiting the existence of spurious solutions for higher-representation quantizations of the Hamiltonian constraint. Although the analysis is not complete in 3+1 dimensions, due to the difficulties associated with the definition of the physical inner product, it provides evidence supporting the definition of the quantum dynamics of loop quantum gravity in terms of the fundamental representation of the gauge group as the only consistent possibility. If the gauge group is SO(3) we find
Effort variation regularization in sound field reproduction
DEFF Research Database (Denmark)
Stefanakis, Nick; Jacobsen, Finn; Sarris, Ioannis
2010-01-01
In this paper, active control is used in order to reproduce a given sound field in an extended spatial region. A method is proposed which minimizes the reproduction error at a number of control positions with the reproduction sources holding a certain relation within their complex strengths......), and adaptive wave field synthesis (AWFS), both under free-field conditions and in reverberant rooms. It is shown that effort variation regularization overcomes the problems associated with small spaces and with a low ratio of direct to reverberant energy, improving thus the reproduction accuracy...
New regularities in mass spectra of hadrons
International Nuclear Information System (INIS)
Kajdalov, A.B.
1989-01-01
The properties of bosonic and baryonic Regge trajectories for hadrons composed of light quarks are considered. Experimental data agree with the existence of daughter trajectories consistent with string models. It is pointed out that the parity doubling observed experimentally for baryonic trajectories is not understood in the existing quark models. The mass spectrum of bosons and baryons indicates an approximate supersymmetry in the mass region M>1 GeV. These regularities point to a high degree of symmetry in the dynamics of the confinement region. 8 refs.; 5 figs
Total-variation regularization with bound constraints
International Nuclear Information System (INIS)
Chartrand, Rick; Wohlberg, Brendt
2009-01-01
We present a new algorithm for bound-constrained total-variation (TV) regularization that in comparison with its predecessors is simple, fast, and flexible. We use a splitting approach to decouple TV minimization from enforcing the constraints. Consequently, existing TV solvers can be employed with minimal alteration. This also makes the approach straightforward to generalize to any situation where TV can be applied. We consider deblurring of images with Gaussian or salt-and-pepper noise, as well as Abel inversion of radiographs with Poisson noise. We incorporate previous iterative reweighting algorithms to solve the TV portion.
Bayesian regularization of diffusion tensor images
DEFF Research Database (Denmark)
Frandsen, Jesper; Hobolth, Asger; Østergaard, Leif
2007-01-01
Diffusion tensor imaging (DTI) is a powerful tool in the study of the course of nerve fibre bundles in the human brain. Using DTI, the local fibre orientation in each image voxel can be described by a diffusion tensor which is constructed from local measurements of diffusion coefficients along...... several directions. The measured diffusion coefficients and thereby the diffusion tensors are subject to noise, leading to possibly flawed representations of the three dimensional fibre bundles. In this paper we develop a Bayesian procedure for regularizing the diffusion tensor field, fully utilizing...
Indefinite metric and regularization of electrodynamics
International Nuclear Information System (INIS)
Gaudin, M.
1984-06-01
The invariant regularization of Pauli and Villars in quantum electrodynamics can be considered as deriving from a local and causal Lagrangian theory for spin-1/2 bosons, by introducing an indefinite metric and a condition on the allowed states similar to the Lorentz condition. The consequence is the asymptotic freedom of the photon propagator. We present a calculation of the effective charge to fourth order in the coupling as a function of the auxiliary masses, the theory avoiding all mass divergences to this order [fr
Strategies for regular segmented reductions on GPU
DEFF Research Database (Denmark)
Larsen, Rasmus Wriedt; Henriksen, Troels
2017-01-01
We present and evaluate an implementation technique for regular segmented reductions on GPUs. Existing techniques tend to be either consistent in performance but relatively inefficient in absolute terms, or optimised for specific workloads and thereby exhibiting bad performance for certain input...... is in the context of the Futhark compiler, the implementation technique is applicable to any library or language that has a need for segmented reductions. We evaluate the technique on four microbenchmarks, two of which we also compare to implementations in the CUB library for GPU programming, as well as on two...
Decentralized formation of random regular graphs for robust multi-agent networks
Yazicioglu, A. Yasin
2014-12-15
Multi-agent networks are often modeled via interaction graphs, where the nodes represent the agents and the edges denote direct interactions between the corresponding agents. Interaction graphs have a significant impact on the robustness of networked systems. One family of robust graphs is the random regular graphs. In this paper, we present a locally applicable reconfiguration scheme for building random regular graphs through self-organization. For any connected initial graph, the proposed scheme maintains connectivity and the average degree while minimizing the degree differences and randomizing the links. As such, if the average degree of the initial graph is an integer, then connected regular graphs are realized uniformly at random as time goes to infinity.
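The degree-preserving link randomization at the heart of such schemes can be illustrated with the classic double edge swap; this centralized sketch preserves the degree sequence but, unlike the proposed decentralized scheme, does not guarantee that connectivity is maintained:

```python
import random

def double_edge_swap(edges, n_swaps=2000, seed=0):
    """Degree-preserving randomization: repeatedly pick two edges (a, b) and
    (c, d) and rewire them to (a, d) and (c, b), skipping any swap that would
    create a self-loop or a duplicate edge. Degrees never change, so a
    regular graph stays regular."""
    rng = random.Random(seed)
    edge_set = {frozenset(e) for e in edges}
    for _ in range(n_swaps):
        e1, e2 = rng.sample(list(edge_set), 2)
        a, b = tuple(e1)
        c, d = tuple(e2)
        if len({a, b, c, d}) < 4:
            continue                               # swap would create a self-loop
        if frozenset((a, d)) in edge_set or frozenset((c, b)) in edge_set:
            continue                               # swap would create a multi-edge
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {frozenset((a, d)), frozenset((c, b))}
    return edge_set

# 3-regular graph on 8 nodes: an 8-cycle plus the four "diameter" chords
ring = [(i, (i + 1) % 8) for i in range(8)] + [(i, i + 4) for i in range(4)]
randomized = double_edge_swap(ring)
degrees = {v: sum(v in e for e in randomized) for v in range(8)}
print(degrees)                                     # every node still has degree 3
```

The paper's contribution is to realize this kind of randomization with only local information, while also equalizing degrees and preserving connectivity.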
Budianto; Lun, Daniel P K
2015-12-01
Conventional fringe projection profilometry methods often have difficulty in reconstructing the 3D model of objects when the fringe images have the so-called highlight regions due to strong illumination from nearby light sources. Within a highlight region, the fringe pattern is often overwhelmed by the strong reflected light. Thus, the 3D information of the object, which is originally embedded in the fringe pattern, can no longer be retrieved. In this paper, a novel inpainting algorithm is proposed to restore the fringe images in the presence of highlights. The proposed method first detects the highlight regions based on a Gaussian mixture model. Then, a geometric sketch of the missing fringes is made and used as the initial guess of an iterative regularization procedure for regenerating the missing fringes. The simulation and experimental results show that the proposed algorithm can accurately reconstruct the 3D model of objects even when their fringe images have large highlight regions. It significantly outperforms the traditional approaches in both quantitative and qualitative evaluations.
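The highlight-detection step relies on a Gaussian mixture model of pixel intensities; as a self-contained sketch of that ingredient (plain 1-D EM with two components — the paper's actual model and features may differ):

```python
import numpy as np

def em_two_gaussians(x, n_iter=50):
    """Fit a 1-D two-component Gaussian mixture by EM.

    Applied to image intensities, highlight pixels are those more
    likely under the brighter component.
    """
    mu = np.array([x.min(), x.max()], dtype=float)   # spread-out init
    sig = np.array([x.std() + 1e-6] * 2)
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each sample.
        d = (x[:, None] - mu) / sig
        r = pi * np.exp(-0.5 * d ** 2) / sig
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, standard deviations.
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return pi, mu, sig

# A highlight mask would assign each pixel to the brighter component,
# e.g. via the final responsibilities and np.argmax(mu).
```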
International Nuclear Information System (INIS)
Wang Juan; Liu Li-Na; Dong En-Zeng; Wang Li
2013-01-01
To deeply understand the emergence of cooperation in natural, social, and economic systems, we present an improved fitness evaluation mechanism with memory in the spatial prisoner's dilemma game on regular lattices. In our model, the individual fitness is determined not only by the payoff in the current game round, but also by the payoffs in previous rounds. A tunable parameter, termed the memory strength (μ) and lying between 0 and 1, is introduced into the model to regulate the ratio of the payoffs of current and previous game rounds in the individual fitness calculation. When μ = 0, our model reduces to the standard prisoner's dilemma game, while μ = 1 represents the case in which the payoff is totally determined by the initial strategies and is thus far from realistic. Extensive numerical simulations indicate that the memory effect can substantially promote the evolution of cooperation. For μ < 1, the stronger the memory effect, the higher the cooperation level; μ = 1 leads to a pathological state of cooperation, although it can partially enhance cooperation at very large values of the temptation parameter. The current results are of great significance for accounting for the role of the memory effect in the evolution of cooperation among selfish players. (general)
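One natural reading of the mechanism is a fitness that interpolates, with weight μ, between the current payoff and the remembered past payoffs; the exact weighting below is an assumption — the abstract only fixes the two limits μ = 0 (memoryless) and μ = 1 (past-only):

```python
def fitness(payoff_history, mu):
    """Memory-weighted fitness (sketch, assumed form).

    Weight (1 - mu) on the current round's payoff and mu on the mean
    payoff over remembered previous rounds.  mu = 0 recovers the
    standard memoryless game; mu = 1 ignores the current round.
    """
    *previous, current = payoff_history
    past = sum(previous) / len(previous) if previous else 0.0
    return (1.0 - mu) * current + mu * past
```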
Watanabe, Tomoya; Isoda, Haruo; Takehara, Yasuo; Terada, Masaki; Naito, Takehiro; Kosugi, Takafumi; Onishi, Yuki; Tanoi, Chiharu; Izumi, Takashi
2018-05-01
We performed computational fluid dynamics (CFD) analyses for patients with and without paraclinoid internal carotid artery (ICA) aneurysms to evaluate the distribution of vascular biomarkers at the aneurysm initiation sites of the paraclinoid ICA. This study included 35 patients who were followed up for aneurysms using 3D time-of-flight (TOF) magnetic resonance angiography (MRA) and 3D cine phase-contrast MR imaging. Fifteen affected ICAs were included in group A and the 15 unaffected contralateral ICAs in group B. Thirty-three out of 40 paraclinoid ICAs free of aneurysms and arteriosclerotic lesions were included in group C. We deleted the aneurysms in group A based on the 3D TOF MRA dataset. We performed CFD based on the MR data sets and obtained wall shear stress (WSS), its derivatives, and streamlines. We qualitatively evaluated their distributions at and near the intracranial aneurysm initiation site in the three groups. We also calculated and compared the normalized highest (nh-) WSS and nh-spatial WSS gradient (SWSSG) around the paraclinoid ICA among the three groups. High WSS and SWSSG distributions were observed at and near the aneurysm initiation site in group A. High WSS and SWSSG were also observed at similar locations in groups B and C. However, nh-WSS and nh-SWSSG were significantly higher in group A than in group C, and nh-SWSSG was significantly higher in group A than in group B. Our findings indicate that nh-WSS and nh-SWSSG are good biomarkers for aneurysm initiation in the paraclinoid ICA.
Harmonic R-matrices for scattering amplitudes and spectral regularization
Energy Technology Data Exchange (ETDEWEB)
Ferro, Livia; Plefka, Jan [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Lukowski, Tomasz [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Humboldt-Universitaet, Berlin (Germany). Inst. fuer Mathematik; Humboldt-Univ. Berlin (Germany). IRIS Adlershof; Meneghelli, Carlo [Hamburg Univ. (Germany). Fachbereich 11 - Mathematik; Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Theory Group; Staudacher, Matthias [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Mathematik; Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Max-Planck-Institut fuer Gravitationsphysik (Albert-Einstein-Institut), Potsdam (Germany)
2012-12-15
Planar N=4 super Yang-Mills appears to be integrable. While this allows one to find the theory's exact spectrum, integrability has hitherto been of no direct use for scattering amplitudes. To remedy this, we deform all scattering amplitudes by a spectral parameter. The deformed tree-level four-point function turns out to be essentially the one-loop R-matrix of the integrable N=4 spin chain satisfying the Yang-Baxter equation. Deformed on-shell three-point functions yield novel three-leg R-matrices satisfying bootstrap equations. Finally, we supply initial evidence that the spectral parameter might find its use as a novel symmetry-respecting regulator replacing dimensional regularization. Its physical meaning is a local deformation of particle helicity, a fact which might be useful for a much larger class of non-integrable four-dimensional field theories.
A Priori Regularity of Parabolic Partial Differential Equations
Berkemeier, Francisco
2018-05-13
In this thesis, we consider parabolic partial differential equations such as the heat equation, the Fokker-Planck equation, and the porous media equation. Our aim is to develop methods that provide a priori estimates for solutions with singular initial data. These estimates are obtained by understanding the time decay of norms of solutions. First, we derive regularity results for the heat equation by estimating the decay of Lebesgue norms. Then, we apply similar methods to the Fokker-Planck equation with suitable assumptions on the advection and diffusion. Finally, we conclude by extending our techniques to the porous media equation. The sharpness of our results is confirmed by examining known solutions of these equations. The main contribution of this thesis is the use of functional inequalities to express decay of norms as differential inequalities. These are then combined with ODE methods to deduce estimates for the norms of solutions and their derivatives.
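The kind of time-decay estimate the thesis builds on can be stated concretely for the heat equation on $\mathbb{R}^n$: the standard $L^p$–$L^q$ smoothing bound (a classical result quoted here for orientation, not the thesis's own sharp statement):

```latex
\| u(t) \|_{L^q(\mathbb{R}^n)} \;\le\; C \, t^{-\frac{n}{2}\left(\frac{1}{p} - \frac{1}{q}\right)} \| u_0 \|_{L^p(\mathbb{R}^n)},
\qquad 1 \le p \le q \le \infty .
```

Differentiating such norm bounds in time is what produces the differential inequalities mentioned in the abstract, which are then closed with ODE comparison arguments.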
Primordial Regular Black Holes: Thermodynamics and Dark Matter
Directory of Open Access Journals (Sweden)
José Antonio de Freitas Pacheco
2018-05-01
Full Text Available The possibility that dark matter particles could be constituted by extreme regular primordial black holes is discussed. Extreme black holes have zero surface temperature and are not subject to the Hawking evaporation process. Assuming that the common horizon radius of these black holes is fixed by the minimum distance derived from the Riemann invariant computed in loop quantum gravity, the masses of these non-singular stable black holes are of the order of the Planck mass. However, if they are formed just after inflation, during reheating, their initial masses are about six orders of magnitude higher. After a short period of growth by the accretion of relativistic matter, they evaporate until reaching the extreme solution. Only a fraction of 3.8 × 10⁻²² of the relativistic matter is required to be converted into primordial black holes (PBHs) in order to explain the present abundance of dark matter particles.
National Aeronautics and Space Administration — The Advanced Manufacturing Technologies (AMT) Project supports multiple activities within the Administration's National Manufacturing Initiative. A key component of...
Supporting Regularized Logistic Regression Privately and Efficiently
Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei
2016-01-01
As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data on human subjects that are subject to strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantees, efficiency, and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc. PMID:27271738
Supporting Regularized Logistic Regression Privately and Efficiently.
Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei
2016-01-01
As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data on human subjects that are subject to strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantees, efficiency, and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc.
Multiple graph regularized nonnegative matrix factorization
Wang, Jim Jing-Yan
2013-10-01
Non-negative matrix factorization (NMF) has been widely used as a component-based data representation method. To overcome the disadvantage that NMF fails to consider the manifold structure of a data set, graph regularized NMF (GrNMF) was proposed by Cai et al., who constructed an affinity graph and searched for a matrix factorization that respects the graph structure. Selecting a graph model and its corresponding parameters is critical for this strategy. This process is usually carried out by cross-validation or discrete grid search, which are time consuming and prone to overfitting. In this paper, we propose a variant of GrNMF, called MultiGrNMF, in which the intrinsic manifold is approximated by a linear combination of several graphs with different models and parameters, inspired by ensemble manifold regularization. The factorization matrices and the linear combination coefficients of the graphs are determined simultaneously within a unified objective function. They are alternately optimized in an iterative algorithm, resulting in a novel data representation algorithm. Extensive experiments on a protein subcellular localization task and an Alzheimer's disease diagnosis task demonstrate the effectiveness of the proposed algorithm. © 2013 Elsevier Ltd. All rights reserved.
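The single-graph building block that MultiGrNMF extends can be sketched with the standard multiplicative updates for GrNMF (Cai et al.); this toy NumPy version minimizes ||X − UVᵀ||² + λ·tr(VᵀLV) with L = D − W, and is not the paper's multi-graph algorithm:

```python
import numpy as np

def grnmf(X, W, k, lam=0.1, n_iter=200, seed=0):
    """Single-graph regularized NMF via multiplicative updates.

    X : (m, n) nonnegative data matrix
    W : (n, n) affinity graph over the n samples
    Returns nonnegative factors U (m, k) and V (n, k).
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, k))
    V = rng.random((n, k))
    D = np.diag(W.sum(axis=1))   # degree matrix of the affinity graph
    eps = 1e-9                   # guards against division by zero
    for _ in range(n_iter):
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * W @ V) / (V @ (U.T @ U) + lam * D @ V + eps)
    return U, V
```

MultiGrNMF would replace the single W by a learned convex combination of several candidate graphs, optimized jointly with U and V.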
Supporting Regularized Logistic Regression Privately and Efficiently.
Directory of Open Access Journals (Sweden)
Wenfa Li
As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data on human subjects that are subject to strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantees, efficiency, and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc.
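The statistical core being protected is ordinary L2-regularized logistic regression; a minimal NumPy sketch of that model (the paper's contribution is the cryptographic protocol wrapped around it, which is not shown here):

```python
import numpy as np

def fit_l2_logreg(X, y, lam=0.1, lr=0.1, n_iter=500):
    """L2-regularized logistic regression by gradient descent.

    Minimizes  mean cross-entropy + (lam / 2) * ||w||^2
    for labels y in {0, 1} and feature matrix X.
    """
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probabilities
        grad = X.T @ (p - y) / len(y) + lam * w   # regularized gradient
        w -= lr * grad
    return w
```

In the multi-institution setting, each site would compute such gradients on its own data and the aggregation would happen under encryption.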
Multiview Hessian regularization for image annotation.
Liu, Weifeng; Tao, Dacheng
2013-07-01
The rapid development of computer hardware and Internet technology makes large-scale, data-dependent models computationally tractable and opens a bright avenue for annotating images through innovative machine learning algorithms. Semi-supervised learning (SSL) has therefore received intensive attention in recent years and has been successfully deployed in image annotation. One representative work in SSL is Laplacian regularization (LR), which smooths the conditional distribution for classification along the manifold encoded in the graph Laplacian. However, it has been observed that LR biases the classification function toward a constant function, which possibly results in poor generalization. In addition, LR was developed to handle uniformly distributed data (or single-view data), although instances or objects, such as images and videos, are usually represented by multiview features, such as color, shape, and texture. In this paper, we present multiview Hessian regularization (mHR) to address these two problems in LR-based image annotation. In particular, mHR optimally combines multiple HR terms, each obtained from a particular view of the instances, and steers the classification function so that it varies linearly along the data manifold. We apply mHR to kernel least squares and support vector machines as two examples for image annotation. Extensive experiments on the PASCAL VOC'07 dataset validate the effectiveness of mHR by comparing it with baseline algorithms, including LR and HR.
EIT image reconstruction with four dimensional regularization.
Dai, Tao; Soleimani, Manuchehr; Adler, Andy
2008-09-01
Electrical impedance tomography (EIT) reconstructs internal impedance images of the body from electrical measurements on the body surface. The temporal resolution of EIT data can be very high, although the spatial resolution of the images is relatively low. Most EIT reconstruction algorithms calculate images from data frames independently, although the data are actually highly correlated, especially in high-speed EIT systems. This paper proposes a 4-D EIT image reconstruction for functional EIT. The new approach directly uses prior models of the temporal correlations among images and the 3-D spatial correlations among image elements. A fast algorithm is also developed to reconstruct the regularized images. Image reconstruction is posed in terms of an augmented image and measurement vector, which are concatenated from a specific number of previous and future frames. The reconstruction is then based on an augmented regularization matrix which reflects the a priori constraints on the temporal and 3-D spatial correlations of image elements. A temporal factor reflecting the relative strength of the image correlation is objectively calculated from measurement data. Results show that image reconstruction models which account for inter-element correlations, in both space and time, show improved resolution and noise performance in comparison to simpler image models.
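The augmented-vector idea can be sketched as joint Tikhonov reconstruction of several frames with a temporal first-difference penalty coupling them; this toy version uses a made-up prior and weighting, not the paper's exact regularization matrix:

```python
import numpy as np

def tikhonov_frames(J, Y, lam=0.1, gamma=0.8):
    """Jointly reconstruct a sequence of frames with a temporal prior.

    J : (n_meas, n_el) linearized forward operator (same for all frames)
    Y : (n_meas, T) one measurement frame per column
    Solves the stacked normal equations with a spatial identity prior
    (weight lam) and temporal first differences across frames (gamma).
    """
    n_meas, n_el = J.shape
    T = Y.shape[1]
    # Block-diagonal forward operator acting on all frames at once.
    A = np.kron(np.eye(T), J)
    # Spatial penalty per frame + temporal differences between frames.
    R_s = np.eye(T * n_el)
    D = np.eye(T, k=0)[:-1] - np.eye(T, k=1)[:-1]   # (T-1, T) differences
    R_t = np.kron(D, np.eye(n_el))
    lhs = A.T @ A + lam ** 2 * (R_s.T @ R_s) + gamma ** 2 * (R_t.T @ R_t)
    x = np.linalg.solve(lhs, A.T @ Y.T.ravel())
    return x.reshape(T, n_el).T          # (n_el, T), one column per frame
```

A constant-in-time scene incurs no temporal penalty, so the coupling only suppresses frame-to-frame noise, which is the qualitative effect reported in the paper.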
Dukmak, Samir
2010-01-01
Samir Dukmak is an assistant professor in the Department of Special Education in the Faculty of Education at the United Arab Emirates University. The research reported in this article investigated the frequency, types of and reasons for student-initiated interactions in both regular and special education classrooms in the United Arab Emirates…
2016-12-01
Award Number: W81XWH-13-1-0080. TITLE: "Psycho-Motor and Error Enabled Simulations: Modeling Vulnerable Skills in the Pre-Mastery Phase - Medical." … B were set between 10% and 90% of the maximum closed-loop force handled by the device (14.5 N/mm), or between 1.45 and 13.05 N/mm. The effective … include administration of vasoactive medications, rapid resuscitation, total parenteral nutrition, and delivery of caustic medications. …
Energy Technology Data Exchange (ETDEWEB)
Mohsen, O. [Northern Illinois U.; Gonin, I. [Fermilab; Kephart, R. [Fermilab; Khabiboulline, T. [Fermilab; Piot, P. [Northern Illinois U.; Solyak, N. [Fermilab; Thangaraj, J. C. [Fermilab; Yakovlev, V. [Fermilab
2018-01-05
High-power electron beams are sought-after tools in support of a wide array of societal applications. This paper investigates the production of high-power electron beams by combining a high-current field-emission electron source with a superconducting radio-frequency (SRF) cavity. We carry out beam-dynamics simulations that demonstrate the viability of the scheme to form a ~300 kW average-power electron beam using a 1+1/2-cell SRF gun.
DEFF Research Database (Denmark)
Boy, M.; Sogachev, Andrey; Lauros, J.
2010-01-01
Chemistry in the atmospheric boundary layer (ABL) is controlled by complex processes of surface fluxes, flow, turbulent transport, and chemical reactions. We present a new model SOSA (model to simulate the concentration of organic vapours and sulphuric acid) and attempt to reconstruct the emissions...... in the surface layer we were able to get a reasonable description of turbulence and other quantities through the ABL. As a first application of the model, we present vertical profiles of organic compounds and discuss their relation to newly formed particles....
DEFF Research Database (Denmark)
Boy, M.; Sogachev, Andrey; Lauros, J.
2011-01-01
Chemistry in the atmospheric boundary layer (ABL) is controlled by complex processes of surface fluxes, flow, turbulent transport, and chemical reactions. We present a new model SOSA (model to simulate the concentration of organic vapours and sulphuric acid) and attempt to reconstruct the emissions...... in the surface layer we were able to get a reasonable description of turbulence and other quantities through the ABL. As a first application of the model, we present vertical profiles of organic compounds and discuss their relation to newly formed particles....
Czech Academy of Sciences Publication Activity Database
Janeček, Ivan; Stachoň, M.; Gadéa, F. X.; Kalus, R.
2017-01-01
Roč. 19, č. 37 (2017), s. 25423-25440 ISSN 1463-9076 R&D Projects: GA MŠk ED2.1.00/03.0082; GA MŠk(CZ) LO1406 Institutional support: RVO:68145535 Keywords : atomic clusters * molecular physics * computer simulations Subject RIV: CF - Physical ; Theoretical Chemistry OBOR OECD: Atomic, molecular and chemical physics (physics of atoms and molecules including collision, interaction with radiation, magnetic resonances, Mössbauer effect) Impact factor: 4.123, year: 2016 http://pubs.rsc.org/en/content/articlelanding/2014/cp/c7cp03940a#!divAbstract
Accretion onto some well-known regular black holes
International Nuclear Information System (INIS)
Jawad, Abdul; Shahzad, M.U.
2016-01-01
In this work, we discuss accretion onto static, spherically symmetric regular black holes for specific choices of the equation-of-state parameter. The underlying regular black holes are charged regular black holes constructed using the Fermi-Dirac distribution, the logistic distribution, and nonlinear electrodynamics, as well as Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of the radial velocity, the energy density, and the rate of change of the mass for each of the regular black holes. (orig.)
Accretion onto some well-known regular black holes
Energy Technology Data Exchange (ETDEWEB)
Jawad, Abdul; Shahzad, M.U. [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan)
2016-03-15
In this work, we discuss accretion onto static, spherically symmetric regular black holes for specific choices of the equation-of-state parameter. The underlying regular black holes are charged regular black holes constructed using the Fermi-Dirac distribution, the logistic distribution, and nonlinear electrodynamics, as well as Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of the radial velocity, the energy density, and the rate of change of the mass for each of the regular black holes. (orig.)
Accretion onto some well-known regular black holes
Jawad, Abdul; Shahzad, M. Umair
2016-03-01
In this work, we discuss accretion onto static, spherically symmetric regular black holes for specific choices of the equation-of-state parameter. The underlying regular black holes are charged regular black holes constructed using the Fermi-Dirac distribution, the logistic distribution, and nonlinear electrodynamics, as well as Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of the radial velocity, the energy density, and the rate of change of the mass for each of the regular black holes.
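For orientation, in Michel-type spherical accretion onto a Schwarzschild hole (the analysis these works generalize), the critical-point conditions in units $G = c = 1$ read (standard textbook relations, not the authors' modified ones):

```latex
u_c^2 = \frac{M}{2 r_c}, \qquad
c_{s,c}^2 = \frac{u_c^2}{1 - 3 u_c^2},
```

so the equation of state fixes the sound speed $c_{s,c}$ and hence the critical radius $r_c$ and critical speed $u_c$; for the regular black holes above, these expressions are modified by the corresponding metric functions.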
Directory of Open Access Journals (Sweden)
Jinping Tang
2017-01-01
Optical tomography is an emerging and important molecular imaging modality. The aim of optical tomography is to reconstruct the optical properties of human tissues. In this paper, we focus on reconstructing the absorption coefficient based on the radiative transfer equation (RTE). This is an ill-posed parameter identification problem. Regularization methods have been broadly applied to reconstruct the optical coefficients, such as total variation (TV) regularization and L1 regularization. In order to better reconstruct piecewise-constant and sparse coefficient distributions, the TV and L1 norms are combined in the regularization. The forward problem is discretized with the discontinuous Galerkin method in the spatial variables and the finite element method in the angular variables. The minimization problem is solved by a Jacobian-based Levenberg-Marquardt type method equipped with a split Bregman algorithm for the L1 regularization. We use the adjoint method to compute the Jacobian matrix, which dramatically improves the computational efficiency. Comparisons with other image reconstruction methods based on TV and L1 regularization show the validity and efficiency of the proposed method.
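Inside a split Bregman iteration, both the L1 term and (applied to the discrete gradient) the TV term reduce to a closed-form soft-thresholding step; a minimal sketch of that proximal operator:

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding shrink(x, t) = sign(x) * max(|x| - t, 0).

    This is the exact proximal map of t * ||x||_1 and the elementwise
    update used for the auxiliary variables in split Bregman schemes.
    """
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
```

Values within t of zero are zeroed out (promoting sparsity), while larger values are shrunk toward zero by t.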
The Jump Set under Geometric Regularization. Part 1: Basic Technique and First-Order Denoising
Valkonen, Tuomo
2015-01-01
© 2015 Society for Industrial and Applied Mathematics. Let u ∈ BV(Ω) solve the total variation (TV) denoising problem with L²-squared fidelity and data f. Caselles, Chambolle, and Novaga [Multiscale Model. Simul., 6 (2008), pp. 879-894] have shown the containment H^{m-1}(J_u \ J_f) = 0 of the jump set J_u of u in the jump set J_f of f. Their proof unfortunately depends heavily on the co-area formula, as do many results in this area, and as such is not directly extensible to higher-order, curvature-based, and other advanced geometric regularizers, such as total generalized variation and Euler's elastica. These have received increased attention in recent times due to their better practical regularization properties compared to conventional TV or wavelets. We prove analogous jump set containment properties for a general class of regularizers. We do this with novel Lipschitz transformation techniques and do not require the co-area formula. In the present Part 1 we demonstrate the general technique on first-order regularizers, while in Part 2 we will extend it to higher-order regularizers. In particular, we concentrate in this part on TV and, as a novelty, Huber-regularized TV. We also demonstrate that the technique would apply to nonconvex TV models as well as Perona-Malik anisotropic diffusion, if these approaches were well-posed to begin with.
A New Method for Determining Optimal Regularization Parameter in Near-Field Acoustic Holography
Directory of Open Access Journals (Sweden)
Yue Xiao
2018-01-01
The Tikhonov regularization method is effective in stabilizing the reconstruction process of near-field acoustic holography (NAH) based on the equivalent source method (ESM), and the selection of the optimal regularization parameter is a key problem that determines the regularization effect. In this work, a new method for determining the optimal regularization parameter is proposed. The transfer matrix relating the strengths of the equivalent sources to the measured pressures on the hologram surface is augmented by adding a fictitious point source with zero strength. The minimization of the norm of this fictitious point source strength serves as the criterion for choosing the optimal regularization parameter, since the reconstructed value should tend to zero. The original inverse problem of calculating the source strengths is thereby converted into a univariate optimization problem, which is solved by a one-dimensional search technique. Two numerical simulations, with a point-driven simply supported plate and a pulsating sphere, are investigated to validate the performance of the proposed method by comparison with the L-curve method. The results demonstrate that the proposed method can determine the regularization parameter correctly and effectively for reconstruction in NAH.
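The selection criterion can be sketched directly: augment the transfer matrix with the fictitious source's column, solve the Tikhonov normal equations for each candidate parameter, and keep the parameter that drives the reconstructed fictitious strength closest to zero (a grid search stands in for the paper's one-dimensional search):

```python
import numpy as np

def best_lambda(G, g_fict, p, lambdas):
    """Pick the Tikhonov parameter minimizing the reconstructed
    strength of a fictitious zero-strength source.

    G       : (n_meas, n_src) transfer matrix of the equivalent sources
    g_fict  : (n_meas,) transfer column of the added fictitious source
    p       : (n_meas,) measured hologram pressures
    lambdas : candidate regularization parameters
    """
    A = np.column_stack([G, g_fict])          # augmented transfer matrix
    best = None
    for lam in lambdas:
        q = np.linalg.solve(A.conj().T @ A + lam * np.eye(A.shape[1]),
                            A.conj().T @ p)   # Tikhonov normal equations
        score = abs(q[-1])                    # fictitious strength -> 0 ideally
        if best is None or score < best[0]:
            best = (score, lam)
    return best[1]
```

Too little regularization lets the fictitious source absorb noise; too much distorts the solution, so the criterion balances the two.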
Regan, R. Steve; LaFontaine, Jacob H.
2017-10-05
This report documents seven enhancements to the U.S. Geological Survey (USGS) Precipitation-Runoff Modeling System (PRMS) hydrologic simulation code: two time-series input options, two new output options, and three updates of existing capabilities. The enhancements are (1) new dynamic parameter module, (2) new water-use module, (3) new Hydrologic Response Unit (HRU) summary output module, (4) new basin variables summary output module, (5) new stream and lake flow routing module, (6) update to surface-depression storage and flow simulation, and (7) update to the initial-conditions specification. This report relies heavily upon U.S. Geological Survey Techniques and Methods, book 6, chapter B7, which documents PRMS version 4 (PRMS-IV). A brief description of PRMS is included in this report.
Laplacian embedded regression for scalable manifold regularization.
Chen, Lin; Tsang, Ivor W; Xu, Dong
2012-06-01
Semi-supervised learning (SSL), as a powerful tool to learn from a limited number of labeled data and a large number of unlabeled data, has been attracting increasing attention in the machine learning community. In particular, the manifold regularization framework has laid solid theoretical foundations for a large family of SSL algorithms, such as the Laplacian support vector machine (LapSVM) and Laplacian regularized least squares (LapRLS). However, most of these algorithms are limited to small-scale problems due to the high computational cost of the matrix inversion involved in the optimization problem. In this paper, we propose a novel framework called Laplacian embedded regression, obtained by introducing an intermediate decision variable into the manifold regularization framework. By using the ε-insensitive loss, we obtain the Laplacian embedded support vector regression (LapESVR) algorithm, which inherits the sparse solution from SVR. We also derive Laplacian embedded RLS (LapERLS), corresponding to RLS under the proposed framework. Both LapESVR and LapERLS possess a simpler form of a transformed kernel, which is the summation of the original kernel and a graph kernel that captures the manifold structure. The benefits of the transformed kernel are twofold: (1) we can deal with the original kernel matrix and the graph Laplacian matrix in the graph kernel separately, and (2) if the graph Laplacian matrix is sparse, we only need to perform the inverse operation for a sparse matrix, which is much more efficient than for a dense one. Inspired by kernel principal component analysis, we further propose to project the introduced decision variable into a subspace spanned by a few eigenvectors of the graph Laplacian matrix in order to better reflect the data manifold, as well as to accelerate the calculation of the graph kernel, allowing our methods to efficiently and effectively cope with large-scale SSL problems. Extensive experiments on both toy and real
International Nuclear Information System (INIS)
Bieler, Noah S.; Hünenberger, Philippe H.
2014-01-01
In a recent article [Bieler et al., J. Chem. Theory Comput. 10, 3006–3022 (2014)], we introduced a combination of the λ-dynamics (λD) approach for calculating alchemical free-energy differences and of the local-elevation umbrella-sampling (LEUS) memory-based biasing method to enhance the sampling along the alchemical coordinate. The combined scheme, referred to as λ-LEUS, was applied to the perturbation of hydroquinone to benzene in water as a test system, and found to represent an improvement over thermodynamic integration (TI) in terms of sampling efficiency at equivalent accuracy. However, the preoptimization of the biasing potential required in the λ-LEUS method involves “filling up” all the basins in the potential of mean force. This introduces a non-productive pre-sampling time that is system-dependent and generally exceeds the corresponding equilibration time in a TI calculation. In this letter, a remedy to this problem is proposed, termed the slow growth memory guessing (SGMG) approach. Instead of initializing the biasing potential to zero at the start of the preoptimization, an approximate potential of mean force is estimated from a short slow-growth calculation, and its negative is used to construct the initial memory. Considering the same test system as in the preceding article, it is shown that the application of SGMG in λ-LEUS permits a reduction of the preoptimization time by about a factor of four.