Gore, Brian Francis; Hooey, Becky Lee; Haan, Nancy; Socash, Connie; Mahlstedt, Eric; Foyle, David C.
2013-01-01
The Closely Spaced Parallel Operations (CSPO) scenario is a complex human performance model scenario that tested alternate operator roles and responsibilities in a series of off-nominal operations on approach and landing (see Gore, Hooey, Mahlstedt, & Foyle, 2013). The model links together the procedures, equipment, crewstation, and external environment to produce predictions of operator performance in response to Next Generation system designs, like those expected in the National Airspace System's NextGen concepts. The task analysis contained in the present report comes from the task analysis window in the MIDAS software. These tasks link definitions and states for equipment components, environmental features, and operational contexts. The current task analysis culminated in 3300 tasks that included over 1000 Subject Matter Expert (SME)-vetted, re-usable procedural sets for three critical phases of flight: the Descent, Approach, and Land procedural sets (see Gore et al., 2011 for a description of the development of the tasks included in the model; Gore, Hooey, Mahlstedt, & Foyle, 2013 for a description of the model and its results; Hooey, Gore, Mahlstedt, & Foyle, 2013 for a description of the guidelines that were generated from the model's results; and Gore, Hooey, & Foyle, 2012 for a description of the model's implementation and its settings). The rollout, after-landing checks, taxi-to-gate, and arrive-at-gate networks illustrated in Figure 1 were not used in the approach and divert scenarios exercised. The other networks in Figure 1 set up appropriate context settings for the flight deck. The current report presents the model's task decomposition from the highest level and decomposes it to finer-grained levels. The first task completed by the model is to set all of the initial settings for the scenario runs included in the model (network 75 in Figure 1). This initialization process also resets the CAD graphic files contained within MIDAS, as well as the embedded
Portable, parallel, reusable Krylov space codes
Energy Technology Data Exchange (ETDEWEB)
Smith, B.; Gropp, W. [Argonne National Lab., IL (United States)
1994-12-31
Krylov space accelerators are an important component of many algorithms for the iterative solution of linear systems. Each Krylov space method has its own particular advantages and disadvantages; therefore, it is desirable to have a variety of them available, all with an identical, easy-to-use interface. A common complaint application programmers have with available software libraries for the iterative solution of linear systems is that they require the programmer to use the data structures provided by the library; the library is not able to work with the data structures of the application code. Hence, application programmers find themselves constantly recoding the Krylov space algorithms. The Krylov space package (KSP) is a data-structure-neutral implementation of a variety of Krylov space methods, including preconditioned conjugate gradient, GMRES, BiCG-Stab, transpose-free QMR, and CGS. Unlike all other software libraries for linear systems that the authors are aware of, KSP will work with any application code's data structures, in Fortran or C. Due to its data-structure-neutral design, KSP runs unchanged on both sequential and parallel machines. KSP has been tested on workstations, the Intel i860 and Paragon, the Thinking Machines CM-5, and the IBM SP1.
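The data-structure-neutral interface described here can be sketched in a few lines: the solver touches the operator only through a caller-supplied matrix-vector product, so the application keeps its own storage format. The Python sketch below illustrates the interface style only; it is not KSP's actual (Fortran/C) API, and all names are hypothetical.

```python
import numpy as np

def cg(matvec, b, tol=1e-10, maxiter=200):
    """Unpreconditioned conjugate gradient; the operator is visible only
    through `matvec`, so any application-side matrix layout will do."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# The "application" stores its operator as a 1-D stencil, never as a matrix.
n = 50
def laplacian_matvec(x):
    y = 2.0 * x
    y[:-1] -= x[1:]
    y[1:] -= x[:-1]
    return y

b = np.ones(n)
x = cg(laplacian_matvec, b)
assert np.linalg.norm(laplacian_matvec(x) - b) < 1e-8
```

The solver never sees how the Laplacian is stored, which is the essence of the data-structure-neutral design claimed for KSP.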
Close-Spaced High Temperature Knudsen Flow.
1984-06-15
study of discharge processes in Knudsen mode (collisionless) thermionic energy converters. Areas of research involve mechanisms for reducing the... The mechanisms we have chosen to study are: reduction of space charge through a very close inter-electrode gap (less than 10 microns); transport and... (Rasor Associates, Inc.; report AFOSR-TR-84-1070; NSR-22-2)
Principal normal indicatrices of closed space curves
DEFF Research Database (Denmark)
Røgen, Peter
1999-01-01
A theorem due to J. Weiner, which is also proven by B. Solomon, implies that a principal normal indicatrix of a closed space curve with nonvanishing curvature has integrated geodesic curvature zero and contains no subarc with integrated geodesic curvature pi. We prove that the inverse problem alw...
Thermal Interaction of Closely Spaced Persons
DEFF Research Database (Denmark)
Brohus, Henrik; Nielsen, Peter V.; Tøgersen, Michael
2011-01-01
This paper presents results from a pilot study on the thermal interaction of closely spaced persons in a large enclosure. The surface temperature at different densities of persons is evaluated using a high-resolution thermo vision camera in a controlled thermal environment. The corresponding thermal sensation is evaluated using questionnaires for the various densities. The results indicate that it may be acceptable to consider persons standalone, in a thermal sense, disregarding thermal interaction at usual densities in the design of large enclosures.
78 FR 53497 - Commercial Space Transportation Advisory Committee; Closed Session
2013-08-29
... DEPARTMENT OF TRANSPORTATION Federal Aviation Administration Commercial Space Transportation... Commercial Space Transportation Advisory Committee Special Closed Session. SUMMARY: Pursuant to Section 10(a...), notice is hereby given of a special closed session of the Commercial Space Transportation Advisory...
Ion extraction capabilities of closely spaced grids
Rovang, D. C.; Wilbur, P. J.
1982-01-01
The ion extraction capabilities of accelerator systems with small screen hole diameters (less than 2.0 mm) are investigated at net-accelerating voltages of 100, 300, and 500 V. Results show that the impingement-limited perveance is not dramatically affected by reductions in screen hole diameter to 1.0 mm, but impingement-limited performance was found to be dependent on the grid separation distance, the discharge-to-total accelerating voltage ratio, and the net-to-total accelerating voltage ratio. Results obtained using small hole diameters and closely spaced grids indicate a new mode of grid operation where high current density operation can be achieved with a specified net acceleration voltage by operating the grids at a high rather than low net-to-total acceleration voltage. Beam current densities as high as 25 mA/sq cm were obtained using grids with 1.0 mm diameter holes operating at a net accelerating voltage of 500 V.
Parallel Auxiliary Space AMG Solver for $H(div)$ Problems
Energy Technology Data Exchange (ETDEWEB)
Kolev, Tzanio V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Vassilevski, Panayot S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2012-12-18
We present a family of scalable preconditioners for matrices arising in the discretization of $H(div)$ problems using the lowest order Raviart-Thomas finite elements. Our approach belongs to the class of "auxiliary space"-based methods and requires only the finite element stiffness matrix plus some minimal additional discretization information about the topology and orientation of mesh entities. We provide a detailed algebraic description of the theory, parallel implementation, and different variants of this parallel auxiliary space divergence solver (ADS), and discuss its relations to the Hiptmair-Xu (HX) auxiliary space decomposition of $H(div)$ [SIAM J. Numer. Anal., 45 (2007), pp. 2483-2509] and to the auxiliary space Maxwell solver AMS [J. Comput. Math., 27 (2009), pp. 604-623]. Finally, an extensive set of numerical experiments demonstrates the robustness and scalability of our implementation on large-scale $H(div)$ problems with large jumps in the material coefficients.
Mappings with closed range and finite dimensional linear spaces
International Nuclear Information System (INIS)
Iyahen, S.O.
1984-09-01
This paper looks at two settings, each involving continuous linear mappings of linear topological spaces. In one setting, the domain space is fixed while the range space varies over a class of linear topological spaces. In the second setting, the range space is fixed while the domain space similarly varies. The interest is in when the requirement that the mappings have a closed range implies that the domain or range space is finite dimensional. Positive results are obtained for metrizable spaces. (author)
Closely spaced mirror pair for reshaping and homogenizing pump beams in laser amplifiers
International Nuclear Information System (INIS)
Bass, I.L.
1992-12-01
Channeling a laser beam by multiple reflections between two closely-spaced, parallel or nearly parallel mirrors, serves to reshape and homogenize the beam at the output gap between the mirrors. Application of this device to improve the spatial overlap of a copper laser pump beam with the signal beam in a dye laser amplifier is described. This technique has been applied to the AVLIS program at the Lawrence Livermore National Laboratory
Nω-CLOSED SETS IN NEUTROSOPHIC TOPOLOGICAL SPACES
Directory of Open Access Journals (Sweden)
Santhi R.
2016-08-01
Neutrosophic sets and Neutrosophic topological spaces were introduced by Salama. Neutrosophic closed sets and Neutrosophic continuous functions were introduced by Salama et al. In this paper, we introduce the concept of Nω-closed sets and their properties in Neutrosophic topological spaces.
Adaptive integrand decomposition in parallel and orthogonal space
International Nuclear Information System (INIS)
Mastrolia, Pierpaolo; Peraro, Tiziano; Primo, Amedeo
2016-01-01
We present the integrand decomposition of multiloop scattering amplitudes in parallel and orthogonal space-time dimensions, d = d∥ + d⊥, where d∥ is the dimension of the parallel space spanned by the legs of the diagrams. When the number n of external legs is n ≤ 4, the corresponding representation of multiloop integrals exposes a subset of integration variables which can be easily integrated away by means of the orthogonality condition of Gegenbauer polynomials. By decomposing the integration momenta along parallel and orthogonal directions, the polynomial division algorithm is drastically simplified. Moreover, the orthogonality conditions of Gegenbauer polynomials can be suitably applied to integrate the decomposed integrand, yielding the systematic annihilation of spurious terms. Consequently, multiloop amplitudes are expressed in terms of integrals corresponding to irreducible scalar products of loop momenta and external ones. We revisit the one-loop decomposition, which turns out to be controlled by the maximum-cut theorem in different dimensions, and we discuss the integrand reduction of two-loop planar and non-planar integrals up to n = 8 legs, for arbitrary external and internal kinematics. The proposed algorithm extends to all orders in perturbation theory.
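The role played by Gegenbauer orthogonality can be checked numerically. This short sketch (an illustration, not the authors' reduction code; the weight index `lam` is a hypothetical value, whereas the paper ties it to the orthogonal dimension) verifies that polynomials of distinct degree integrate to zero against the Gegenbauer weight, which is the mechanism that annihilates the spurious terms:

```python
from scipy.integrate import quad
from scipy.special import gegenbauer

lam = 1.5  # hypothetical weight index for the illustration

def inner(m, n):
    """Weighted inner product of Gegenbauer polynomials C_m, C_n on [-1, 1]."""
    Cm, Cn = gegenbauer(m, lam), gegenbauer(n, lam)
    val, _ = quad(lambda x: (1 - x**2) ** (lam - 0.5) * Cm(x) * Cn(x), -1, 1)
    return val

assert abs(inner(2, 5)) < 1e-10  # distinct degrees: integral vanishes
assert inner(3, 3) > 0           # equal degrees: positive norm survives
```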
Scattering by multiple parallel radially stratified infinite cylinders buried in a lossy half space.
Lee, Siu-Chun
2013-07-01
The theoretical solution for scattering by an arbitrary configuration of closely spaced parallel infinite cylinders buried in a lossy half space is presented in this paper. The refractive index and permeability of the half space and cylinders are complex in general. Each cylinder is radially stratified with a distinct complex refractive index and permeability. The incident radiation is an arbitrarily polarized plane wave propagating in the plane normal to the axes of the cylinders. Analytic solutions are derived for the electric and magnetic fields and the Poynting vector of backscattered radiation emerging from the half space. Numerical examples are presented to illustrate the application of the scattering solution to calculate backscattering from a lossy half space containing multiple homogeneous and radially stratified cylinders at various depths and different angles of incidence.
Airborne Precision Spacing (APS) Dependent Parallel Arrivals (DPA)
Smith, Colin L.
2012-01-01
The Airborne Precision Spacing (APS) team at the NASA Langley Research Center (LaRC) has been developing a concept of operations to extend the current APS concept to support dependent approaches to parallel or converging runways along with the required pilot and controller procedures and pilot interfaces. A staggered operations capability for the Airborne Spacing for Terminal Arrival Routes (ASTAR) tool was developed and designated as ASTAR10. ASTAR10 has reached a sufficient level of maturity to be validated and tested through a fast-time simulation. The purpose of the experiment was to identify and resolve any remaining issues in the ASTAR10 algorithm, as well as put the concept of operations through a practical test.
Regular Generalized Star Star closed sets in Bitopological Spaces
K. Kannan; D. Narasimhan; K. Chandrasekhara Rao; R. Ravikumar
2011-01-01
The aim of this paper is to introduce the concepts of τ1τ2-regular generalized star star closed sets and τ1τ2-regular generalized star star open sets, and to study their basic properties in bitopological spaces.
Transfer closed and transfer open multimaps in minimal spaces
International Nuclear Information System (INIS)
Alimohammady, M.; Roohi, M.; Delavar, M.R.
2009-01-01
This paper is devoted to introducing the concepts of transfer closed and transfer open multimaps in minimal spaces. Some characterizations of them are also considered. Further, the notion of the minimal local intersection property is introduced and characterized. Moreover, some maximal element theorems via minimal transfer closed multimaps and the minimal local intersection property are given.
Parallelization of the Physical-Space Statistical Analysis System (PSAS)
Larson, J. W.; Guo, J.; Lyster, P. M.
1999-01-01
Atmospheric data assimilation is a method of combining observations with model forecasts to produce a more accurate description of the atmosphere than the observations or forecast alone can provide. Data assimilation plays an increasingly important role in the study of climate and atmospheric chemistry. The NASA Data Assimilation Office (DAO) has developed the Goddard Earth Observing System Data Assimilation System (GEOS DAS) to create assimilated datasets. The core computational components of the GEOS DAS include the GEOS General Circulation Model (GCM) and the Physical-space Statistical Analysis System (PSAS). The need for timely validation of scientific enhancements to the data assimilation system poses computational demands that are best met by distributed parallel software. PSAS is implemented in Fortran 90 using object-based design principles. The analysis portions of the code solve two equations. The first of these is the "innovation" equation, which is solved on the unstructured observation grid using a preconditioned conjugate gradient (CG) method. The "analysis" equation is a transformation from the observation grid back to a structured grid, and is solved by a direct matrix-vector multiplication. Use of a factored-operator formulation reduces the computational complexity of both the CG solver and the matrix-vector multiplication, rendering the matrix-vector multiplications as a successive product of operators on a vector. Sparsity is introduced to these operators by partitioning the observations using an icosahedral decomposition scheme. PSAS builds a large (approx. 128MB) run-time database of parameters used in the calculation of these operators. Implementing a message passing parallel computing paradigm into an existing yet developing computational system as complex as PSAS is nontrivial. One of the technical challenges is balancing the requirements for computational reproducibility with the need for high performance. The problem of computational
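The factored-operator formulation, in which the matrix is applied as a successive product of operators on a vector and never formed explicitly, can be sketched with SciPy's LinearOperator. The sizes, the diagonal covariance, and the 0.1*I error term below are hypothetical stand-ins, not PSAS's actual innovation-equation operators:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n_obs, n_grid = 40, 60
H = rng.standard_normal((n_obs, n_grid))   # hypothetical observation operator
c = rng.uniform(1.0, 2.0, n_grid)          # hypothetical diagonal covariance

def apply_factored(v):
    """Apply (H C H^T + 0.1 I) v as successive operator products;
    the dense product H C H^T is never assembled."""
    return H @ (c * (H.T @ v)) + 0.1 * v

A = LinearOperator((n_obs, n_obs), matvec=apply_factored)
b = rng.standard_normal(n_obs)
x, info = cg(A, b)      # operator is SPD, so plain CG applies
assert info == 0
assert np.linalg.norm(apply_factored(x) - b) <= 1e-4 * np.linalg.norm(b)
```

Replacing the dense matrix by three cheap matvecs is what reduces the computational complexity of both the CG solver and the analysis-equation multiplication in the text above.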
On semi star generalized closed sets in bitopological spaces.
Directory of Open Access Journals (Sweden)
K. Kannan
2010-07-01
K. Chandrasekhara Rao and K. Joseph [5] introduced the concepts of semi star generalized open sets and semi star generalized closed sets in a topological space. The same concept was extended to bitopological spaces by K. Chandrasekhara Rao and K. Kannan [6,7]. In this paper, we continue the study of τ1τ2-s∗g closed sets in bitopology and introduce the related concept of pairwise s∗g-continuous mappings. Also, S∗GO-connectedness and S∗GO-compactness are introduced in bitopological spaces and some of their properties are established.
Relativistic cosmologies with closed, locally homogeneous space sections
International Nuclear Information System (INIS)
Fagundes, H.V.
1985-01-01
The homogeneous Bianchi and Kantowski-Sachs metrics of relativistic cosmology are investigated through their correspondence with recent geometrical results of Thurston. These allow a partial classification of the topologies for closed, locally homogeneous spaces according to Thurston's eight geometric types. In addition, it is determined which of the Bianchi-Kantowski-Sachs metrics can be imposed on closed space sections of cosmological models. This is seen as progress toward implementation of a postulate of the closure of space for both classical and quantum gravity. (Author)
An Implementation and Parallelization of the Scale Space Meshing Algorithm
Directory of Open Access Journals (Sweden)
Julie Digne
2015-11-01
Creating an interpolating mesh from an unorganized set of oriented points is a difficult problem which is often overlooked. Most methods focus indeed on building a watertight smoothed mesh by defining some function whose zero level set is the surface of the object. However in some cases it is crucial to build a mesh that interpolates the points and does not fill the acquisition holes: either because the data are sparse and trying to fill the holes would create spurious artifacts, or because the goal is to explore visually the data exactly as they were acquired, without any smoothing process. In this paper we detail a parallel implementation of the Scale-Space Meshing algorithm, which builds on the scale-space framework for reconstructing a high-precision mesh from an input oriented point set. This algorithm first smoothes the point set, producing a singularity-free shape. It then uses a standard mesh reconstruction technique, the Ball Pivoting Algorithm, to build a mesh from the smoothed point set. The final step consists in back-projecting the mesh built on the smoothed positions onto the original point set. The result of this process is an interpolating, hole-preserving surface mesh reconstruction.
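The final back-projection step lends itself to a short sketch. The fragment below uses a nearest-neighbor query as a simplified stand-in for the algorithm's stored point correspondences (the actual method reverses the scale-space motion point by point); all data here are synthetic:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
raw = rng.standard_normal((500, 3))                    # acquired points
smoothed = raw + 0.05 * rng.standard_normal((500, 3))  # stand-in for scale-space smoothing

# Move each vertex of the mesh built on smoothed positions back onto
# the closest acquired point.
tree = cKDTree(raw)
_, idx = tree.query(smoothed)
back_projected = raw[idx]

# Every back-projected vertex coincides with some acquired point,
# so the final mesh interpolates the raw data.
dists, _ = tree.query(back_projected)
assert (dists == 0).all()
```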
Judging Criterion of Controlled Structures with Closely Spaced Natural Frequencies
International Nuclear Information System (INIS)
Xie Faxiang; Sun Limin
2010-01-01
Structures with closely spaced natural frequencies are widespread in civil engineering; however, the criterion for judging the density of closely spaced frequencies is in dispute. This paper suggests a judging criterion for structures with closely spaced natural frequencies based on the analysis of a controlled 2-DOF structure. The analysis results indicate that the optimal control gain of the structure with velocity feedback depends on the frequency density parameter of the structure, and that the maximum attainable additional modal damping ratio is 1.72 times the frequency density parameter when state feedback is applied. Based on a brief review of previous research, a judging criterion relating the minimum frequency density parameter and the required modal damping ratio is proposed.
Rationale for evaluating a closed food chain for space habitats
Modell, M.; Spurlock, J. M.
1980-01-01
Closed food cycles for long-duration space flight and space habitation are examined. Wash water for a crew of six is economically recyclable after a week, while a totally closed-loop water system is effective only if the stay exceeds six months. The stoichiometry of net plant growth is calculated, and it is shown that the return of urine, feces, and inedible plant parts to the food chain, along with the addition of photosynthesis, closes the food chain loop. Scenarios are presented to explore the technical feasibility of achieving a closed-loop system. An optimal choice of plants is followed by consideration of processing, waste conversion, equipment specifications, control requirements, and, finally, cost-effectiveness.
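The kind of stoichiometric bookkeeping mentioned above can be illustrated with a minimal mass-balance check of the photosynthesis reaction that closes the O2/CO2 side of such a loop (standard chemistry, not the paper's actual growth model):

```python
# 6 CO2 + 6 H2O -> C6H12O6 + 6 O2: total mass must balance.
masses = {"C": 12.011, "H": 1.008, "O": 15.999}  # standard atomic masses

def mol_mass(formula):
    """Molar mass from an element -> atom-count dict."""
    return sum(masses[e] * n for e, n in formula.items())

co2 = {"C": 1, "O": 2}
h2o = {"H": 2, "O": 1}
glucose = {"C": 6, "H": 12, "O": 6}
o2 = {"O": 2}

lhs = 6 * mol_mass(co2) + 6 * mol_mass(h2o)
rhs = mol_mass(glucose) + 6 * mol_mass(o2)
assert abs(lhs - rhs) < 1e-9   # both sides are ~372.14 g per mol of glucose
```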
D-instantons and closed string tachyons in Misner space
International Nuclear Information System (INIS)
Hikida, Yasuaki; Tai, T.-S.
2006-01-01
We investigate closed string tachyon condensation in Misner space, a toy model for the big bang universe. In Misner space, we are able to condense tachyonic modes of closed strings in the twisted sectors, which is supposed to remove the big bang singularity. In order to examine this, we utilize a D-instanton as a probe. First, we study general properties of the D-instanton by constructing its boundary state and effective action. Then, resorting to these, we are able to show that tachyon condensation actually deforms the geometry such that the singularity becomes milder.
Closed-Loop Optimal Control Implementations for Space Applications
2016-12-01
...with standard linear algebra techniques if it is converted to a diagonal square matrix by multiplying by the identity matrix, I, as was done in (1.134)... Master's thesis, Jan-Dec 2016, by Colin S. Monk. Thesis Advisor: Mark Karpenko. Second Reader: I. M...
Biharmonic Submanifolds with Parallel Mean Curvature Vector in Pseudo-Euclidean Spaces
Energy Technology Data Exchange (ETDEWEB)
Fu, Yu, E-mail: yufudufe@gmail.com [Dongbei University of Finance and Economics, School of Mathematics and Quantitative Economics (China)
2013-12-15
In this paper, we investigate biharmonic submanifolds in pseudo-Euclidean spaces with arbitrary index and dimension. We give a complete classification of biharmonic spacelike submanifolds with parallel mean curvature vector in pseudo-Euclidean spaces. We also determine all biharmonic Lorentzian surfaces with parallel mean curvature vector field in pseudo-Euclidean spaces.
Theta-Generalized closed sets in fuzzy topological spaces
International Nuclear Information System (INIS)
El-Shafei, M.E.; Zakari, A.
2006-01-01
In this paper we introduce the concepts of theta-generalized closed fuzzy sets and generalized fuzzy sets in topological spaces. Furthermore, generalized fuzzy sets are extended to theta-generalized fuzzy sets. Also, we introduce the concepts of fuzzy theta-generalized continuous and fuzzy theta-generalized irresolute mappings. (author)
Can we close large prosthetic space with orthodontics?
Mesko, Mauro Elias; Skupien, Jovito Adiel; Valentini, Fernanda; Pereira-Cenci, Tatiana
2013-01-01
For years, the treatment for replacing a missing tooth was a fixed dental prosthesis. Currently, implants are indicated to replace missing teeth due to their high clinical success, and with the advantage of not requiring preparation of the adjacent teeth. Another option for space closure is the use of orthodontics associated with miniscrews for anchorage, allowing better control of the orthodontic biomechanics and, especially, making the closure of larger prosthetic spaces possible. Thus, this article describes two cases with indications and discussion of the advantages and disadvantages of using orthodontics for prosthetic space closure. The cases presented here show that it is possible to close a space when there are available teeth in the adjacent area. It can be concluded that when a malocclusion is present there will be a strong trend to indicate space closure by orthodontic movement, as it preserves natural teeth and seems a more physiological approach.
Initiation at closely spaced replication origins in a yeast chromosome.
Brewer, B J; Fangman, W L
1993-12-10
Replication of eukaryotic chromosomes involves initiation at origins spaced an average of 50 to 100 kilobase pairs apart. In yeast, potential origins can be recognized as autonomous replication sequences (ARSs) that allow maintenance of plasmids. However, there are more ARS elements than active chromosomal origins. The possibility was examined that close spacing of ARSs can lead to inactive origins. Two ARSs located 6.5 kilobase pairs apart can indeed interfere with each other. Replication is initiated from one or the other ARS with equal probability, but rarely (<5%) from both ARSs on the same DNA molecule.
Closed-form solution for piezoelectric layer with two collinear cracks parallel to the boundaries
Directory of Open Access Journals (Sweden)
B. M. Singh
2006-01-01
We consider the problem of determining the stress distribution in an infinitely long piezoelectric layer of finite width, with two collinear cracks of equal length parallel to the layer boundaries. Within the framework of the reigning piezoelectric theory under mode III, the cracked piezoelectric layer subjected to combined electromechanical loading is analyzed. The faces of the layer are subjected to electromechanical loading. The collinear cracks are located at the middle plane of the layer, parallel to its faces. By the use of Fourier transforms we reduce the problem to solving a set of triple integral equations with a cosine kernel and a weight function. The triple integral equations are solved exactly. Closed-form analytical expressions for the stress intensity factors, electric displacement intensity factors, crack shape, and energy release rate are derived. As the limiting case, the solution of the problem with one crack in the layer is derived. Some numerical results for the physical quantities are obtained and displayed graphically.
Thermally optimum spacing of vertical, natural convection cooled, parallel plates
Bar-Cohen, A.; Rohsenow, W. M.
Vertical two-dimensional channels formed by parallel plates or fins are a frequently encountered configuration in the natural convection air cooling of electronic equipment. Despite the complexity of heat dissipation in vertical parallel-plate arrays, little theoretical effort has been devoted to the thermal optimization of the relevant packaging configurations. The present investigation is concerned with establishing an analytical structure for the analysis of such arrays, giving attention to useful relations for heat distribution patterns. The limiting relations for fully developed laminar flow, in a symmetric isothermal or isoflux channel as well as in a channel with an insulated wall, are derived by use of a straightforward integral formulation.
78 FR 70093 - Commercial Space Transportation Advisory Committee-Closed Session
2013-11-22
... DEPARTMENT OF TRANSPORTATION Federal Aviation Administration Commercial Space Transportation... Commercial Space Transportation Advisory Committee Special Closed Session. SUMMARY: Pursuant to Section 10(a...), notice is hereby given of a special closed session of the Commercial Space Transportation Advisory...
76 FR 4412 - Commercial Space Transportation Advisory Committee-Closed Session
2011-01-25
... DEPARTMENT OF TRANSPORTATION Federal Aviation Administration Commercial Space Transportation... Commercial Space Transportation Advisory Committee Special Closed Session. SUMMARY: Pursuant to Section 10(a... Commercial Space Transportation Advisory Committee (COMSTAC). The special closed session will be an...
Power Absorption by Closely Spaced Point Absorbers in Constrained Conditions
DEFF Research Database (Denmark)
De Backer, G.; Vantorre, M.; Beels, C.
2010-01-01
The performance of an array of closely spaced point absorbers is numerically assessed in a frequency-domain model. Each point absorber is restricted to the heave mode and is assumed to have its own linear power take-off (PTO) system. Unidirectional irregular incident waves are considered, representing the wave climate at Westhinder on the Belgian Continental Shelf. The impact of slamming, stroke, and force restrictions on the power absorption is evaluated, and optimal PTO parameters are determined. For multiple bodies, optimal control parameters (CP) are not only dependent on the incoming waves...
Allphin, Devin
benefits of this technique. For the offline approximation, Latin hypercube sampling (LHS) was used for design-space filling across four (4) independent design-variable degrees of freedom (DOF). Flow solutions at the mapped test sites were converged using STAR-CCM+, with aerodynamic forces from the CFD models then functionally approximated using Kriging interpolation. For the closed-form approximation, the problem was interpreted as an ideal 2-D converging-diverging (C-D) nozzle, where aerodynamic forces were directly mapped by application of the Euler equation solutions for isentropic compression/expansion. A cost-weighting procedure was finally established for creating model-selective discretionary logic, with a synthesized parallel simulation resource summary provided.
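The LHS design-space-filling step described above can be sketched with SciPy's quasi-Monte Carlo module. The four variables and their bounds below are purely illustrative stand-ins for the unnamed design DOFs:

```python
import numpy as np
from scipy.stats import qmc

# 20 hypothetical test sites across a 4-DOF design space.
sampler = qmc.LatinHypercube(d=4, seed=42)
unit = sampler.random(n=20)            # points in the unit hypercube [0, 1)^4
lower = [0.1, 1.0, 5.0, 0.0]           # illustrative bounds only
upper = [0.9, 3.0, 15.0, 45.0]
sites = qmc.scale(unit, lower, upper)  # mapped to physical ranges

# The LHS guarantee: for each variable, the 20 samples fall one per
# equal-width stratum, so no region of any axis is left unsampled.
strata = np.floor(unit * 20).astype(int)
for j in range(4):
    assert sorted(strata[:, j]) == list(range(20))
```

Each row of `sites` would then be handed to the CFD solver, and the resulting forces fitted with a Kriging (Gaussian-process) interpolator.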
State-space Generalized Predictive Control for redundant parallel robots
Czech Academy of Sciences Publication Activity Database
Belda, Květoslav; Böhm, Josef; Valášek, M.
2003-01-01
Roč. 31, č. 3 (2003), s. 413-432 ISSN 1539-7734 R&D Projects: GA ČR GA101/03/0620 Grant - others:CTU(CZ) 0204512 Institutional research plan: CEZ:AV0Z1075907 Keywords : parallel robot construction * generalized predictive control * drive redundancy Subject RIV: BC - Control Systems Theory http://library.utia.cas.cz/separaty/historie/belda-0411126.pdf
POSITION DETERMINATION OF CLOSELY SPACED BUNCHES USING CAVITY BPMs
Joshi, N; Cullinan, F; Lyapin, A
2011-01-01
Radio Frequency (RF) Cavity Beam Position Monitor (BPM) systems form a major part of precision position measurement diagnostics for linear accelerators with low emittance beams. Using cavity BPMs, a position resolution of less than 100 nm has been demonstrated in single bunch mode operation. In the case of closely spaced bunches, where the decay time of the cavity is comparable to the time separation between bunches, the BPM signal from a bunch is polluted by the signal induced by the previous bunches in the same bunch-train. This paper discusses our ongoing work to develop the methods to extract the position of closely spaced bunches using cavity BPMs. A signal subtraction code is being developed to remove the signal pollution from previous bunches and to determine the individual bunch position. Another code has been developed to simulate the BPM data for the cross check. Performance of the code is studied on the experimental and simulated data. Application of the analysis techniques to the linear colliders,...
Domain Specific Language for Geant4 Parallelization for Space-based Applications, Phase I
National Aeronautics and Space Administration — A major limiting factor in HPC growth is the requirement to parallelize codes to leverage emerging architectures, especially as single core performance has plateaued...
Fijany, Amir
1993-01-01
In this paper, parallel O(log n) algorithms for computation of rigid multibody dynamics are developed. These parallel algorithms are derived by parallelization of new O(n) algorithms for the problem. The underlying feature of these O(n) algorithms is a drastically different strategy for decomposition of interbody force which leads to a new factorization of the mass matrix (M). Specifically, it is shown that a factorization of the inverse of the mass matrix in the form of the Schur complement is derived as M^{-1} = C - B^{*}A^{-1}B, wherein matrices C, A, and B are block tridiagonal matrices. The new O(n) algorithm is then derived as a recursive implementation of this factorization of M^{-1}. For the closed-chain systems, similar factorizations and O(n) algorithms for computation of the Operational Space Mass Matrix Λ and its inverse Λ^{-1} are also derived. It is shown that these O(n) algorithms are strictly parallel, that is, they are less efficient than other algorithms for serial computation of the problem. But, to our knowledge, they are the only known algorithms that can be parallelized and that lead to both time- and processor-optimal parallel algorithms for the problem, i.e., parallel O(log n) algorithms with O(n) processors. The developed parallel algorithms, in addition to their theoretical significance, are also practical from an implementation point of view due to their simple architectural requirements.
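The factorization quoted in this abstract can be set out explicitly (a restatement of the text's own equation, with the block structure it describes):

```latex
M^{-1} \;=\; C \;-\; B^{*} A^{-1} B,
\qquad C,\; A,\; B \ \text{block tridiagonal.}
```

Because A is block tridiagonal, applying A^{-1} amounts to a banded solve: O(n) serially by block LU, and O(log n) in parallel with O(n) processors by cyclic reduction, which is one standard route to the complexity bounds quoted above.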
Complex Pupil Masks for Aberrated Imaging of Closely Spaced Objects
Reddy, A. N. K.; Sagar, D. K.; Khonina, S. N.
2017-12-01
The current approach demonstrates the suppression of optical side-lobes and the contraction of the main lobe in the composite image of two object points of an optical system under the influence of a defocusing effect when asymmetric phase edges are imposed over the apodized circular aperture. The resolution of two point sources having different intensity ratios is discussed in terms of the modified Sparrow criterion, as a function of the degree of coherence of the illumination, the intensity difference and the degree of asymmetric phase masking. Here we have introduced and explored the effects of focus aberration (defect-of-focus) on the two-point resolution of optical systems. Results on the aberrated composite image of closely spaced objects with an amplitude mask and asymmetric phase masks form a significant contribution to astronomical and microscopic observations.
A Closed Aquatic System for Space and Earth Application
Slenzka, K.; Duenne, M.; Jastorff, B.; Ranke, J.; Schirmer, M.
Increased durations of space travel as well as living in extreme environments require reliable life support systems in general and bioregenerative ones in particular. Waste water management, air revitalization and food production are obviously central goals of this research; in addition, however, the potential influence of chemicals, drugs etc. released into the closed environment must be considered. On this basis, ecotoxicological data become more and more important for CELSS (Closed Ecological Life Support System) development and performance. The experience gained during the last years in our research group led to the development of an aquatic habitat, called AquaHab (formerly CBRU), which is a closed, self-sustaining system with a total water volume of 9 liters. Within the frame program of an R&D project funded by the state of Bremen and OHB System, AquaHab is being adapted to become an ecotoxicological research unit containing, for example, Japanese Medaka or Zebra Fish, amphipods, water snails and water plants. Test runs were standardized and analytical methods were developed. Besides general biological and water-chemical parameters, activity measurements of biotransforming enzymes (G6PDH, CytP450-Oxidase, Peroxidase) and cell viability tests as well as residue analysis of the applied substance and respective metabolites were selected as evaluation criteria. In a first series of tests, low-dose effects of TBT (tributyltin, 0.1 to 20 μg TBT/l nominal concentration) were analyzed. The AquaHab system and data obtained for applied environmental risk assessment will be presented at the assembly.
Minimal surfaces in symmetric spaces with parallel second ...
Indian Academy of Sciences (India)
Xiaoxiang Jiao
2017-07-31
Jul 31, 2017 ... space and its non-compact dual by totally real, totally complex, and invariant immersions. ... frame fields, let θ1, θ2 and ω1, ..., ωn be their dual frames. ... where ∇̃ is the induced connection of the pull-back bundle f^{-1}T(N), which is defined by ∇̃_X W = ∇̄_{f∗X} W for W ∈ f^{-1}T(N) and X ∈ T(M). Let f∗(ei) ...
Calculational models of close-spaced thermionic converters
International Nuclear Information System (INIS)
McVey, J.B.
1983-01-01
Two new calculational models have been developed in conjunction with the SAVTEC experimental program. These models have been used to analyze data from experimental close-spaced converters, providing values for spacing, electrode work functions, and converter efficiency. They have also been used to make performance predictions for such converters over a wide range of conditions. Both models are intended for use in the collisionless (Knudsen) regime. They differ from each other in that the simpler one uses a Langmuir-type formulation which only considers electrons emitted from the emitter. This approach is implemented in the LVD (Langmuir Vacuum Diode) computer program, which has the virtue of being both simple and fast. The more complex model also includes both Saha-Langmuir emission of positive cesium ions from the emitter and collector back emission. Computer implementation is by the KMD1 (Knudsen Mode Diode) program. The KMD1 model derives the particle distribution functions from the Vlasov equation. From these the particle densities are found for various interelectrode motive shapes. Substituting the particle densities into Poisson's equation gives a second order differential equation for potential. This equation can be integrated once analytically. The second integration, which gives the interelectrode motive, is performed numerically by the KMD1 program. This is complicated by the fact that the integrand is often singular at one end point of the integration interval. The program performs a transformation on the integrand to make it finite over the entire interval. Once the motive has been computed, the output voltage, current density, power density, and efficiency are found. The program is presently unable to operate when the ion richness ratio β is between about 0.8 and 1.0, due to the occurrence of oscillatory motives.
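The endpoint-singularity transformation mentioned for KMD1 is not spelled out in the abstract, but the general technique can be sketched. The substitution below (a hypothetical stand-in integrand, not the program's actual one) removes an inverse-square-root endpoint singularity before quadrature:

```python
import numpy as np

def midpoint(f, a, b, n):
    # composite midpoint rule: never evaluates f at the (possibly singular) endpoints
    x = a + (np.arange(n) + 0.5) * (b - a) / n
    return np.sum(f(x)) * (b - a) / n

f = lambda x: 1.0 / np.sqrt(x)      # singular at x = 0; exact integral on [0,1] is 2

naive = midpoint(f, 0.0, 1.0, 1000)

# Substitute x = t^2, dx = 2t dt: the transformed integrand 2*t*f(t**2)
# is finite over the entire interval, as the abstract describes.
g = lambda t: 2.0 * t * f(t**2)
transformed = midpoint(g, 0.0, 1.0, 1000)

print(abs(naive - 2.0), abs(transformed - 2.0))
```

With the same number of quadrature points, the transformed integral is accurate to machine precision while the naive one carries an error dominated by the cell nearest the singularity.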
Preliminary closed Brayton cycle study for a space reactor application
International Nuclear Information System (INIS)
Guimaraes, Lamartine Nogueira Frutuoso; Carvalho, Ricardo Pinto de; Camillo, Giannino Ponchio
2007-01-01
The Nuclear Energy Division (ENU) of the Institute for Advanced Studies (IEAv) has started a preliminary design study for a Closed Brayton Cycle Loop (CBCL) aimed at a space reactor application. The main objectives of the study are to establish a starting concept for the CBCL components specifications, and to develop a demonstrative simulator of CBCL in nominal operation conditions. The ENU/IEAv preliminary design study is developing the CBCL around the NOELLE 60290 turbo machine. The actual nuclear reactor study is being conducted independently. Because of that, a conventional heat source is being used for the CBCL, in this preliminary design phase. This paper describes the steady state simulator of the CBCL operating with NOELLE 60290 turbo machine. In principle, several gases are being considered as working fluid, as for instance: air, helium, nitrogen, CO2 and gas mixtures such as helium and xenon. At this moment the simulator is running with Helium as the working fluid. Simplified models of heat and mass transfer are being developed to simulate thermal components. Future efforts will focus on keeping track of the modifications being implemented at the NOELLE 60290 turbo machine in order to build the CBCL. (author)
International Nuclear Information System (INIS)
Longoni, Gianluca; Haghighat, Alireza
2003-01-01
In recent years, the SP_L (simplified spherical harmonics) equations have received renewed interest for the simulation of nuclear systems. We have derived the SP_L equations starting from the even-parity form of the S_N equations. The SP_L equations form a system of (L+1)/2 second order partial differential equations that can be solved with standard iterative techniques such as the Conjugate Gradient (CG) method. We discretized the SP_L equations with the finite-volume approach in a 3-D Cartesian space. We developed a new 3-D general code, PenSP_L (Parallel Environment Neutral-particle SP_L). PenSP_L solves both fixed source and criticality eigenvalue problems. In order to optimize the memory management, we implemented Compressed Diagonal Storage (CDS) to store the SP_L matrices. PenSP_L includes parallel algorithms for space and moment domain decomposition. The computational load is distributed on different processors, using a mapping function, which maps the 3-D Cartesian space and moments onto processors. The code is written in Fortran 90 using the Message Passing Interface (MPI) libraries for the parallel implementation of the algorithm. The code has been tested on the Pcpen cluster and the parallel performance has been assessed in terms of speed-up and parallel efficiency. (author)
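As a rough illustration of two ingredients named in this abstract, Compressed Diagonal Storage and a CG solve, the sketch below applies them to a 1-D Laplacian stand-in for the SP_L matrices (Python rather than the code's Fortran 90, and serial rather than MPI-parallel):

```python
import numpy as np

# Compressed Diagonal Storage (CDS): keep only the nonzero diagonals of a
# banded matrix. Here: a 1-D Laplacian with diagonals at offsets -1, 0, +1,
# each diagonal stored indexed by row.
n = 50
offsets = [-1, 0, 1]
diags = np.zeros((3, n))
diags[0, :] = -1.0   # sub-diagonal   (entry 0 unused)
diags[1, :] = 2.0    # main diagonal
diags[2, :] = -1.0   # super-diagonal (last entry unused)

def cds_matvec(diags, offsets, x):
    # y = A @ x using only the stored diagonals
    y = np.zeros_like(x)
    n = len(x)
    for d, off in zip(diags, offsets):
        if off >= 0:
            y[:n - off] += d[:n - off] * x[off:]
        else:
            y[-off:] += d[-off:] * x[:n + off]
    return y

def cg(matvec, b, tol=1e-10, maxiter=500):
    # unpreconditioned Conjugate Gradient for a symmetric positive definite operator
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

b = np.ones(n)
x = cg(lambda v: cds_matvec(diags, offsets, v), b)
residual = np.linalg.norm(cds_matvec(diags, offsets, b * 0 + x) - b)
```

The CDS layout stores O(n) numbers per diagonal instead of O(n^2) for the dense matrix, and the matrix is only ever touched through `cds_matvec`, which is exactly the access pattern CG needs.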
Energy Technology Data Exchange (ETDEWEB)
Loring, Burlen; Karimabadi, Homa; Rortershteyn, Vadim
2014-07-01
The surface line integral convolution (LIC) visualization technique produces dense visualization of vector fields on arbitrary surfaces. We present a screen space surface LIC algorithm for use in distributed memory data parallel sort last rendering infrastructures. The motivations for our work are to support analysis of datasets that are too large to fit in the main memory of a single computer and compatibility with prevalent parallel scientific visualization tools such as ParaView and VisIt. By working in screen space using OpenGL we can leverage the computational power of GPUs when they are available and run without them when they are not. We address efficiency and performance issues that arise from the transformation of data from physical to screen space by selecting an alternate screen space domain decomposition. We analyze the algorithm's scaling behavior with and without GPUs on two high performance computing systems using data from turbulent plasma simulations.
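A minimal serial sketch of the basic LIC idea (streamline tracing plus noise averaging) follows; the paper's actual contribution, screen-space decomposition in a sort-last parallel renderer, is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
h, w, L = 64, 64, 10              # texture size; L = streamline half-length in pixels
noise = rng.random((h, w))        # input white-noise texture

# Vector field sampled on the grid: a uniform flow in +x for clarity;
# any (vx, vy) field of the same shape can be substituted.
vx = np.ones((h, w))
vy = np.zeros((h, w))

def lic(noise, vx, vy, L):
    """Box-kernel line integral convolution: average the noise texture
    along the streamline through each pixel (unit Euler steps, periodic edges)."""
    h, w = noise.shape
    out = np.zeros_like(noise)
    for i in range(h):
        for j in range(w):
            acc, cnt = 0.0, 0
            for sign in (1.0, -1.0):          # trace forward and backward
                x, y = float(j), float(i)
                for _ in range(L):
                    r, c = int(round(y)) % h, int(round(x)) % w
                    acc += noise[r, c]
                    cnt += 1
                    u, v = vx[r, c], vy[r, c]
                    norm = np.hypot(u, v) or 1.0
                    x += sign * u / norm
                    y += sign * v / norm
            out[i, j] = acc / cnt
    return out

img = lic(noise, vx, vy, L)        # streaks emerge along the flow direction
```

Averaging along streamlines correlates pixel values in the flow direction, so the output texture has visibly lower contrast across the flow than the input noise.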
State-space-based harmonic stability analysis for paralleled grid-connected inverters
DEFF Research Database (Denmark)
Wang, Yanbo; Wang, Xiongfei; Chen, Zhe
2016-01-01
This paper addresses a state-space-based harmonic stability analysis of a paralleled grid-connected inverter system. A small signal model of an individual inverter is developed, where the LCL filter, the equivalent delay of the control system, and the current controller are modeled. Then, the overall small signal...... model of the paralleled grid-connected inverters is built. Finally, the state-space-based stability analysis approach is developed to explain the harmonic resonance phenomenon. The eigenvalue traces associated with time delay and coupled grid impedance are obtained, which account for how the unstable...... inverter produces the harmonic resonance and leads to the instability of the whole paralleled system. The proposed approach reveals the contributions of the grid impedance as well as the coupled effect on other grid-connected inverters under different grid conditions. Simulation and experimental results...
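The stability test described here, tracing the eigenvalues of the overall small-signal state matrix, reduces to checking that every eigenvalue lies in the open left half-plane. A toy example with illustrative numbers (not the paper's inverter model):

```python
import numpy as np

# Hypothetical small-signal state matrix x' = A x: a lightly damped
# LCL-type resonance plus one slow controller/delay pole.
wn, zeta = 2 * np.pi * 1.2e3, 0.05   # resonance frequency [rad/s], damping ratio
A = np.array([
    [0.0,       1.0,            0.0],
    [-wn**2,   -2 * zeta * wn,  wn**2],
    [0.0,       0.0,           -50.0],
])

eigs = np.linalg.eigvals(A)
# Small-signal stability criterion: every eigenvalue in the open left half-plane.
stable = bool(np.all(eigs.real < 0.0))
print(eigs, stable)
```

Sweeping a parameter (e.g. the control delay) and re-running this check at each point yields exactly the kind of eigenvalue traces the abstract refers to.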
Algorithms for a parallel implementation of Hidden Markov Models with a small state space
DEFF Research Database (Denmark)
Nielsen, Jesper; Sand, Andreas
2011-01-01
Two of the most important algorithms for Hidden Markov Models are the forward and the Viterbi algorithms. We show how formulating these using linear algebra naturally lends itself to parallelization. Although the obtained algorithms are slow for Hidden Markov Models with large state spaces...
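The linear-algebra formulation of the forward algorithm referred to in this abstract can be sketched as follows, with the textbook loop-based version included as a cross-check:

```python
import numpy as np

# HMM forward algorithm as matrix products: the per-observation update is
# alpha <- (A.T @ alpha) * B[:, o], a chain of matrix-vector products --
# the formulation that exposes parallelism over the state dimension.
pi = np.array([0.6, 0.4])                 # initial state distribution
A = np.array([[0.7, 0.3],                 # A[i, j] = P(next = j | current = i)
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],                 # B[i, o] = P(observe o | state = i)
              [0.2, 0.8]])
obs = [0, 1, 0, 0, 1]

def forward_linear_algebra(pi, A, B, obs):
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (A.T @ alpha) * B[:, o]
    return alpha.sum()                    # P(observation sequence)

def forward_loops(pi, A, B, obs):         # reference: textbook nested loops
    n = len(pi)
    alpha = [pi[i] * B[i, obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i, j] for i in range(n)) * B[j, o]
                 for j in range(n)]
    return sum(alpha)

p1 = forward_linear_algebra(pi, A, B, obs)
p2 = forward_loops(pi, A, B, obs)
```

Both versions compute the same sequence likelihood; only the matrix form makes the state-dimension parallelism explicit.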
Large parallel volumes of finite and compact sets in d-dimensional Euclidean space
DEFF Research Database (Denmark)
Kampf, Jürgen; Kiderlen, Markus
The r-parallel volume V (Cr) of a compact subset C in d-dimensional Euclidean space is the volume of the set Cr of all points of Euclidean distance at most r > 0 from C. According to Steiner’s formula, V (Cr) is a polynomial in r when C is convex. For finite sets C satisfying a certain geometric...
Fast Time and Space Parallel Algorithms for Solution of Parabolic Partial Differential Equations
Fijany, Amir
1993-01-01
In this paper, fast time- and space-parallel algorithms for the solution of linear parabolic PDEs are developed. It is shown that the seemingly strictly serial iterations of the time-stepping procedure for the solution of the problem can be completely decoupled.
Nearly auto-parallel maps and conservation laws on curved spaces
International Nuclear Information System (INIS)
Vacaru, S.
1994-01-01
The theory of nearly auto-parallel maps (na-maps, generalization of conformal transforms) of Einstein-Cartan spaces is formulated. The transformation laws of geometrical objects and gravitational and matter field equations under superpositions of na-maps are considered. A special attention is paid to the very important problem of definition of conservation laws for gravitational fields. (Author)
On weakly BR-closed functions between topological spaces
Caldas, Miguel; Ekici, Erdal; Jafari, Saeid; Moshokoa, Seithuti P.
2009-01-01
In this paper, we offer a new class of functions called weakly BR-closed functions. Moreover, we investigate not only some of their basic properties but also their relationships with other types of already well-known functions.
Infinitesimal conformal closed transformations of de Sitter and Robertson-Walker cosmological spaces
International Nuclear Information System (INIS)
Sakoto, Moussa
1976-01-01
The infinitesimal conformal closed transformations of de Sitter and Robertson-Walker cosmological spaces are determined, and an interesting property of the current lines for Robertson-Walker spaces is given [fr]
National Aeronautics and Space Administration — There are several ongoing challenges in non-contacting blade vibration and stress measurement systems that can address closely spaced modes and blade-to-blade...
Nonseparable closed vector subspaces of separable topological vector spaces
Czech Academy of Sciences Publication Activity Database
Kąkol, Jerzy; Leiderman, A. G.; Morris, S. A.
2017-01-01
Roč. 182, č. 1 (2017), s. 39-47 ISSN 0026-9255 R&D Projects: GA ČR GF16-34860L Institutional support: RVO:67985840 Keywords : locally convex topological vector space * separable topological space Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.716, year: 2016 https://link.springer.com/article/10.1007%2Fs00605-016-0876-2
International Nuclear Information System (INIS)
Zhou, Yifan; Lin, Tian Ran; Sun, Yong; Bian, Yangqing; Ma, Lin
2015-01-01
Maintenance optimisation of series–parallel systems is a research topic of practical significance. Nevertheless, a cost-effective maintenance strategy is difficult to obtain due to the large strategy space for maintenance optimisation of such systems. The heuristic algorithm is often employed to deal with this problem. However, the solution obtained by the heuristic algorithm is not always the global optimum and the algorithm itself can be very time consuming. An alternative method based on linear programming is thus developed in this paper to overcome such difficulties by reducing strategy space of maintenance optimisation. A theoretical proof is provided in the paper to verify that the proposed method is at least as effective as the existing methods for strategy space reduction. Numerical examples for maintenance optimisation of series–parallel systems having multistate components and considering both economic dependence among components and multiple-level imperfect maintenance are also presented. The simulation results confirm that the proposed method is more effective than the existing methods in removing inappropriate maintenance strategies of multistate series–parallel systems. - Highlights: • A new method using linear programming is developed to reduce the strategy space. • The effectiveness of the new method for strategy reduction is theoretically proved. • Imperfect maintenance and economic dependence are considered during optimisation
A massively-parallel electronic-structure calculations based on real-space density functional theory
International Nuclear Information System (INIS)
Iwata, Jun-Ichi; Takahashi, Daisuke; Oshiyama, Atsushi; Boku, Taisuke; Shiraishi, Kenji; Okada, Susumu; Yabana, Kazuhiro
2010-01-01
Based on the real-space finite-difference method, we have developed a first-principles density functional program that efficiently performs large-scale calculations on massively-parallel computers. In addition to efficient parallel implementation, we also implemented several computational improvements, substantially reducing the computational costs of O(N^3) operations such as the Gram-Schmidt procedure and subspace diagonalization. Using the program on a massively-parallel computer cluster with a theoretical peak performance of several TFLOPS, we perform electronic-structure calculations for a system consisting of over 10,000 Si atoms, and obtain a self-consistent electronic-structure in a few hundred hours. We analyze in detail the costs of the program in terms of computation and of inter-node communications to clarify the efficiency, the applicability, and the possibility for further improvements.
Parallel magnetic resonance imaging as approximation in a reproducing kernel Hilbert space
International Nuclear Information System (INIS)
Athalye, Vivek; Lustig, Michael; Martin Uecker
2015-01-01
In magnetic resonance imaging data samples are collected in the spatial frequency domain (k-space), typically by time-consuming line-by-line scanning on a Cartesian grid. Scans can be accelerated by simultaneous acquisition of data using multiple receivers (parallel imaging), and by using more efficient non-Cartesian sampling schemes. To understand and design k-space sampling patterns, a theoretical framework is needed to analyze how well arbitrary sampling patterns reconstruct unsampled k-space using receive coil information. As shown here, reconstruction from samples at arbitrary locations can be understood as approximation of vector-valued functions from the acquired samples and formulated using a reproducing kernel Hilbert space with a matrix-valued kernel defined by the spatial sensitivities of the receive coils. This establishes a formal connection between approximation theory and parallel imaging. Theoretical tools from approximation theory can then be used to understand reconstruction in k-space and to extend the analysis of the effects of samples selection beyond the traditional image-domain g-factor noise analysis to both noise amplification and approximation errors in k-space. This is demonstrated with numerical examples. (paper)
Treinish, Lloyd A.; Gough, Michael L.; Wildenhain, W. David
1987-01-01
The capability was developed of rapidly producing visual representations of large, complex, multi-dimensional space and earth sciences data sets via the implementation of computer graphics modeling techniques on the Massively Parallel Processor (MPP) by employing techniques recently developed for typically non-scientific applications. Such capabilities can provide a new and valuable tool for the understanding of complex scientific data, and a new application of parallel computing via the MPP. A prototype system with such capabilities was developed and integrated into the National Space Science Data Center's (NSSDC) Pilot Climate Data System (PCDS) data-independent environment for computer graphics data display to provide easy access to users. While developing these capabilities, several problems had to be solved independently of the actual use of the MPP, all of which are outlined.
Ultra Reliable Closed Loop Life Support for Long Space Missions
Jones, Harry W.; Ewert, Michael K.
2010-01-01
Spacecraft human life support systems can achieve ultra reliability by providing sufficient spares to replace all failed components. The additional mass of spares for ultra reliability is approximately equal to the original system mass, provided that the original system reliability is not too low. Acceptable reliability can be achieved for the Space Shuttle and Space Station by preventive maintenance and by replacing failed units. However, on-demand maintenance and repair requires a logistics supply chain in place to provide the needed spares. In contrast, a Mars or other long space mission must take along all the needed spares, since resupply is not possible. Long missions must achieve ultra reliability, a very low failure rate per hour, since they will take years rather than weeks and cannot be cut short if a failure occurs. Also, distant missions have a much higher mass launch cost per kilogram than near-Earth missions. Achieving ultra reliable spacecraft life support systems with acceptable mass will require a well-planned and extensive development effort. Analysis must determine the reliability requirement and allocate it to subsystems and components. Ultra reliability requires reducing the intrinsic failure causes, providing spares to replace failed components and having "graceful" failure modes. Technologies, components, and materials must be selected and designed for high reliability. Long duration testing is needed to confirm very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The system must be designed, developed, integrated, and tested with system reliability in mind. Maintenance and reparability of failed units must not add to the probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass should start soon since it must be a long term effort.
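The spares argument in this abstract can be made concrete with a back-of-envelope Poisson model (illustrative numbers, not the paper's analysis): if a unit fails as a Poisson process, the probability that k spares cover a mission is the Poisson CDF, and the spares count for a target coverage follows directly.

```python
import math

failure_rate = 1.0 / 8760.0        # assumed: one expected failure per year of operation
mission_hours = 3 * 8760           # assumed: a ~3-year Mars-class mission
lam = failure_rate * mission_hours # expected failures over the mission

def p_covered(k, lam):
    # P(N <= k) for N ~ Poisson(lam): all failures are covered by k spares
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

k = 0
while p_covered(k, lam) < 0.999:   # spares needed for 99.9% coverage of this one unit
    k += 1
print(k, p_covered(k, lam))
```

Even with only three expected failures, high coverage demands several times that many spares, which is the flavor of the mass penalty the abstract describes.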
Parallel symbolic state-space exploration is difficult, but what is the alternative?
Directory of Open Access Journals (Sweden)
Gianfranco Ciardo
2009-12-01
State-space exploration is an essential step in many modeling and analysis problems. Its goal is to find the states reachable from the initial state of a discrete-state model. The state space can be used to answer important questions, e.g., "Is there a dead state?" and "Can N become negative?", or as a starting point for sophisticated investigations expressed in temporal logic. Unfortunately, the state space is often so large that ordinary explicit data structures and sequential algorithms cannot cope, prompting the exploration of (1) parallel approaches using multiple processors, from simple workstation networks to shared-memory supercomputers, to satisfy large memory and runtime requirements and (2) symbolic approaches using decision diagrams to encode the large structured sets and relations manipulated during state-space generation. Both approaches have merits and limitations. Parallel explicit state-space generation is challenging, but almost linear speedup can be achieved; however, the analysis is ultimately limited by the memory and processors available. Symbolic methods are a heuristic that can efficiently encode many, but not all, functions over a structured and exponentially large domain; here the pitfalls are subtler: their performance varies widely depending on the class of decision diagram chosen, the state variable order, and obscure algorithmic parameters. As symbolic approaches are often much more efficient than explicit ones for many practical models, we argue for the need to parallelize symbolic state-space generation algorithms, so that we can realize the advantages of both approaches. This is a challenging endeavor, as the most efficient symbolic algorithm, Saturation, is inherently sequential. We conclude by discussing challenges, efforts, and promising directions toward this goal.
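The explicit state-space generation that this article contrasts with symbolic methods is, at its core, a breadth-first search over reachable states. A minimal sketch on a toy two-counter model (the `seen` set is exactly the explicit data structure that symbolic decision diagrams replace when it no longer fits in memory):

```python
from collections import deque

# Toy discrete-state model: two counters, each incremented modulo 3
# by its own independent event.
def successors(state):
    a, b = state
    yield ((a + 1) % 3, b)      # event 1 fires
    yield (a, (b + 1) % 3)      # event 2 fires

initial = (0, 0)
seen, frontier = {initial}, deque([initial])
while frontier:
    s = frontier.popleft()
    for t in successors(s):
        if t not in seen:
            seen.add(t)
            frontier.append(t)

# "Is there a dead state?" -- a state with no successors
dead_states = [s for s in seen if not any(True for _ in successors(s))]
print(len(seen), dead_states)
```

Here the full 3x3 product space is reachable and no state is dead; in realistic models `seen` grows exponentially with the number of components, which is the state explosion problem the article addresses.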
Exploiting Stabilizers and Parallelism in State Space Generation with the Symmetry Method
DEFF Research Database (Denmark)
Lorentsen, Louise; Kristensen, Lars Michael
2001-01-01
The symmetry method is a main reduction paradigm for alleviating the state explosion problem. For large symmetry groups deciding whether two states are symmetric becomes time expensive due to the apparent high time complexity of the orbit problem. The contribution of this paper is to alleviate the negative impact of the orbit problem by the specification of canonical representatives for equivalence classes of states in Coloured Petri Nets, and by giving algorithms exploiting stabilizers and parallelism for computing the condensed state space.
Nutritional criteria for closed-loop space food systems
Rambaut, P. C.
1980-01-01
The nutritional requirements for Skylab crews are summarized as a data base for long duration spaceflight nutrient requirements. Statistically significant increases in energy consumption were detected after three months, along with CO2/O2 exhalation during exercise and thyroxine level increases. Linoleic acid amounting to 3-4 g/day was found to fulfill all fat requirements, and carbohydrate and protein (amino acid) necessities are discussed, noting that vigorous exercise programs avoid deconditioning which enhances nitrogen loss. Urinary calcium losses continued at a rate 100% above a baseline figure, a condition which ingestion of vitamin D2 did not correct. Projections are given that spaceflights lasting more than eight years will necessitate recycling of human waste for nutrient growth, which can be processed into highly efficient space food with a variety of tastes.
Goedbloed, D J; Czypionka, T; Altmüller, J; Rodriguez, A; Küpfer, E; Segev, O; Blaustein, L; Templeton, A R; Nolte, A W; Steinfartz, S
2017-12-01
The utilization of similar habitats by different species provides an ideal opportunity to identify genes underlying adaptation and acclimatization. Here, we analysed the gene expression of two closely related salamander species: Salamandra salamandra in Central Europe and Salamandra infraimmaculata in the Near East. These species inhabit similar habitat types: 'temporary ponds' and 'permanent streams' during larval development. We developed two species-specific gene expression microarrays, each targeting over 12 000 transcripts, including an overlapping subset of 8331 orthologues. Gene expression was examined for systematic differences between temporary ponds and permanent streams in larvae from both salamander species to establish gene sets and functions associated with these two habitat types. Only 20 orthologues were associated with a habitat in both species, but these orthologues did not show parallel expression patterns across species more than expected by chance. Functional annotation of a set of 106 genes with the highest effect size for a habitat suggested four putative gene function categories associated with a habitat in both species: cell proliferation, neural development, oxygen responses and muscle capacity. Among these high effect size genes was a single orthologue (14-3-3 protein zeta/YWHAZ) that was downregulated in temporary ponds in both species. The emergence of four gene function categories combined with a lack of parallel expression of orthologues (except 14-3-3 protein zeta) suggests that parallel habitat adaptation or acclimatization by larvae from S. salamandra and S. infraimmaculata to temporary ponds and permanent streams is mainly realized by different genes with a converging functionality.
Phase space simulation of collisionless stellar systems on the massively parallel processor
International Nuclear Information System (INIS)
White, R.L.
1987-01-01
A numerical technique for solving the collisionless Boltzmann equation describing the time evolution of a self-gravitating fluid in phase space was implemented on the Massively Parallel Processor (MPP). The code performs calculations for a two-dimensional phase space grid (with one space and one velocity dimension). Some results from calculations are presented. The execution speed of the code is comparable to the speed of a single processor of a Cray-XMP. Advantages and disadvantages of the MPP architecture for this type of problem are discussed. The nearest neighbor connectivity of the MPP array does not pose a significant obstacle. Future MPP-like machines should have much more local memory and easier access to staging memory and disks in order to be effective for this type of problem
Enhanced 2D-DOA Estimation for Large Spacing Three-Parallel Uniform Linear Arrays
Directory of Open Access Journals (Sweden)
Dong Zhang
2018-01-01
Full Text Available. An enhanced two-dimensional direction-of-arrival (2D-DOA) estimation algorithm for large spacing three-parallel uniform linear arrays (ULAs) is proposed in this paper. Firstly, we use the propagator method (PM) to get a highly accurate but ambiguous estimate of the directional cosines. Then, we use the relationship between the directional cosines to eliminate the ambiguity. This algorithm not only makes use of the elements of the three-parallel ULAs but also exploits the connection between the directional cosines to improve estimation accuracy. Besides, it retains satisfactory estimation performance when the elevation angle is between 70° and 90°, and it can automatically pair the estimated azimuth and elevation angles. Furthermore, it has low complexity, requiring no eigenvalue decomposition (EVD) or singular value decomposition (SVD) of the covariance matrix. Simulation results demonstrate the effectiveness of our proposed algorithm.
Fast MR image reconstruction for partially parallel imaging with arbitrary k-space trajectories.
Ye, Xiaojing; Chen, Yunmei; Lin, Wei; Huang, Feng
2011-03-01
Both acquisition and reconstruction speed are crucial for magnetic resonance (MR) imaging in clinical applications. In this paper, we present a fast reconstruction algorithm for SENSE in partially parallel MR imaging with arbitrary k-space trajectories. The proposed method is a combination of variable splitting, the classical penalty technique and the optimal gradient method. Variable splitting and the penalty technique reformulate the SENSE model with sparsity regularization as an unconstrained minimization problem, which can be solved by alternating two simple minimizations: One is the total variation and wavelet based denoising that can be quickly solved by several recent numerical methods, whereas the other one involves a linear inversion which is solved by the optimal first order gradient method in our algorithm to significantly improve the performance. Comparisons with several recent parallel imaging algorithms indicate that the proposed method significantly improves the computation efficiency and achieves state-of-the-art reconstruction quality.
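The variable-splitting-plus-penalty idea in this abstract can be sketched on a toy ℓ1-regularized least-squares problem. This is an illustrative stand-in, not the paper's SENSE model (which uses total variation and wavelet regularization with an optimal first-order gradient method); all names and parameter values here are invented:

```python
import numpy as np

def penalty_splitting(A, b, lam=0.1, mu=10.0, iters=300):
    """min ||Ax - b||^2 + lam*||x||_1 via the split x = z plus a quadratic
    penalty mu*||x - z||^2: alternate a cheap soft-threshold step in z (the
    denoising sub-step) and a linear solve in x (the inversion sub-step)."""
    n = A.shape[1]
    x = np.zeros(n)
    lhs = A.T @ A + mu * np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        # z-step: proximal operator of lam*||.||_1, i.e. soft thresholding
        z = np.sign(x) * np.maximum(np.abs(x) - lam / (2 * mu), 0.0)
        # x-step: quadratic problem (A^T A + mu I) x = A^T b + mu z
        x = np.linalg.solve(lhs, Atb + mu * z)
    return x

# toy problem: with A = I the exact minimizer is soft thresholding of b
x_hat = penalty_splitting(np.eye(3), np.array([1.0, 0.01, -0.5]))
```

With a fixed finite penalty weight mu the splitting converges to a slightly biased solution, which is why such schemes often continue with more sophisticated updates than this sketch.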
Generalized Polar Decompositions for Closed Operators in Hilbert Spaces and Some Applications
Gesztesy, Fritz; Malamud, Mark; Mitrea, Marius; Naboko, Serguei
2008-01-01
We study generalized polar decompositions of densely defined, closed linear operators in Hilbert spaces and provide some applications to relatively (form) bounded and relatively (form) compact perturbations of self-adjoint, normal, and m-sectorial operators.
A Self Consistent Multiprocessor Space Charge Algorithm that is Almost Embarrassingly Parallel
International Nuclear Information System (INIS)
Nissen, Edward; Erdelyi, B.; Manikonda, S.L.
2012-01-01
We present a space charge code that is self consistent, massively parallelizable, and requires very little communication between computer nodes, making the calculation almost embarrassingly parallel. This method is implemented in the code COSY Infinity, where the differential algebras used in this code are important to the algorithm's proper functioning. The method works by calculating the self consistent space charge distribution using the statistical moments of the test particles, and converting them into polynomial series coefficients. These coefficients are combined with differential algebraic integrals to form the potential and electric fields. The result is a map which contains the effects of space charge. This method allows for massive parallelization since its statistics-based solver does not require any binning of particles, and only requires passing a vector containing the partial sums of the statistical moments between the different nodes. All other calculations are done independently. The resulting maps can be used to analyze the system using normal form analysis, as well as to advance particles in numbers and at speeds that were previously impossible.
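The communication pattern described above, where each node reduces its particles to a short vector of statistical-moment partial sums, can be illustrated with a small sketch. This is a generic moments reduction, not the COSY Infinity implementation; the 4-node split is simulated in one process:

```python
import numpy as np

def partial_moment_sums(x, max_order=4):
    """Per-node partial sums of raw moments sum(x**k), k = 0..max_order.
    This short vector is the only data a node needs to communicate."""
    return np.array([np.sum(x**k) for k in range(max_order + 1)])

def combine_moments(partials):
    """Reduce the per-node vectors into global mean and variance."""
    total = np.sum(partials, axis=0)   # element-wise sum across nodes
    n = total[0]                       # sum of x**0 is the particle count
    raw = total / n                    # raw moments <x**k>
    mean = raw[1]
    var = raw[2] - mean**2             # central second moment
    return mean, var

rng = np.random.default_rng(0)
particles = rng.normal(2.0, 0.5, size=40000)
chunks = np.split(particles, 4)        # pretend the particles live on 4 nodes
partials = [partial_moment_sums(c) for c in chunks]
mean, var = combine_moments(partials)
```

The reduction is exact regardless of how the particles are distributed, which is what makes the scheme nearly embarrassingly parallel.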
STEP: Self-supporting tailored k-space estimation for parallel imaging reconstruction.
Zhou, Zechen; Wang, Jinnan; Balu, Niranjan; Li, Rui; Yuan, Chun
2016-02-01
A new subspace-based iterative reconstruction method, termed Self-supporting Tailored k-space Estimation for Parallel imaging reconstruction (STEP), is presented and evaluated in comparison to the existing autocalibrating method SPIRiT and calibrationless method SAKE. In STEP, two tailored schemes including k-space partition and basis selection are proposed to promote spatially variant signal subspace and incorporated into a self-supporting structured low rank model to enforce properties of locality, sparsity, and rank deficiency, which can be formulated into a constrained optimization problem and solved by an iterative algorithm. Simulated and in vivo datasets were used to investigate the performance of STEP in terms of overall image quality and detail structure preservation. The advantage of STEP on image quality is demonstrated by retrospectively undersampled multichannel Cartesian data with various patterns. Compared with SPIRiT and SAKE, STEP can provide more accurate reconstruction images with less residual aliasing artifacts and reduced noise amplification in simulation and in vivo experiments. In addition, STEP has the capability of combining compressed sensing with arbitrary sampling trajectory. Using k-space partition and basis selection can further improve the performance of parallel imaging reconstruction with or without calibration signals. © 2015 Wiley Periodicals, Inc.
Spurious results from Fourier analysis of data with closely spaced frequencies
International Nuclear Information System (INIS)
Loumos, G.L.; Deeming, T.J.
1978-01-01
It is shown how erroneous results can occur using some period-finding methods, such as Fourier analysis, on data containing closely spaced frequencies. The frequency spacing accurately resolvable with data of length T is increased from the standard value of about 1/T quoted in the literature to approximately 1.5/T. (Auth.)
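The resolution limit discussed in this abstract can be reproduced numerically: two equal-amplitude sinusoids separated by well over 1/T give two distinct periodogram peaks, while a separation of 0.5/T merges them into one. This is a minimal NumPy illustration, not the authors' analysis:

```python
import numpy as np

def n_peaks(f1, f2, T=100.0, n=1000):
    """Count resolved periodogram peaks for two equal-amplitude sinusoids
    observed over a record of length T."""
    t = np.linspace(0.0, T, n, endpoint=False)
    y = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
    p = np.abs(np.fft.rfft(y))**2
    thresh = 0.5 * p.max()
    # strict local maxima above half the tallest peak
    peaks = [i for i in range(1, len(p) - 1)
             if p[i] > p[i - 1] and p[i] > p[i + 1] and p[i] > thresh]
    return len(peaks)
```

With T = 100, a spacing of 3/T (f2 = 0.23 vs f1 = 0.2) is cleanly resolved, whereas a spacing of 0.5/T (f2 = 0.205) yields a single merged peak at an intermediate frequency — the kind of spurious result the paper warns about.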
Reinertsen, Gloria M.
A study compared performances on a test of selective auditory attention between students educated in open-space versus closed classroom environments. An open-space classroom environment was defined as having no walls separating it from hallways or other classrooms. It was hypothesized that the incidence of auditory figure-ground (ability to focus…
Evaluation of the Intel iWarp parallel processor for space flight applications
Hine, Butler P., III; Fong, Terrence W.
1993-01-01
The potential of a DARPA-sponsored advanced processor, the Intel iWarp, for use in future SSF Data Management Systems (DMS) upgrades is evaluated through integration into the Ames DMS testbed and applications testing. The iWarp is a distributed, parallel computing system well suited for high performance computing applications such as matrix operations and image processing. The system architecture is modular, supports systolic and message-based computation, and is capable of providing massive computational power in a low-cost, low-power package. As a consequence, the iWarp offers significant potential for advanced space-based computing. This research seeks to determine the iWarp's suitability as a processing device for space missions. In particular, the project focuses on evaluating the ease of integrating the iWarp into the SSF DMS baseline architecture and the iWarp's ability to support computationally stressing applications representative of SSF tasks.
PAREMD: A parallel program for the evaluation of momentum space properties of atoms and molecules
Meena, Deep Raj; Gadre, Shridhar R.; Balanarayan, P.
2018-03-01
The present work describes a code for evaluating the electron momentum density (EMD), its moments and the associated Shannon information entropy for a multi-electron molecular system. The code works specifically for electronic wave functions obtained from traditional electronic structure packages such as GAMESS and GAUSSIAN. For the momentum space orbitals, the general expression for Gaussian basis sets in position space is analytically Fourier transformed to momentum space Gaussian basis functions. The molecular orbital coefficients of the wave function are taken as an input from the output file of the electronic structure calculation. The analytic expressions of EMD are evaluated over a fine grid and the accuracy of the code is verified by a normalization check and a numerical kinetic energy evaluation which is compared with the analytic kinetic energy given by the electronic structure package. Apart from electron momentum density, electron density in position space has also been integrated into this package. The program is written in C++ and is executed through a Shell script. It is also tuned for multicore machines with shared memory through OpenMP. The program has been tested for a variety of molecules and correlated methods such as CISD, Møller-Plesset second order (MP2) theory and density functional methods. For correlated methods, the PAREMD program uses natural spin orbitals as an input. The program has been benchmarked for a variety of Gaussian basis sets for different molecules showing a linear speedup on a parallel architecture.
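The key analytic step mentioned above — a position-space Gaussian transforms to a momentum-space Gaussian — can be checked numerically with a direct quadrature. This is a toy verification under the convention φ(p) = ∫ f(x) e^(-ipx) dx, not the PAREMD code:

```python
import numpy as np

alpha = 0.7                        # Gaussian exponent (illustrative value)
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
f = np.exp(-alpha * x**2)          # position-space Gaussian basis function

def ft(p):
    """phi(p) = integral f(x) exp(-i p x) dx by a simple Riemann sum;
    the tails at |x| = 20 are numerically zero, so this is accurate."""
    return np.sum(f * np.exp(-1j * p * x)) * dx

p = 1.3
numeric = ft(p)
# analytic Fourier transform of a Gaussian: sqrt(pi/alpha) exp(-p^2 / (4 alpha))
analytic = np.sqrt(np.pi / alpha) * np.exp(-p**2 / (4.0 * alpha))
```

The agreement to near machine precision reflects why the momentum-space basis can be written analytically rather than transformed numerically in such codes.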
International Nuclear Information System (INIS)
Hawryluk, A.; Botros, K.K.
2008-01-01
Expeller performance has been formulated in terms of its capability to create suction pressure at the throat. This formulation has been used to assess the effectiveness of evacuating combustible gases from a pipeline section from one end using dual expellers mounted in parallel on two adjacent blow-down stacks. A general formulation was derived to address any situation of asymmetry in the stack resistance, asymmetry in the expellers' power, as well as overall pipeline resistance to suction flow. Solutions of the closed-form equations were obtained and presented on performance graphs showing the ratio of the suction flow using dual expellers to that using either one in single mode. It was found that there are conditions at which expelling with dual expellers exceeds that of either expeller operating alone. It was also shown that when asymmetric expellers are used, where one expeller is more powerful than the other, the benefit of using two expellers is realized only up to a limiting degree of asymmetry, beyond which the weaker expeller could be stalled and then reverse flow
Analytical model for vibration prediction of two parallel tunnels in a full-space
He, Chao; Zhou, Shunhua; Guo, Peijun; Di, Honggui; Zhang, Xiaohui
2018-06-01
This paper presents a three-dimensional analytical model for the prediction of ground vibrations from two parallel tunnels embedded in a full-space. The two tunnels are modelled as cylindrical shells of infinite length, and the surrounding soil is modelled as a full-space with two cylindrical cavities. A virtual interface is introduced to divide the soil into a right layer and a left layer. By transforming the cylindrical waves into plane waves, the solution of wave propagation in the full-space with two cylindrical cavities is obtained. The transformations from plane waves back to cylindrical waves are then used to satisfy the boundary conditions on the tunnel-soil interfaces. The proposed model provides a highly efficient tool to predict the ground vibration induced by underground railways, which accounts for the dynamic interaction between neighbouring tunnels. Analysis of the vibration fields produced over a range of frequencies and soil properties is conducted. When the distance between the two tunnels is smaller than three times the tunnel diameter, the interaction between neighbouring tunnels is highly significant, at times on the order of 20 dB. It is therefore necessary to consider the interaction between neighbouring tunnels for the prediction of ground vibrations induced by underground railways.
A parallel implementation of particle tracking with space charge effects on an INTEL iPSC/860
International Nuclear Information System (INIS)
Chang, L.; Bourianoff, G.; Cole, B.; Machida, S.
1993-05-01
Particle-tracking simulation is one of the scientific applications that is well suited to parallel computation. At the Superconducting Super Collider, it has been theoretically and empirically demonstrated that particle tracking on a designed lattice can achieve very high parallel efficiency on a MIMD Intel iPSC/860 machine. The key to such success is the realization that the particles can be tracked independently without considering their interaction. The perfectly parallel nature of particle tracking is broken if the interaction effects between particles are included. The space charge introduces an electromagnetic force that will affect the motion of tracked particles in 3-D space. For accurate modeling of the beam dynamics with space charge effects, one needs to solve three-dimensional Maxwell field equations, usually by a particle-in-cell (PIC) algorithm. This requires each particle to communicate with its neighbor grids to compute the momentum changes at each time step. It is expected that the 3-D PIC method will degrade the parallel efficiency of a particle-tracking implementation on any parallel computer. In this paper, we describe an efficient scheme for implementing particle tracking with space charge effects on an Intel iPSC/860 machine. Experimental results show that a parallel efficiency of 75% can be obtained.
Use of Parallel Micro-Platform for the Simulation the Space Exploration
Velasco Herrera, Victor Manuel; Velasco Herrera, Graciela; Rosano, Felipe Lara; Rodriguez Lozano, Salvador; Lucero Roldan Serrato, Karen
The purpose of this work is to create a parallel micro-platform that simulates the virtual movements of space exploration in 3D. One of the innovations presented in this design is the application of a lever mechanism for the transmission of movement. The development of such a robot is a challenging task, very different from that of industrial manipulators due to a totally different set of target requirements. This work presents the computer-aided study and simulation of the movement of this parallel manipulator. The model was developed using the computer-aided design platform Unigraphics, in which the geometric modelling of each component and of the final assembly (CAD) was carried out, files were generated for the computer-aided manufacture (CAM) of each piece, and the kinematic simulation of the system was performed, evaluating different driving schemes. We used the MATLAB aerospace toolbox and created an adaptive control module to simulate the system.
SiGN-SSM: open source parallel software for estimating gene networks with state space models.
Tamada, Yoshinori; Yamaguchi, Rui; Imoto, Seiya; Hirose, Osamu; Yoshida, Ryo; Nagasaki, Masao; Miyano, Satoru
2011-04-15
SiGN-SSM is an open-source gene network estimation software able to run in parallel on PCs and massively parallel supercomputers. The software estimates a state space model (SSM), a statistical dynamic model suitable for analyzing short time and/or replicated time series gene expression profiles. SiGN-SSM implements a novel parameter constraint effective in stabilizing the estimated models. Also, by using a supercomputer, it is able to determine the gene network structure by a statistical permutation test in a practical time. SiGN-SSM is applicable not only to analyzing temporal regulatory dependencies between genes, but also to extracting the differentially regulated genes from time series expression profiles. SiGN-SSM is distributed under the GNU Affero General Public Licence (GNU AGPL) version 3 and can be downloaded at http://sign.hgc.jp/signssm/. Pre-compiled binaries for some architectures are available in addition to the source code. Pre-installed binaries are also available on the Human Genome Center supercomputer system. The online manual and the supplementary information for SiGN-SSM are available on our web site. tamada@ims.u-tokyo.ac.jp.
A Parallel Strategy for High-speed Interpolation of CNC Using Data Space Constraint Method
Directory of Open Access Journals (Sweden)
Shuan-qiang Yang
2013-12-01
A high-speed interpolation scheme using parallel computing is proposed in this paper. The interpolation method is divided into two tasks, namely, a rough task executing on the PC and a fine task on the I/O card. During the interpolation procedure, double buffers are constructed to exchange the interpolation data between the two tasks. The data space constraint method is then adopted to ensure reliable and continuous data communication between the two buffers. Therefore, the proposed scheme can be realized on common distributions of operating systems without real-time performance, while high-speed and high-precision motion control can be achieved as well. Finally, an experiment is conducted on the self-developed CNC platform, and the test results verify the proposed method.
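The double-buffer handover between the rough (PC-side) task and the fine (I/O-card) task can be sketched as follows. This is a generic single-process sketch of the buffering discipline only, not the authors' implementation; the block sizes and names are invented:

```python
class DoubleBuffer:
    """Two buffers: the writer fills one while the reader drains the other;
    swap() hands a completed block to the reader side, so the reader never
    sees a partially written block."""
    def __init__(self):
        self._write, self._read = [], []

    def push(self, item):
        self._write.append(item)

    def swap(self):
        # hand the filled buffer to the reader, recycle the drained one
        self._write, self._read = [], self._write

    def drain(self):
        out, self._read = self._read, []
        return out

buf = DoubleBuffer()
received = []
for block in range(3):
    for i in range(4):
        buf.push((block, i))       # rough interpolator producing segments
    buf.swap()
    received.extend(buf.drain())   # fine interpolator consuming a whole block
```

In the real scheme the swap would be gated by the data space constraint so that neither side outruns the other.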
Computations on the massively parallel processor at the Goddard Space Flight Center
Strong, James P.
1991-01-01
Described are four significant algorithms implemented on the massively parallel processor (MPP) at the Goddard Space Flight Center. Two are in the area of image analysis. Of the other two, one is a mathematical simulation experiment and the other deals with the efficient transfer of data between distantly separated processors in the MPP array. The first algorithm presented is the automatic determination of elevations from stereo pairs. The second algorithm solves mathematical logistic equations capable of producing both ordered and chaotic (or random) solutions. This work can potentially lead to the simulation of artificial life processes. The third algorithm is the automatic segmentation of images into reasonable regions based on some similarity criterion, while the fourth is an implementation of a bitonic sort of data which significantly overcomes the nearest neighbor interconnection constraints on the MPP for transferring data between distant processors.
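The fourth algorithm's bitonic sort is worth a concrete sketch: every compare-exchange in a merge stage pairs elements a fixed distance apart, which is what lets a lockstep array like the MPP move data between distant processors in regular stages. A plain Python version for illustration (the MPP implementation would map each compare-exchange onto the processor grid):

```python
def bitonic_sort(a, ascending=True):
    """Recursive bitonic sort; len(a) must be a power of two."""
    if len(a) <= 1:
        return list(a)
    half = len(a) // 2
    # build a bitonic sequence: first half ascending, second half descending
    first = bitonic_sort(a[:half], True)
    second = bitonic_sort(a[half:], False)
    return _bitonic_merge(first + second, ascending)

def _bitonic_merge(a, ascending):
    """Merge a bitonic sequence; each pass compares elements half apart."""
    if len(a) <= 1:
        return list(a)
    half = len(a) // 2
    a = list(a)
    for i in range(half):
        if (a[i] > a[i + half]) == ascending:
            a[i], a[i + half] = a[i + half], a[i]
    return _bitonic_merge(a[:half], ascending) + _bitonic_merge(a[half:], ascending)
```

For n elements the network performs O(n log² n) compare-exchanges in O(log² n) lockstep stages, independent of the data.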
Exploiting Stabilizers and Parallelism in State Space Generation with the Symmetry Method
DEFF Research Database (Denmark)
Lorentsen, Louise; Kristensen, Lars Michael
2001-01-01
The symmetry method is a main reduction paradigm for alleviating the state explosion problem. For large symmetry groups, deciding whether two states are symmetric becomes time expensive due to the apparent high time complexity of the orbit problem. The contribution of this paper is to alleviate the negative impact of the orbit problem by the specification of canonical representatives for equivalence classes of states in Coloured Petri Nets, and by giving algorithms exploiting stabilizers and parallelism for computing the condensed state space.
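The canonical-representative idea can be illustrated in a few lines: take, as the representative of each orbit, the lexicographic minimum over the symmetry group, so two states are symmetric exactly when their representatives coincide. This is a generic sketch, not the Coloured Petri Net algorithm; the states and group here are invented:

```python
from itertools import permutations

def canonical(state, group):
    """Canonical representative: lexicographic minimum over the orbit of
    `state` under the permutation group `group`."""
    return min(tuple(state[g[i]] for i in range(len(state)))
               for g in group)

# symmetry group: all permutations of 3 interchangeable processes (S3)
group = list(permutations(range(3)))
states = [(0, 1, 1), (1, 0, 1), (1, 1, 0), (0, 0, 1), (2, 1, 0)]
# condensed state space: one representative per orbit
condensed = {canonical(s, group) for s in states}
```

The brute-force minimum costs |G| permutations per state; the paper's stabilizer-based algorithms exist precisely to avoid this cost for large groups.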
Unified Lambert Tool for Massively Parallel Applications in Space Situational Awareness
Woollands, Robyn M.; Read, Julie; Hernandez, Kevin; Probe, Austin; Junkins, John L.
2018-03-01
This paper introduces a parallel-compiled tool that combines several of our recently developed methods for solving the perturbed Lambert problem using modified Chebyshev-Picard iteration. This tool (unified Lambert tool) consists of four individual algorithms, each of which is unique and better suited for solving a particular type of orbit transfer. The first is a Keplerian Lambert solver, which is used to provide a good initial guess (warm start) for solving the perturbed problem. It is also used to determine the appropriate algorithm to call for solving the perturbed problem. The arc length or true anomaly angle spanned by the transfer trajectory is the parameter that governs the automated selection of the appropriate perturbed algorithm, and is based on the respective algorithm convergence characteristics. The second algorithm solves the perturbed Lambert problem using the modified Chebyshev-Picard iteration two-point boundary value solver. This algorithm does not require a Newton-like shooting method and is the most efficient of the perturbed solvers presented herein, however the domain of convergence is limited to about a third of an orbit and is dependent on eccentricity. The third algorithm extends the domain of convergence of the modified Chebyshev-Picard iteration two-point boundary value solver to about 90% of an orbit, through regularization with the Kustaanheimo-Stiefel transformation. This is the second most efficient of the perturbed set of algorithms. The fourth algorithm uses the method of particular solutions and the modified Chebyshev-Picard iteration initial value solver for solving multiple revolution perturbed transfers. This method does require "shooting" but differs from Newton-like shooting methods in that it does not require propagation of a state transition matrix. The unified Lambert tool makes use of the General Mission Analysis Tool and we use it to compute thousands of perturbed Lambert trajectories in parallel on the Space Situational
Precision multiloop (PM) design with space closing circles for lingual orthodontics
Directory of Open Access Journals (Sweden)
Mugdha P Mankar
2016-01-01
The practice of orthodontics has benefitted immensely, and continues to advance, through the use of multiple-loop wires designed for the correction of dentoalveolar malocclusions. The present discussion provides an insight into a simple, frictionless biomechanical concept of anterior space closure in lingual orthodontics by means of a precision multiloop design with incorporated space closing circles. A multiple-loop wire design is demonstrated in which the entire interbracket distance is used as loop area.
Precision multiloop (PM) design with space closing circles for lingual orthodontics
Mugdha P Mankar; Achint Chachada; Harish Atram; Avanti Kulkarni
2016-01-01
Closed Crawl Space Performance: Proof of Concept in the Production Builder Marketplace
Energy Technology Data Exchange (ETDEWEB)
Malkin-Weber, Melissa; Dastur, Cyrus; Mauceri, Maria; Hannas, Benjamin
2008-10-30
This overview is intended to be a very concise, limited summary of the key project activities discussed in the detailed report that follows. Due to the large scope of this project, the detailed report is broken into three individually titled sections. Each section repeats key background information, with the goal that the sections will eventually stand alone as complete reports on the major activities of the project. The information presented herein comes from ongoing research, so please note that all observations, findings and recommendations presented are preliminary and subject to change in the future. We invite and welcome your comments and suggestions for improving the project. Advanced Energy completed its first jointly-funded crawl space research project with the Department of Energy in 2005. That project, funded under award number DE-FC26-00NT40995 and titled 'A Field Study Comparison of the Energy and Moisture Performance Characteristics of Ventilated Versus Sealed Crawl Spaces in the South' demonstrated the substantial energy efficiency and moisture management benefits that result from using properly closed crawl space foundations for residential construction instead of traditional wall vented crawl space foundations. Two activities of this first project included (1) an assessment of ten existing homes to document commonly observed energy and moisture failures associated with wall-vented crawl space foundations and (2) a detailed literature review that documented both the history of closed crawl space research and the historical lack of scientific justification for building code requirements for crawl space ventilation. The most valuable activity of the 2005 project proved to be the field demonstration of various closed crawl space techniques, which were implemented in a set of twelve small (1040 square feet), simply designed homes in eastern North Carolina. These homes had matched envelope, mechanical and architectural designs, and comparable
Saccone, Elizabeth J; Szpak, Ancret; Churches, Owen; Nicholls, Michael E R
2018-01-01
Research suggests that the human brain codes manipulable objects as possibilities for action, or affordances, particularly objects close to the body. Near-body space is not only a zone for body-environment interaction but also is socially relevant, as we are driven to preserve our near-body, personal space from others. The current, novel study investigated how close proximity of a stranger modulates visuomotor processing of object affordances in shared, social space. Participants performed a behavioural object recognition task both alone and with a human confederate. All object images were in participants' reachable space but appeared relatively closer to the participant or the confederate. Results revealed when participants were alone, objects in both locations produced an affordance congruency effect but when the confederate was present, only objects nearer the participant elicited the effect. Findings suggest space is divided between strangers to preserve independent near-body space boundaries, and in turn this process influences motor coding for stimuli within that social space. To demonstrate that this visuomotor modulation represents a social phenomenon, rather than a general, attentional effect, two subsequent experiments employed nonhuman joint conditions. Neither a small, Japanese, waving cat statue (Experiment 2) nor a metronome (Experiment 3) modulated the affordance effect as in Experiment 1. These findings suggest a truly social explanation of the key interaction from Experiment 1. This study represents an important step toward understanding object affordance processing in real-world, social contexts and has implications broadly across fields of social action and cognition, and body space representation.
Transverse and longitudinal coupled bunch instabilities in trains of closely spaced bunches
International Nuclear Information System (INIS)
Thompson, K.A.; Ruth, R.D.
1989-03-01
Damping rings for the next generation of linear colliders may need to contain several bunch trains within which the bunches are quite closely spaced (1 or 2 RF wavelengths). Methods are presented for studying the transverse and longitudinal coupled bunch instabilities, applicable to this problem and to other cases in which the placement of the bunches is not necessarily symmetric. 5 refs., 1 fig.
Design of triads for probing the direct through space energy transfers in closely spaced assemblies.
Camus, Jean-Michel; Aly, Shawkat M; Fortin, Daniel; Guilard, Roger; Harvey, Pierre D
2013-08-05
Using a selective stepwise Suzuki cross-coupling reaction, two trimers built on three different chromophores were prepared. These trimers exhibit a D(^)A1-A2 structure where the donor D (octa-β-alkyl zinc(II)porphyrin, either as the diethylhexamethyl, 10a, or tetraethyltetramethyl, 10b, derivative) transfers the S1 energy through space to two different acceptors: di(4-ethylbenzene) zinc(II)porphyrin (A1; acceptor 1), placed cofacial with D, and the corresponding free base (A2; acceptor 2), which is meso-meso-linked with A1. This structural design allows the comparison of two series of assemblies, 9a,b (D(^)A1) with 10a,b (D(^)A1-A2), for the evaluation of the S1 energy transfer for the global process D*→A2 in the trimers. From the comparison of the fluorescence decays of D, the rates of through-space energy transfer for 10a,b (kET ≈ 6.4 × 10^9 s^-1 (10a), 5.9 × 10^9 s^-1 (10b)) and those for the corresponding cofacial D(^)A1 systems 9a,b (kET ≈ 5.0 × 10^9 s^-1 (9a), 4.7 × 10^9 s^-1 (9b)) provide an estimate of kET for the direct through-space D*→A2 process (i.e., kET(D(^)A1-A2) - kET(D(^)A1) = kET(D*→A2) ∼ 1 × 10^9 s^-1). This channel of relaxation represents ∼15% of kET for D*→A1.
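The rate subtraction quoted in the abstract can be checked directly. The rates are given to two significant figures, and the paper rounds the resulting direct channel to ∼1 × 10^9 s^-1 before forming the ∼15% fraction:

```python
# quoted S1 energy-transfer rates, in s^-1
k_trimer = {"10a": 6.4e9, "10b": 5.9e9}   # D(^)A1-A2 assemblies
k_dyad = {"9a": 5.0e9, "9b": 4.7e9}       # cofacial D(^)A1 references

# direct D* -> A2 channel: kET(D(^)A1-A2) - kET(D(^)A1)
k_direct = {
    "a": k_trimer["10a"] - k_dyad["9a"],  # 1.4e9 s^-1
    "b": k_trimer["10b"] - k_dyad["9b"],  # 1.2e9 s^-1
}

# with the channel rounded to ~1e9 s^-1, its share of the total for 10a:
fraction = 1.0e9 / k_trimer["10a"]        # ~0.156, i.e. the quoted ~15%
```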
Energy Technology Data Exchange (ETDEWEB)
Gianluca, Longoni; Alireza, Haghighat [Florida University, Nuclear and Radiological Engineering Department, Gainesville, FL (United States)
2003-07-01
In recent years, the SP_L (simplified spherical harmonics) equations have received renewed interest for the simulation of nuclear systems. We have derived the SP_L equations starting from the even-parity form of the S_N equations. The SP_L equations form a system of (L+1)/2 second-order partial differential equations that can be solved with standard iterative techniques such as the Conjugate Gradient (CG) method. We discretized the SP_L equations with the finite-volume approach in a 3-D Cartesian space. We developed a new general 3-D code, Pensp_L (Parallel Environment Neutral-particle SP_L). Pensp_L solves both fixed source and criticality eigenvalue problems. In order to optimize the memory management, we implemented Compressed Diagonal Storage (CDS) to store the SP_L matrices. Pensp_L includes parallel algorithms for space and moment domain decomposition. The computational load is distributed on different processors using a mapping function, which maps the 3-D Cartesian space and moments onto processors. The code is written in Fortran 90 using the Message Passing Interface (MPI) libraries for the parallel implementation of the algorithm. The code has been tested on the Pcpen cluster, and the parallel performance has been assessed in terms of speed-up and parallel efficiency. (author)
International Nuclear Information System (INIS)
Liu, H.
1996-01-01
Computer simulations using the multi-particle code PARMELA with a three-dimensional point-by-point space charge algorithm have turned out to be very helpful in supporting injector commissioning and operations at Thomas Jefferson National Accelerator Facility (Jefferson Lab, formerly called CEBAF). However, this algorithm, which defines a typical N^2 problem in CPU time scaling, is very time-consuming when N, the number of macro-particles, is large. Therefore, it is attractive to use massively parallel processors (MPPs) to speed up the simulations. Motivated by this, the authors modified the space charge subroutine for using the MPPs of the Cray T3D. The techniques used to parallelize and optimize the code on the T3D are discussed in this paper. The performance of the code on the T3D is examined in comparison with a Cray C90 parallel vector processing supercomputer and an HP 735/15 high-end workstation.
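The N^2 point-by-point kernel that dominates the CPU time looks roughly like this; the outer particle loop is the part that gets distributed across processors. This is a generic illustrative kernel with invented names, not the PARMELA subroutine:

```python
import numpy as np

def pairwise_field(pos, q=1.0, eps=1e-3):
    """Direct point-to-point field on each macro-particle: O(N^2) pairs,
    the cost scaling that motivates moving to an MPP. `eps` softens the
    1/r^2 singularity for nearly coincident macro-particles."""
    n = len(pos)
    e = np.zeros_like(pos)
    for i in range(n):                 # this outer loop is what gets
        r = pos[i] - pos               # distributed across processors
        d2 = np.sum(r * r, axis=1) + eps**2
        d2[i] = np.inf                 # skip self-interaction
        # field from all other charges, pointing away from them (repulsion)
        e[i] = q * np.sum(r / d2[:, None]**1.5, axis=0)
    return e

# two-particle sanity check: equal and opposite fields along the x axis
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
e = pairwise_field(pos)
```

Each particle's row of work is independent of the others, so the loop parallelizes cleanly once every node holds a copy of the positions.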
Flow and heat transfer in parallel channel attached with equally-spaced ribs, 2
International Nuclear Information System (INIS)
Kunugi, Tomoaki; Takizuka, Takakazu
1980-09-01
Using a computer code for the analysis of the flow and heat transfer in a parallel channel attached with equally-spaced ribs, calculations are performed for a pitch to rib-width ratio of 7 : 1, a rib-width to rib-height ratio of 2 : 1 and a channel-height to rib-height ratio of 3 : 1. Assuming that the fluid properties and the heat flux at the wall of this channel are constant, characteristics of the flow and heat transfer are analyzed in the range of Reynolds numbers from 10 to 250. The following results are obtained: (1) The separation region behind a rib grows downstream with the increase of Reynolds number. (2) The pressure drop of the ribbed channel is greater than that of the smooth channel, and increases as Reynolds number increases. (3) The mean Nusselt number of the ribbed channel is about 10 - 11 at the upper wall and about 7.5 at the lower wall in the range of Reynolds numbers from 10 to 250. (author)
Rainer, Löwen
2017-01-01
We prove that the automorphism group of a topological parallelism on real projective 3-space is compact. In a preceding article it was proved that at least the connected component of the identity is compact. The present proof does not depend on that earlier result.
DEFF Research Database (Denmark)
Knecht, Stefan; Jensen, Hans Jørgen Aagaard; Fleig, Timo
2008-01-01
We present a parallel implementation of a string-driven general active space configuration interaction program for nonrelativistic and scalar-relativistic electronic-structure calculations. The code has been modularly incorporated in the DIRAC quantum chemistry program package. The implementation...
Parallel field line and stream line tracing algorithms for space physics applications
Toth, G.; de Zeeuw, D.; Monostori, G.
2004-05-01
Field line and stream line tracing is required in various space physics applications, such as the coupling of the global magnetosphere and inner magnetosphere models, the coupling of the solar energetic particle and heliosphere models, or the modeling of comets, where the multispecies chemical equations are solved along stream lines of a steady state solution obtained with a single-fluid MHD model. Tracing a vector field is an inherently serial process, which is difficult to parallelize. This is especially true when the data corresponding to the vector field are distributed over a large number of processors. We designed algorithms for the various applications which scale well to a large number of processors. In the first algorithm the computational domain is divided into blocks, each block residing on a single processor. The algorithm follows the vector field inside the blocks and calculates a mapping of the block surfaces. The blocks communicate the values at the coinciding surfaces, and the results are interpolated. Finally all block surfaces are defined and values inside the blocks are obtained. In the second algorithm all processors start integrating along the vector field inside the accessible volume. When a field line leaves the local subdomain, its position and other information are stored in a buffer. Periodically the processors exchange the buffers and continue integration of the field lines until they reach a boundary, at which point the results are sent back to the originating processor. Efficiency is achieved by a careful phasing of computation and communication. In the third algorithm the results of a steady state simulation are stored on a hard drive, with the vector field contained in blocks. All processors read in all the grid and vector field data and the stream lines are integrated in parallel. If a stream line enters a block which has already been integrated, the results can be interpolated. By a clever ordering of the blocks the execution speed can be
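The buffered hand-off of the second algorithm can be sketched in a few lines; here two "processors" own halves of a domain split at x = 0.5 and exchange buffered line positions (a toy uniform field and serial buffer exchange, not the MHD solution or real message passing):

```python
import numpy as np

def trace_with_handoff(seeds, v, owner, step=0.01, x_end=1.0):
    """Buffered hand-off tracing: each 'processor' owns one subdomain.
    A line is integrated while it stays local; on leaving, its position is
    queued in the new owner's buffer (the buffer exchange). Tracing of a
    line ends at the outer boundary x >= x_end."""
    buffers = {0: [], 1: []}
    done = []
    for s in seeds:
        buffers[owner(s)].append(np.array(s, float))
    while buffers[0] or buffers[1]:
        for proc in (0, 1):
            pending, buffers[proc] = buffers[proc], []
            for p in pending:
                finished = False
                while owner(p) == proc and not finished:
                    p = p + step * v(p)            # forward Euler step
                    finished = p[0] >= x_end       # reached outer boundary?
                if finished:
                    done.append(p)
                else:
                    buffers[owner(p)].append(p)    # hand off to the new owner
    return done

owner = lambda p: 0 if p[0] < 0.5 else 1           # two subdomains split at x=0.5
v = lambda p: np.array([1.0, 0.3])                 # toy uniform vector field
done = trace_with_handoff([(0.0, 0.2), (0.0, 0.0)], v, owner)
print(len(done), all(p[0] >= 1.0 for p in done))   # -> 2 True
```

The paper's efficiency trick is that real processors integrate their own pending lines while buffers are in flight, overlapping computation with communication.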
DSU Department
2008-01-01
The French authorities have informed CERN that, once the corresponding road signs have been installed, the single-track road running parallel to the dual carriageway culminating at Gate E will be closed to all motorised vehicle traffic, with the exception of agricultural plant, motorcycles, and service, emergency and police vehicles. Relations with the Host States Service, Tel.: 72848, mailto:relations.secretariat@cern.ch, http://www.cern.ch/relations
Directory of Open Access Journals (Sweden)
Zhang Hongsheng
2016-06-01
Spray cooling has proved its superior heat-transfer performance in removing high heat flux in ground applications. However, the dissipation of the vapour-liquid mixture from the heated surface and the closed-loop circulation of the coolant are two challenges in reduced- or zero-gravity space environments. In this paper, an ejected spray cooling system for closed-loop space application is proposed; the negative pressure in the ejected condenser chamber is used to suck the two-phase mixture out of the spray chamber. A ground experimental setup was built, and experimental investigations on a smooth circular heated surface 5 mm in diameter were conducted with distilled water as the coolant, sprayed from a nozzle with a 0.51 mm orifice diameter, at inlet temperatures of 69.2 °C and 78.2 °C, heat flux ranging from 69.76 W/cm² to 311.45 W/cm², and volume flow through the spray nozzle varying from 11.22 L/h to 15.76 L/h. The work performance of the spray nozzle and the heat-transfer performance of the spray cooling system were analyzed; the results show that this ejected spray cooling system has good heat-transfer performance and provides a valid foundation for closed-loop space application in the near future.
International Nuclear Information System (INIS)
Candel, A.; Kabel, A.; Ko, K.; Lee, L.; Li, Z.; Limborg, C.; Ng, C.; Prudencio, E.; Schussman, G.; Uplenchwar, R.
2007-01-01
Over the past few years, SLAC's Advanced Computations Department (ACD) has developed the parallel finite-element (FE) particle-in-cell codes Pic3P and Pic2P for simulations of beam-cavity interactions dominated by space-charge effects. As opposed to standard space-charge-dominated beam transport codes, which are based on the electrostatic approximation, Pic3P and Pic2P include space-charge, retardation and boundary effects, as they self-consistently solve the complete set of Maxwell-Lorentz equations using higher-order FE methods on conformal meshes. The use of efficient, large-scale parallel processing allows for the modeling of photoinjectors with unprecedented accuracy, aiding the design and operation of the next generation of accelerator facilities. Applications to the Linac Coherent Light Source (LCLS) RF gun are presented.
Parallel translation in warped product spaces: application to the Reissner-Nordstroem spacetime
International Nuclear Information System (INIS)
Raposo, A P; Del Riego, L
2005-01-01
A formal treatment of the parallel translation transformations in warped product manifolds is presented and related to those parallel translation transformations in each of the factor manifolds. A straightforward application to the Schwarzschild and Reissner-Nordstroem geometries, considered here as particular examples, explains some apparently surprising properties of the holonomy in these manifolds
Qadri, Salim; Parkin, Nicola A; Benson, Philip E
2016-06-01
To investigate the opinions of laypeople regarding the aesthetic outcome of treating patients with developmental absence of both maxillary lateral incisors using either orthodontic space closure (OSC) or space opening and prosthetic replacement (PR). Cross-sectional, web-based survey. A panel of five orthodontists and five restorative dentists examined post-treatment intra-oral images of 21 patients with developmental absence of both upper lateral incisors. A consensus view was obtained on the 10 most attractive images (5 OSC; 5 PR). The 10 selected images were used in a web-based survey of staff and students at the University of Sheffield. In the first section, participants were asked to rate the attractiveness of the 10 randomly arranged single images on a 5-point Likert scale. In the second section, an image of OSC was paired with an image of PR according to their attractiveness ranking by the clinician panel, and participants were asked to indicate which of the two images they preferred. The survey received 959 completed responses with 9590 judgements. The images of OSC were perceived to be more attractive (mean rating 3·34 out of 5; SD 0·56) than the images of PR (mean rating 3·14 out of 5; SD 0·58), a mean difference of 0·21. Space closure was perceived to be more attractive than space opening by laypeople. The findings have implications for advising patients about the best aesthetic outcome when both maxillary lateral incisors are missing.
Processing closely spaced lesions during Nucleotide Excision Repair triggers mutagenesis in E. coli
Isogawa, Asako; Fujii, Shingo
2017-01-01
It is generally assumed that most point mutations are fixed when damage-containing template DNA undergoes replication, either right at the fork or behind the fork during gap filling. Here we provide genetic evidence for a pathway, dependent on Nucleotide Excision Repair (NER), that induces mutations when processing closely spaced lesions. This pathway, referred to as Nucleotide Excision Repair-induced Mutagenesis (NERiM), exhibits several characteristics distinct from mutations that occur within the course of replication: i) following UV irradiation, NER-induced mutations are fixed much more rapidly (t½ ≈ 30 min) than replication-dependent mutations (t½ ≈ 80-100 min); ii) NERiM specifically requires DNA Pol IV in addition to Pol V; iii) NERiM exhibits a two-hit dose-response curve that suggests processing of closely spaced lesions. A mathematical model allowed us to infer the structure of the toxic intermediate: it is formed when NER incises a lesion that resides in close proximity to another lesion in the complementary strand. This critical NER intermediate requires Pol IV / Pol II for repair; it is either lethal if left unrepaired or mutation-prone when repaired. Finally, NERiM is found to operate in stationary-phase cells, providing an intriguing possibility for ongoing evolution in the absence of replication. PMID:28686598
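The two-hit signature, a rate that scales with the square of lesion density, can be illustrated with a back-of-envelope model (all parameter values below are hypothetical, chosen only to show the scaling):

```python
def two_hit_rate(dose, lesion_per_dose=2e-6, window=30, genome=4.6e6):
    """Expected closely spaced lesion pairs per genome: with lesions
    Poisson-distributed at density lambda = lesion_per_dose * dose per bp
    per strand, a 'two-hit' event needs a second lesion within `window` bp
    on the complementary strand, so the rate scales as lambda squared."""
    lam = lesion_per_dose * dose          # lesions per bp per strand
    return genome * lam * (lam * window)  # pairs per genome, first order

# hallmark of a two-hit process: doubling the dose quadruples the rate,
# whereas a single-hit (replication-type) process would only double it
r1, r2 = two_hit_rate(10.0), two_hit_rate(20.0)
print(round(r2 / r1, 6))   # -> 4.0
```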
Historical parallels of biological space experiments from Soyuz, Salyut and Mir to Shenzhou flights
Nechitailo, Galina S.; Kondyurin, Alexey
2016-07-01
Human exploitation of space is a great achievement of our civilization. After the first space flights, the development of an artificial biological environment in space systems is the second big step. The first successful biological experiments on board a space station were performed on the Salyut and Mir stations in the 1970s-90s, such as: - the first long-term cultivation of plants in space (wheat, linen, lettuce, crepis); - the first flowers in space (Arabidopsis); - the first harvesting of seeds in space (Arabidopsis); - the first harvesting of roots (radish); - the first full life cycle from seeds to seeds in space (wheat), a Guinness record; - the first tissue-culture experiments (Panax ginseng L., Crocus sativus L., Stevia rebaudiana B.); - the first tree grown in space for 2 years (Limonia acidissima), a Guinness record. As a new wave, modern experiments on board Chinese Shenzhou spaceships are being performed with plants and tissue cultures. The space flight experiments are now focused on applying space biology results to Earth technologies. In particular, tomato seeds exposed to space for 6 years are used in the pharmaceutical industry in more than 10 pharmaceutical products. Tissue-culture experiments are performed on board the Shenzhou spaceship to create new bioproducts, including Space Panax ginseng, Space Spirulina, Space Stetatin, Space Tomato and other products with unique properties. Space investments come back.
The Significant Incidents and Close Calls in Human Space Flight Chart: Lessons Learned Gone Viral
Wood, Bill; Pate, Dennis; Thelen, David
2010-01-01
This presentation will explore the surprising history and events that transformed a mundane spreadsheet of historical spaceflight incidents into a popular and widely distributed visual compendium of lessons learned. The Significant Incidents and Close Calls in Human Space Flight Chart (a.k.a. the Significant Incidents Chart) is a popular and visually captivating reference product that has arisen from the work of the Johnson Space Center (JSC) Safety and Mission Assurance (S&MA) Flight Safety Office (FSO). It began as an internal tool intended to increase our team's awareness of historical and modern space flight incidents. Today, the chart is widely recognized across the agency as a reference tool. It appears in several training and education programs. It is used in familiarization training in the JSC Building 9 Mockup Facility and is seen by hundreds of center visitors each week. The chart visually summarizes injuries, fatalities, and close calls sustained during the continuing development of human space flight. The poster-sized chart displays over 100 events that have direct connections to human space flight endeavors. The chart is updated periodically, and the update process itself has become a collaborative effort: many people, spanning multiple NASA organizations, have provided suggestions for additional entries. The FSO maintains a growing list of subscribers who have requested to receive updates. The presenters will discuss the origins and motivations behind the Significant Incidents Chart, review the inclusion criteria used to select events, address how the chart is used today by S&MA, and offer a vision of how it might be used by other organizations now and in the future. Particular emphasis will be placed on features of the chart that have met with broad acceptance and have helped spread awareness of the most important lessons in human spaceflight.
Factors influencing efficiency of sliding mechanics to close extraction space: a systematic review.
Barlow, M; Kula, K
2008-05-01
To review recent literature to determine the strength of clinical evidence concerning the influence of various factors on the efficiency (rate of tooth movement) of closing extraction spaces using sliding mechanics. A comprehensive systematic review of prospective clinical trials. An electronic search (1966-2006) of several databases, limited to English and using several keywords, was performed. A hand search of five key journals, looking specifically for prospective clinical trials relevant to orthodontic space closure using sliding mechanics, was also completed. Outcome measure - rate of tooth movement. Ten prospective clinical trials comparing rates of closure under different variables and focusing only on sliding mechanics were selected for review. Of these ten trials, two compared arch-wire variables, seven compared the materials used to apply force, and one examined bracket variables. Other articles that were not prospective clinical trials on sliding mechanics but contained relevant information were examined and included as background. CONCLUSION - The results of clinical research support laboratory findings that nickel-titanium coil springs produce a more consistent force and a faster rate of closure than active ligatures as a method of force delivery to close extraction space along a continuous arch wire; however, elastomeric chain produces rates of closure similar to nickel-titanium springs. Clinical and laboratory research suggest little advantage of 200 g nickel-titanium springs over 150 g springs. More clinical research is needed in this area.
Closed-form dynamics of a hexarot parallel manipulator by means of the principle of virtual work
Pedrammehr, Siamak; Nahavandi, Saeid; Abdi, Hamid
2018-04-01
In this research, a systematic approach to solving the inverse dynamics of hexarot manipulators is presented using the methodology of virtual work. For the first time, a closed form of the mathematical formulation of the standard dynamic model is presented for this class of mechanisms. An efficient algorithm for solving this closed-form dynamic model is developed and used to simulate the dynamics of the system for different trajectories. The proposed model is validated using SimMechanics, and it is shown that the results of the mathematical model match those obtained from the SimMechanics model.
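The virtual-work step at the heart of such formulations, recovering actuator forces from the platform wrench through the Jacobian transpose, can be sketched numerically (a generic random 3-DOF Jacobian, not the hexarot model):

```python
import numpy as np

# Principle of virtual work, minimal sketch: if the platform twist dx
# relates to joint rates dq by dx = J dq, then equating virtual work
# dq . tau = dx . w for all virtual displacements forces tau = J^T w.
rng = np.random.default_rng(1)
J = rng.normal(size=(3, 3))        # hypothetical 3-DOF Jacobian
w = np.array([5.0, -2.0, 1.0])     # platform wrench (incl. inertia/gravity)

tau = J.T @ w                      # actuator generalized forces

# verify the work balance for an arbitrary virtual displacement dq
dq = rng.normal(size=3)
dx = J @ dq
print(np.isclose(dq @ tau, dx @ w))   # -> True
```

The attraction of the method, as the abstract notes, is that constraint forces do no virtual work and drop out, giving the closed-form actuator forces directly.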
Thick epitaxial CdTe films grown by close space sublimation on Ge substrates
Energy Technology Data Exchange (ETDEWEB)
Jiang, Q; Haliday, D P; Tanner, B K; Brinkman, A W [Department of Physics, University of Durham. Science Site, Durham, DH1 3LE (United Kingdom); Cantwell, B J; Mullins, J T; Basu, A [Durham Scientific Crystals Ltd., NetPark, Thomas Wright Way, Sedgefield, County Durham, TS21 3FD (United Kingdom)], E-mail: Q.Z.Jiang@durham.ac.uk
2009-01-07
This paper reports, for the first time, the successful growth of 200 μm thick CdTe films on mis-oriented Ge(100) substrates by a cost-effective, optimized close space sublimation method. It is found that, as the thickness increases to a few hundred micrometres, subgrains are formed, probably as a result of the large density of dislocations and strain within the initial interfacial layers. The films are of high quality (x-ray rocking curve width ≈100 arcsec) and high resistance (≈10⁹ Ω cm), and are thus candidates for x-ray and γ-ray detectors. (fast track communication)
Induced Recrystallization of CdTe Thin Films Deposited by Close-Spaced Sublimation
International Nuclear Information System (INIS)
Mayo, B.
1998-01-01
We have deposited CdTe thin films by close-spaced sublimation in two different temperature ranges. The films deposited at the lower temperature partially recrystallized after CdCl2 treatment at 350 °C and completely recrystallized after the same treatment at 400 °C. The films deposited at the higher temperature did not recrystallize at either of these temperatures. These results confirm that the mechanisms responsible for the changes in physical properties of CdTe films treated with CdCl2 are recrystallization and grain growth, and they provide an alternative method of depositing CSS films at lower temperatures.
Directory of Open Access Journals (Sweden)
Yong-Lin Kuo
2014-01-01
This paper implements model predictive control to fulfill the position control of a 3-DOF 3-RRR planar parallel manipulator. The research work covers experimental and numerical studies. First, an experimental hardware-in-the-loop system to control the manipulator is constructed. The manipulator is driven by three DC motors, and each motor has an encoder to measure its rotation angle. The entire system is designed as a semi-closed-loop control system: the controller receives the encoder signals as inputs and produces the signals driving the motors. Secondly, the motor parameters are obtained by system identification, and the controllers are designed based on these parameters. Finally, numerical simulations are performed by incorporating the manipulator kinematics and the motor dynamics, and the results are compared with those from the experiments. Both show good agreement at steady state. There are two main contributions in this paper: the application of model predictive control to the planar parallel manipulator, and overcoming the effects of the uncertainties of the DC motors on the performance of the position control due to the dynamic behavior of the manipulator.
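The predictive-control idea can be sketched for one motor axis with an assumed double-integrator model (illustrative parameters, not the paper's identified motor model): stack the horizon predictions, solve for the input sequence whose predicted angles track the reference, and in the receding-horizon loop apply only the first input before re-solving.

```python
import numpy as np

# One axis, x = [angle, rate], discrete double integrator (assumed model)
dt, N = 0.05, 10
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([0.5 * dt**2, dt])

# prediction matrices over the horizon: angle after k+1 steps is
# (A^(k+1) x0)[0] + sum_j (A^(k-j) B)[0] u_j  =  (Phi x0 + G u)_k
Phi = np.vstack([np.linalg.matrix_power(A, k + 1)[0] for k in range(N)])
G = np.zeros((N, N))
for k in range(N):
    for j in range(k + 1):
        G[k, j] = (np.linalg.matrix_power(A, k - j) @ B)[0]

x0 = np.array([0.0, 0.0])
ref = np.ones(N)                           # step reference: 1 rad
u = np.linalg.solve(G, ref - Phi @ x0)     # pure tracking objective

x = x0.copy()
for uk in u:                               # apply the planned sequence
    x = A @ x + B * uk
print(abs(x[0] - 1.0) < 1e-6)              # final angle hits the reference
```

A practical MPC adds an input-weighting term and constraints to the solve; the receding-horizon re-solve at each sample is what absorbs the motor-parameter uncertainty the paper highlights.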
International Nuclear Information System (INIS)
Tilliette, Z.P.
1986-06-01
The present European ARIANE space program will expand into the large ARIANE 5 launch vehicle from 1995. It is assumed that important associated missions will require the generation of 200 kWe or more in space over several years at the very beginning of the next century. For this reason, in 1983 the French C.N.E.S. (Centre National d'Etudes Spatiales) and C.E.A. (Commissariat a l'Energie Atomique) initiated preliminary studies of a space nuclear power system. The currently selected conversion system is a closed Brayton cycle. The reasons for this choice are given: the high efficiency of a dynamic system; a monophasic, inert working fluid; extensive turbomachinery experience, etc. A key aspect of the project is the adaptation to the heat rejection conditions, namely to the radiator geometry, which depends upon the dimensions of the ARIANE 5 spacecraft. In addition to the usual concepts already studied for space applications, another cycle arrangement is being investigated which could offer satisfactory compromises among many considerations, increase the efficiency of the system and make it more attractive as far as the specific mass (kg/kWe), the specific radiator area (m²/kWe) and various technological aspects are concerned. Comparative details are presented.
Thermal/vacuum measurements of the Herschel space telescope by close-range photogrammetry
Parian, J. Amiri; Cozzani, A.; Appolloni, M.; Casarosa, G.
2017-11-01
Within the framework of the development of a videogrammetric system to be used in thermal-vacuum chambers at the European Space Research and Technology Centre (ESTEC) and other sites across Europe, the design of a network using micro-cameras was specified by the European Space Agency (ESA)-ESTEC. The selected test set-up is the photogrammetric test of the Herschel Satellite Flight Model in the ESTEC Large Space Simulator. The photogrammetric system will be used to verify the Herschel telescope alignment and telescope positioning with respect to the Cryostat Vacuum Vessel (CVV) inside the Large Space Simulator during thermal-vacuum/thermal-balance test phases. We designed a close-range photogrammetric network by heuristic simulation and a videogrammetric system with an overall accuracy of 1:100,000. A semi-automated image acquisition system, able to work at low temperatures (-170 °C) in order to acquire images according to the designed network, has been constructed by ESA-ESTEC. In this paper we present the videogrammetric system and sub-systems and the results of real measurements with a representative setup, similar to that of the Herschel spacecraft, realized in the ESTEC Test Centre.
Closely spaced fibre Bragg grating sensors for detailed measurement of peristalsis in the human gut
Arkwright, John W.; Dinning, Phil G.; Underhill, Ian D.; Maunder, Simon A.; Blenman, Neil; Szczesniak, Michal M.; Cook, Ian J.
2009-10-01
We report the design and use of multi-channel fibre Bragg grating based manometry catheters with pressure sensors spaced at 1 cm intervals along their axes. The catheters have been tested in-vivo in both the human oesophagus and colon and have been shown to provide results analogous to commercially available solid-state pressure sensors. The advantage of using fibre gratings comes from the ability to extend the number of sensor elements without increasing the diameter or complexity of the catheter or data acquisition system. We present our progress towards the fabrication of a manometry catheter suitable for recording manometric data along the full length of the human colon. Results from early-phase equivalence testing and recent in-vivo trials in the human oesophagus and colon are presented. The colonic recordings were taken in basal and post-prandial periods of 2.5 hours each. The close axial spacing of the pressure sensors has identified, for the first time, the complex nature of propagating sequences in the colon in both the antegrade (towards the anus) and retrograde (away from the anus) directions. By sub-sampling the data, using only sensors 7 cm apart, the potential to misrepresent propagating sequences at wider sensor spacings is demonstrated and proposed as a potential reason why correlation between peristaltic abnormalities recorded using traditional catheters, with 7.5-10 cm spaced sensors, and actual patient symptoms remains elusive.
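Why the 1 cm spacing matters can be seen with a toy geometry (illustrative numbers only, not the study's data): a 5 cm contraction is registered by several adjacent 1 cm-spaced sensors, whose time-of-arrival differences give propagation direction and velocity, but by at most one sensor at 7 cm spacing.

```python
import numpy as np

length_cm = 30
fine = np.arange(0, length_cm + 1, 1)      # 1 cm sensor spacing
coarse = np.arange(0, length_cm + 1, 7)    # 7 cm sensor spacing

event_start, event_len = 10.0, 5.0         # contraction occupies [10, 15] cm

def sensors_hit(positions):
    """Sensors lying under the contraction as the pressure wave passes."""
    in_event = (positions >= event_start) & (positions <= event_start + event_len)
    return in_event.sum()

# at least two hit sensors are needed to infer direction and velocity
print(sensors_hit(fine), sensors_hit(coarse))   # -> 6 1
```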
Yang, Chifu; Zhao, Jinsong; Li, Liyi; Agrawal, Sunil K
2018-01-01
A robotic spine brace based on a parallel-actuated robotic system is a new device for the treatment and sensing of scoliosis; however, the strong dynamic coupling and anisotropy of parallel manipulators cause accuracy loss in rehabilitation force control, including large errors in both the direction and magnitude of the force. A novel active force control strategy named modal space force control is proposed to solve these problems. Considering the electrically driven system and the contact environment, the mathematical model of the spatial parallel manipulator is built. The strong dynamic coupling problem in the force field is described via experiments, as is the anisotropy of the workspace of parallel manipulators. The effects of dynamic coupling on control design and performance are discussed, and the influence of anisotropy on accuracy is also addressed. With the mass/inertia matrix and stiffness matrix of the parallel manipulator, a modal matrix can be calculated by eigenvalue decomposition. Making use of the orthogonality of the modal matrix with the mass matrix, the strongly coupled dynamic equations expressed in the work space or joint space of the parallel manipulator may be transformed into decoupled equations formulated in modal space. Accordingly, each force control channel is independent of the others in the modal space; we therefore propose the modal space force control concept, in which the force controller is designed in modal space. A modal space active force control is designed and implemented, with only a simple PID controller employed as an example control method to show the differences, uniqueness, and benefits of modal space force control. Simulation and experimental results show that the proposed modal space force control concept can effectively overcome the effects of the strong dynamic coupling and anisotropy problem in the physical space, and modal space force control is thus a very useful control framework, which is better than the current joint
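The decoupling step can be sketched with generic matrices (illustrative M and K, not the robot's model): a modal matrix Phi normalized so that Phi^T M Phi = I simultaneously diagonalizes K, so each modal coordinate obeys an independent equation and can get its own force controller.

```python
import numpy as np

M = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.5, 0.2],
              [0.0, 0.2, 1.0]])          # coupled mass/inertia matrix (SPD)
K = np.array([[50.0, -10.0, 0.0],
              [-10.0, 40.0, -5.0],
              [0.0, -5.0, 30.0]])        # stiffness matrix (symmetric)

# solve the generalized eigenproblem K v = w^2 M v via Cholesky whitening
L = np.linalg.cholesky(M)                # M = L L^T
Linv = np.linalg.inv(L)
w2, V = np.linalg.eigh(Linv @ K @ Linv.T)
Phi = Linv.T @ V                         # modal matrix

# mass-orthogonality decouples M x'' + K x = f into independent channels
print(np.allclose(Phi.T @ M @ Phi, np.eye(3)))       # -> True
print(np.allclose(Phi.T @ K @ Phi, np.diag(w2)))     # -> True
```

In modal coordinates q = Phi^{-1} x, the equations reduce to q_i'' + w2[i] q_i = (Phi^T f)_i, which is why a per-channel PID suffices as the example controller in the abstract.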
The structure of single-phase turbulent flows through closely spaced rod arrays
International Nuclear Information System (INIS)
Hooper, J.D.; Rehme, K.
1983-02-01
The axial and azimuthal turbulence intensities in the rod-gap region have been shown, for developed single-phase turbulent flow through parallel rod arrays, to increase strongly with decreasing rod spacing. Two array geometries are reported: one constructed from a rectangular cross-section duct containing four rods, spaced at five p/d or w/d ratios; the second, constructed from six rods set in a regular square-pitch array, represented the interior flow region of a large array. The mean axial velocity, wall shear stress variation and axial pressure distribution were measured, together with hot-wire anemometer measurements of the Reynolds stresses. No significant non-zero secondary flow components were detected, using techniques capable of resolving secondary flow velocities to 1% of the local axial velocity. For the lowest p/d ratio of 1.036, cross-correlation measurements showed the presence of an energetic periodic azimuthal turbulent velocity component, correlated over a significant part of the flow area. The negligible contribution of secondary flows to the axial momentum balance, and the large azimuthal turbulent velocity component in the rod-gap area, suggest a mechanism for the turbulent transport process in the rod gap other than secondary flows driven by Reynolds stress gradients. (orig.)
International Nuclear Information System (INIS)
Ribeiro, Guilherme B.; Braz Filho, Francisco A.; Guimarães, Lamartine N.F.
2015-01-01
Nuclear power systems intended for space electric propulsion differ strongly from usual ground-based power systems regarding the importance of overall size and mass. For propulsion power systems, size and mass are essential drivers that should be minimized during the conception process. Considering this aspect, this paper aims at the development of a design-based model of a Closed Regenerative Brayton Cycle that applies the thermal conductance of the main components in order to predict the energy conversion performance, allowing its use as a preliminary tool for heat exchanger and radiator panel sizing. The centrifugal-flow turbine and compressor characterizations were achieved using algebraic equations from literature data. A binary mixture of helium-xenon with a molecular weight of 40 g/mole is applied, and the impact of component sizing on the energy efficiency, including the radiator panel area, is evaluated. Moreover, an optimization analysis based on the final mass of the heat exchangers is performed. - Highlights: • A design-based model of a Closed Brayton Cycle is proposed for nuclear space needs. • Turbomachinery efficiency presented a strong influence on the system efficiency. • The radiator area presented the highest potential to increase the system efficiency. • There is a maximum system efficiency for each total mass of heat exchangers. • Size or efficiency optimization was performed by changing the heat exchanger proportions.
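The kind of cycle bookkeeping such a design model performs can be sketched with textbook recuperated-Brayton relations; every operating value below is an assumed illustration, not the paper's design point.

```python
# Recuperated closed Brayton cycle, ideal-gas temperature relations.
gamma = 5.0 / 3.0            # monatomic He-Xe working fluid
T1, T4 = 400.0, 1300.0       # compressor inlet / turbine inlet, K (assumed)
r = 2.0                      # cycle pressure ratio (assumed)
eta_c, eta_t, eps = 0.85, 0.90, 0.95   # component efficiencies, recuperator effectiveness

k = (gamma - 1.0) / gamma
T2 = T1 + T1 * (r**k - 1.0) / eta_c          # after real compression
T5 = T4 - eta_t * T4 * (1.0 - r**-k)         # after real expansion
T3 = T2 + eps * (T5 - T2)                    # after recuperator (cold side)

q_in = T4 - T3                               # reactor heat input, per unit cp
w_net = (T4 - T5) - (T2 - T1)                # turbine minus compressor work
eta = w_net / q_in
print(round(eta, 2))                         # -> 0.43 with these assumed values
```

The model's sensitivity findings fall out of these relations: lowering eta_c or eta_t cuts w_net directly, while a larger radiator lets T1 drop, which raises both the compressor and recuperator performance.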
Assessment of the impact of VIV (Vortex Induced Vibrations) on closely spaced production jumpers
Energy Technology Data Exchange (ETDEWEB)
Saint-Marcoux, Jean-Francois; Legras, Jean-Luc; Bastos, Renato; Rochereau, Max [Acergy, London (United Kingdom)
2009-12-19
Brazilian deep-water projects require new concepts for both Early Production and Extended Test Systems, for which floating production units with smaller hulls are cost-efficient. Furthermore, the Brazilian environment precludes spread mooring. This results in closely spaced riser configurations. Acergy has investigated the issue of interference between closely spaced risers for several years in practice (bundle riser towers, SCRs), experimentally (with the Scripps Institution of Oceanography), and with CFD (with Texas A&M University). The result, in 2008, was the inclusion of the Blevins model in commercially available software. Nevertheless, the assessment of the impact of VIV of the upstream riser remained elusive. Measurements performed in 2007 confirmed that the wake behind a cylinder under Vortex Induced Vibrations (VIV) was expanded and the hydrodynamic forces on the downstream riser strongly affected. Measurements conducted in 2008, up to a Reynolds number of 140 000, appear to validate an engineering approach to the impact of VIV that can be readily included in commercially available software for design engineering purposes. The paper describes the experimental measurements, the proposed wake model, and a comparison of the measurements with the model. Application to the design of deep-water riser and jumper systems is also included. (author)
Directory of Open Access Journals (Sweden)
C. Nagarajan
2012-09-01
This paper presents a closed-loop CLL-T (capacitor-inductor-inductor) Series Parallel Resonant Converter (SPRC) that has been simulated, and its performance is analysed. A three-element CLL-T SPRC working under load-independent operation (voltage-type and current-type load) is presented. The steady-state stability analysis of the CLL-T SPRC has been developed using a state-space technique, and the output voltage is regulated using a fuzzy controller. The simulation study indicates the superiority of fuzzy control over conventional control methods. The proposed approach is expected to provide better voltage regulation under dynamic load conditions. A prototype 300 W, 100 kHz converter was designed and built to demonstrate experimentally the dynamic and steady-state performance of the CLL-T SPRC, which is compared with the simulation studies.
Experiments and simulation of a net closing mechanism for tether-net capture of space debris
Sharf, Inna; Thomsen, Benjamin; Botta, Eleonora M.; Misra, Arun K.
2017-10-01
This research addresses the design and testing of a debris containment system for use in a tether-net approach to space debris removal. Tether-net active debris removal involves the ejection of a net from a spacecraft by applying impulses to masses on the net, subsequent expansion of the net, envelopment and capture of the debris target, and de-orbiting of the debris via a tether to the chaser spacecraft. To ensure a debris removal mission's success, it is important that the debris be successfully captured and then secured within the net. To this end, we present a concept for a net closing mechanism which, we believe, will permit consistently successful debris capture via a simple and unobtrusive design. This net closing system functions by extending the main tether connecting the chaser spacecraft and the net vertex to the perimeter and around the perimeter of the net, allowing the tether to actuate closure of the net in a manner similar to a cinch cord. A particular embodiment of the design in a laboratory test-bed is described: the test-bed itself comprises a scaled-down tether-net, a supporting frame and a mock-up debris. Experiments conducted with the facility demonstrate the practicality of the net closing system. A model of the net closure concept has been integrated into the previously developed dynamics simulator of the chaser/tether-net/debris system. Simulations under tether tensioning conditions demonstrate the effectiveness of the closure concept for debris containment, in the gravity-free environment of space, for a realistic debris target. The on-ground experimental test-bed also showcases its utility for validating the dynamics simulation of the net deployment, and a full-scale automated setup would make possible a range of validation studies of other aspects of a tether-net debris capture mission.
A clinical study of space closure with nickel-titanium closed coil springs and an elastic module.
Samuels, R H; Rudge, S J; Mair, L H
1998-07-01
A previous study has shown that a 150-gram nickel-titanium closed coil spring (Sentalloy, GAC International Inc.) closed spaces more quickly and more consistently than an elastic module (Alastik, Unitec/3M). This study used the same friction sensitive sliding mechanics of pitting the six anterior teeth against the second bicuspid and first molars, to examine the rate of space closure of 100-gram and 200-gram nickel-titanium closed coil springs. The results for the three springs and elastic module were compared. The nickel-titanium closed coil springs produced a more consistent space closure than the elastic module. The 150- and 200-gram springs produced a faster rate of space closure than the elastic module or the 100-gram spring. No significant difference was noted between the rates of closure for the 150- and the 200-gram springs.
International Nuclear Information System (INIS)
Joyner, Claude Russell II; Fowler, Bruce; Matthews, John
2003-01-01
In space, whether in a stable satellite orbit around a planetary body or traveling as a deep space exploration craft, power is just as important as the propulsion. The need for power is especially important for in-space vehicles that use Electric Propulsion. Using nuclear power with electric propulsion has the potential to provide increased payload fractions and reduced mission times to the outer planets. One of the critical engineering and design aspects of nuclear electric propulsion at required mission-optimized power levels is the mechanism that is used to convert the thermal energy of the reactor to electrical power. The use of closed Brayton cycles has been studied over the past 30 or more years and shown to be the optimum approach for power requirements that range from tens to hundreds of kilowatts of power. It also has been found to be scalable to higher power levels. The Closed Brayton Cycle (CBC) engine power conversion unit (PCU) is the most flexible for a wide range of power conversion needs and uses state-of-the-art, demonstrated engineering approaches. It also is in use with many commercial power plants today. The long life requirements and need for uninterrupted operation for nuclear electric propulsion demand high reliability from a CBC engine. A CBC engine design for use with a Nuclear Electric Propulsion (NEP) system has been defined based on Pratt and Whitney's data from designing long-life turbo-machines such as the Space Shuttle turbopumps and military gas turbines and the use of proven integrated control/health management systems (EHMS). An integrated CBC and EHMS design that is focused on using low-risk and proven technologies will overcome many of the life-related design issues. This paper will discuss the use of a CBC engine as the power conversion unit coupled to a gas-cooled nuclear reactor and the design trends relative to its use for powering electric thrusters in the 25 kWe to 100 kWe power level
The closed Brayton cycle: An energy conversion system for near-term military space missions
Davis, Keith A.
The Particle Bed Reactor (PBR)-closed Brayton cycle (CBC) provides a 5 to 30 kWe class nuclear power system for surveillance and communication missions during the 1990s and will scale to 100 kWe and beyond for other space missions. The PBR-CBC is technically feasible and within the existing state of the art. The PBR-CBC system is flexible, scaleable, and offers development economy. The ability to operate over a wide power range promotes commonality between missions with similar but not identical power spectra. The PBR-CBC system mass is very competitive with rival nuclear dynamic and static power conversion systems. The PBR-CBC provides growth potential for the future with even lower specific masses.
Evaluation of nuclides with closely spaced values of depletion constants in transmutation chains
International Nuclear Information System (INIS)
Vukadin, Z.S.
1977-01-01
A new method of calculating nuclide concentrations in a transmutation chain is developed in this thesis. The method is based on originally derived recurrence formulas for the expansion series of depletion functions and on originally obtained, nonsingular Bateman coefficients. An explicit expression for the nuclide concentrations in a transmutation chain is obtained. This expression can be used as it stands for arbitrary values of nuclide depletion constants. By computing hypothetical transmutation chains and the neptunium series, the method is compared with the Bateman analytical solution, with approximate solutions, and with the matrix exponential method. It turns out that the method presented in this thesis is suitable for calculating very long depletion chains even in the case of some closely spaced and/or equal values of nuclide depletion constants. Thus, the presented method is of great practical applicability in a number of nuclear physics problems dealing with nuclide transmutations: from studies of stellar evolution up to the design of nuclear reactors (author)
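The classic Bateman solution that the thesis benchmarks against can be sketched in a few lines. The code below is an illustrative reconstruction (not the author's recurrence formulation) and makes visible why closely spaced depletion constants are numerically troublesome for the textbook formula:

```python
import math

def bateman(n1_0, lambdas, t):
    """Concentration of the last nuclide in a linear decay chain at time t,
    via the classic Bateman solution (all depletion constants must be distinct)."""
    n = len(lambdas)
    prod = 1.0
    for lam in lambdas[:-1]:          # product of the preceding decay constants
        prod *= lam
    total = 0.0
    for i in range(n):
        denom = 1.0
        for j in range(n):
            if j != i:
                denom *= lambdas[j] - lambdas[i]
        total += math.exp(-lambdas[i] * t) / denom
    return n1_0 * prod * total

# Well-separated constants: stable evaluation.
print(bateman(1.0, [1.0, 0.5, 0.1], 2.0))
# Closely spaced constants make the (lambda_j - lambda_i) denominators tiny,
# amplifying round-off -- exactly the singularity the thesis's nonsingular
# Bateman coefficients are designed to avoid.
```

For a two-member chain the function reduces to the familiar λ₁/(λ₂−λ₁)(e^(−λ₁t) − e^(−λ₂t)) expression, which is a convenient sanity check.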
Heat exchanger optimization of a closed Brayton cycle for nuclear space propulsion
Energy Technology Data Exchange (ETDEWEB)
Ribeiro, Guilherme B.; Guimaraes, Lamartine N.F.; Braz Filho, Francisco A., E-mail: gbribeiro@ieav.cta.br, E-mail: guimarae@ieav.cta.br, E-mail: braz@ieav.cta.br [Instituto de Estudos Avancados (IEAV), Sao Jose dos Campos, SP (Brazil). Divisao de Energia Nuclear
2015-07-01
Nuclear power systems intended for space electric propulsion differ strongly from usual ground-based power systems regarding the importance of overall size and weight. For propulsion power systems, weight and efficiency are essential drivers that should be managed during the conception phase. Considering that, this paper aims at the development of a thermal model of a closed Brayton cycle that applies the thermal conductance of heat exchangers in order to predict the energy conversion performance. The centrifugal-flow turbine and compressor characterization was achieved using algebraic equations from literature data. A binary mixture of He-Xe with a molecular weight of 40 g/mol is applied, and the impact of heat exchanger optimization on thermodynamic irreversibilities is evaluated in this paper. (author)
Liddle, Donn
2017-01-01
When photogrammetrists read an article entitled "Photogrammetry in Space" they immediately think of terrestrial mapping using satellite imagery. However, in the last 19 years the role of close-range photogrammetry in support of the manned space flight program has grown exponentially. Management and engineers have repeatedly entrusted the safety of the vehicles and their crews to the results of photogrammetric analysis. In February 2010, the Node 3 module was attached to the port side Common Berthing Mechanism (CBM) of the International Space Station (ISS). Since this was not the location at which the module was originally designed to be located on the ISS, coolant lines containing liquid ammonia were installed externally from the US Lab to Node 3 during a spacewalk. During mission preparation I had developed a plan and a set of procedures to have the astronauts acquire stereo imagery of these coolant lines at the conclusion of the spacewalk to enable us to map their as-installed location relative to the rest of the space station. Unfortunately, the actual installation of the coolant lines took longer than expected and in an effort to wrap up the spacewalk on time, the mission director made a real-time call to drop the photography. My efforts to reschedule the photography on a later spacewalk never materialized, so rather than having an as-installed model for the location of the coolant lines, the master ISS CAD database continued to display an as-designed model of the coolant lines. Fast forward to the summer of 2015: the ISS program planned to berth a Japanese cargo module to the nadir Common Berthing Mechanism (CBM), immediately adjacent to the Node 3 module. A CAD-based clearance analysis revealed a negative four-inch clearance between the ammonia lines and a thruster nozzle on the port side of the cargo vehicle. Recognizing that the model of the ammonia line used in the clearance analysis was "as-designed" rather than "as-installed", I was asked to determine the
Directory of Open Access Journals (Sweden)
Caffiyar Mohamed Yousuff
2017-08-01
Recent advances in inertial microfluidics designs have enabled high-throughput, label-free separation of cells for a variety of bioanalytical applications. Various device configurations have been proposed for binary separation with a focus on enhancing the separation distance between particle streams to improve the efficiency of separate particle collection. These configurations have not demonstrated scaling beyond 3 particle streams, either because the channel width is a constraint at the collection outlets or because particle streams would be too closely spaced to be collected separately. We propose a method to design collection outlets for inertial focusing and separation devices which can collect closely spaced particle streams and easily scale to an arbitrary number of collection channels without constraining the outlet channel width, which is the usual cause of clogging or cell damage. According to our approach, collection outlets are a series of side-branching channels perpendicular to the main channel of egress. The width and length of the outlets can be chosen subject to constraints from the position of the particle streams and the fluidic resistance ratio computed from fluid dynamics simulations. We show the efficacy of this approach by demonstrating successful collection of up to 3 particle streams of 7 μm, 10 μm and 15 μm fluorescent beads which have been focused and separated by a spiral inertial device with a separation distance of only 10-15 μm. With a throughput of 1.8 mL/min, we achieved collection efficiency exceeding 90% for each particle at the respective collection outlet. The flexibility to use wide collection channels also enabled us to fabricate the microfluidic device with an epoxy mold that was created using xurography, a low-cost but imprecise fabrication technique.
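The resistance-ratio design rule behind choosing outlet lengths can be illustrated with the standard low-aspect-ratio rectangular-channel approximation. The dimensions and viscosity below are placeholder values, not figures from the paper:

```python
def hydraulic_resistance(length, width, height, mu=1e-3):
    """Approximate hydraulic resistance (Pa*s/m^3) of a rectangular microchannel,
    using the common textbook formula valid for height < width.
    mu is the dynamic viscosity (default: water, ~1e-3 Pa*s)."""
    assert height <= width, "approximation assumes a shallow, wide channel"
    return 12 * mu * length / (width * height**3 * (1 - 0.63 * height / width))

# Flow splits between parallel outlets in inverse proportion to resistance, so
# outlet LENGTHS can be tuned to hit a target split without narrowing the
# widths (the usual cause of clogging noted in the abstract).
R_a = hydraulic_resistance(5e-3, 200e-6, 50e-6)    # 5 mm branch
R_b = hydraulic_resistance(10e-3, 200e-6, 50e-6)   # 10 mm branch
print(R_b / R_a)  # -> 2.0: doubling length doubles resistance, halving that branch's flow
```

This is only the lumped-element view; the paper derives its ratios from full fluid dynamics simulations.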
Significance of structure–soil–structure interaction for closely spaced structures
International Nuclear Information System (INIS)
Roy, Christine; Bolourchi, Said; Eggers, Daniel
2015-01-01
Nuclear facilities typically consist of many closely spaced structures with different sizes and depths of embedment. Seismic response of each structure could be influenced by dynamic structure–soil–structure interaction (SSSI) behavior of adjacent closely spaced structures. This paper examines the impact of SSSI on the in-structure response spectra (ISRS) and peak accelerations of a light structure adjacent to a heavy structure and of a heavy structure adjacent to a similar heavy structure for several soil cases, foundation embedment depths, and separation distances. The impacts of a heavy surface or embedded structure on adjacent ground motions were studied. The analyses demonstrated the adjacent ground motions are sensitive to foundation embedment, soil profile, response frequency, and distance from the structure. Seismic responses of a light structure located near a heavy structure are calculated either by modeling both structures subjected to free field motions, or performing a cascade analysis by considering the light structure model subjected to modified ground motions due to the heavy structure. Cascade SSSI analyses are shown to adequately account for the effect of the heavy structure on the light structure without explicitly modeling both structures together in a single analysis. To further study the influence of SSSI behavior, this paper examines dynamic response of two adjacent heavy structures and compares this response to response of a single heavy structure neglecting adjacent structures. The SSSI responses of the two heavy structures are evaluated for varying soil conditions and structure separation distances using three-dimensional linear SSI analyses and considering anti-symmetry boundary conditions. The analyses demonstrate that the SSSI response of a light or a heavy structure can be influenced by the presence of a nearby heavy structure. Although this study considers linear analysis methodology, the conclusion of SSSI influences on dynamic
National Aeronautics and Space Administration — As mankind continues making strides in space exploration and associated technologies, the frequency, duration, and complexity of human space exploration missions...
National Aeronautics and Space Administration — In recent times long-term stay has become a common occurrence in the International Space Station (ISS). However adaptation to the space environment can sometimes...
Anticonvection device for a narrow space comprised between two parallel walls
International Nuclear Information System (INIS)
Costes, Didier.
1975-01-01
The invention relates to an anticonvection device providing strong limitations against the convection currents inside a space submitted to a vertical thermal gradient, and more especially the space enclosed between the inner wall of a vessel generally cylindrical in shape and of vertical axis, intended for a nuclear reactor, and the outer wall of a plug fitted together with said vessel. To this effect, said device is characterized in that it comprises a packing of a material of open porosity and thickness-wise elasticity, in the form of threads, fibers, knitted cloths or sheets separated by distances shorter than the thickness of stagnancy under the temperature conditions inside said space
Alves Júnior, A. A.; Sokoloff, M. D.
2017-10-01
MCBooster is a header-only, C++11-compliant library that provides routines to generate and perform calculations on large samples of phase space Monte Carlo events. To achieve superior performance, MCBooster is capable of performing most of its calculations in parallel using CUDA- and OpenMP-enabled devices. MCBooster is built on top of the Thrust library and runs on Linux systems. This contribution summarizes the main features of MCBooster. A basic description of the user interface and some examples of applications are provided, along with measurements of performance in a variety of environments.
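MCBooster itself is a C++/CUDA library; as a language-agnostic illustration of what "generating phase space Monte Carlo events" means in the simplest case, the following plain-Python sketch generates isotropic two-body decays in the parent rest frame. The masses used in the example are illustrative (a D0 → K− π+ decay), not tied to the paper:

```python
import math, random

def two_body_momentum(M, m1, m2):
    """Magnitude of either daughter's momentum in a two-body decay M -> m1 m2,
    evaluated in the parent rest frame (standard kinematics formula)."""
    term = (M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)
    return math.sqrt(term) / (2.0 * M)

def generate_events(M, m1, m2, n, seed=1):
    """Generate n isotropic two-body decays; returns daughter-1 four-momenta
    as (E, px, py, pz) tuples."""
    rng = random.Random(seed)
    p = two_body_momentum(M, m1, m2)
    events = []
    for _ in range(n):
        cos_t = rng.uniform(-1.0, 1.0)           # flat in cos(theta): isotropic
        phi = rng.uniform(0.0, 2.0 * math.pi)
        sin_t = math.sqrt(1.0 - cos_t**2)
        px = p * sin_t * math.cos(phi)
        py = p * sin_t * math.sin(phi)
        pz = p * cos_t
        e = math.sqrt(p**2 + m1**2)
        events.append((e, px, py, pz))
    return events

# Example: D0 -> K- pi+ (masses in GeV); breakup momentum is about 0.861 GeV.
print(round(two_body_momentum(1.86484, 0.493677, 0.139570), 3))
```

MCBooster generalizes this to n-body decays and evaluates user-supplied functors over the samples on the GPU; the sketch above only conveys the underlying kinematics.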
Convection heat transfer of closely-spaced spheres with surface blowing
Energy Technology Data Exchange (ETDEWEB)
Kleinstreuer, C. (North Carolina State Univ., Raleigh, NC (United States). Dept. of Mechanical and Aerospace Engineering); Chiang, H. (Thermofluid Technology Div., Industrial Technology Research Inst., Chutung (Taiwan, Province of China))
1993-05-01
A validated computer simulation model has been developed for the analysis of collinear spheres in a heated gas stream. Using the Galerkin finite element method, the steady-state Navier-Stokes and heat transfer equations have been solved describing laminar axisymmetric thermal flow past closely-spaced monodisperse spheres with fluid injection. Of interest are the coupled nonlinear interaction effects on the temperature fields and ultimately on the Nusselt number of each sphere for different free stream Reynolds numbers (20 ≤ Re ≤ 200) and intersphere distances (1.5 ≤ d_ij ≤ 6.0) in the presence of surface blowing (0 ≤ v_b ≤ 0.1). Fluid injection (i.e. blowing) and associated wake effects generate lower average heat transfer coefficients for each interacting sphere when the Reynolds number increases (Re > 100). Heat transfer is also reduced at small spacings, especially for the second and third sphere. A Nusselt number correlation for each interacting (porous) sphere has been developed based on computer experiments. (orig.)
(r, s)-(τ12, τ12*)-θ-Generalized double fuzzy closed sets in bitopological spaces
Directory of Open Access Journals (Sweden)
E. El-Sanousy
2016-10-01
In this paper, we introduce the notion of (r, s)-(i, j)-θ-generalized double fuzzy closed sets in double fuzzy bitopological spaces. A new θ-double fuzzy closure C12θ on double fuzzy bitopological spaces is defined by using double supra fuzzy topological spaces. Furthermore, generalized double fuzzy θ-continuous (resp. irresolute) and double fuzzy strongly θ-continuous mappings are introduced and some of their properties are studied.
Development of parallel algorithms for electrical power management in space applications
Berry, Frederick C.
1989-01-01
The application of parallel techniques for electrical power system analysis is discussed. The Newton-Raphson method of load flow analysis was used along with the decomposition-coordination technique to perform load flow analysis. The decomposition-coordination technique enables tasks to be performed in parallel by partitioning the electrical power system into independent local problems. Each independent local problem represents a portion of the total electrical power system on which a load flow analysis can be performed. The load flow analysis is performed on these partitioned elements by using the Newton-Raphson load flow method. These independent local problems will produce results for voltage and power which can then be passed to the coordinator portion of the solution procedure. The coordinator problem uses the results of the local problems to determine if any correction is needed on the local problems. The coordinator problem is also solved by an iterative method much like the local problems. The iterative method for the coordination problem will also be the Newton-Raphson method. Therefore, each iteration at the coordination level will result in new values for the local problems. The local problems will have to be solved again along with the coordinator problem until some convergence conditions are met.
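The Newton-Raphson iteration at the heart of each local problem can be sketched on a single-equation toy analogue: the real power transferred across one lossless line, P = V1·V2·B·sin(θ). This is only a one-variable stand-in for the report's multi-bus implementation, with made-up per-unit values:

```python
import math

def solve_angle(P_target, V1=1.0, V2=1.0, B=10.0, tol=1e-10, max_iter=50):
    """Newton-Raphson on the 2-bus power-flow equation P = V1*V2*B*sin(theta):
    iterate theta until the power mismatch vanishes."""
    theta = 0.0
    for _ in range(max_iter):
        mismatch = P_target - V1 * V2 * B * math.sin(theta)
        if abs(mismatch) < tol:
            break
        jac = V1 * V2 * B * math.cos(theta)   # scalar Jacobian dP/dtheta
        theta += mismatch / jac               # Newton update
    return theta

# Angle (rad) that delivers 5.0 p.u. across a B = 10 p.u. line: asin(0.5).
print(solve_angle(5.0))
```

In the decomposition-coordination scheme, each partition runs such an iteration on its own buses in parallel, and the coordinator runs the same kind of Newton iteration on the boundary mismatches between partitions.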
Directory of Open Access Journals (Sweden)
Xueli Chen
2010-01-01
During the past decade, the Monte Carlo method has found wide application in optical imaging to simulate the photon transport process inside tissues. However, this method has not yet been effectively extended to the simulation of free-space photon transport. In this paper, a uniform framework for noncontact optical imaging is proposed based on the Monte Carlo method, which consists of the simulation of photon transport both in tissues and in free space. Specifically, the simplification theory of lens systems is utilized to model the camera lens equipped in the optical imaging system, and the Monte Carlo method is employed to describe the energy transformation from the tissue surface to the CCD camera. Also, the focusing effect of the camera lens is considered to establish the relationship of corresponding points between the tissue surface and the CCD camera. Furthermore, a parallel version of the framework is realized, making the simulation much more convenient and effective. The feasibility of the uniform framework and the effectiveness of the parallel version are demonstrated with a cylindrical phantom based on real experimental results.
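The in-tissue half of such a framework rests on a standard Monte Carlo scheme: exponential free-path sampling, isotropic scattering, and absorption weighting. The sketch below is a generic single-photon random walk in a homogeneous infinite medium, not the paper's framework, and the optical coefficients are hypothetical:

```python
import math, random

def photon_walk(mu_a=0.1, mu_s=10.0, seed=2, max_steps=10000):
    """Track one photon packet through homogeneous tissue: step lengths sampled
    from an exponential with rate mu_t, isotropic scattering directions, and
    absorption handled by weight reduction. Returns the total absorbed weight."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    x = y = z = 0.0
    ux, uy, uz = 0.0, 0.0, 1.0
    weight, absorbed = 1.0, 0.0
    for _ in range(max_steps):
        step = -math.log(rng.random()) / mu_t      # sample free path length
        x, y, z = x + step * ux, y + step * uy, z + step * uz
        absorbed += weight * (mu_a / mu_t)         # deposit absorbed fraction
        weight *= mu_s / mu_t                      # remaining packet weight
        if weight < 1e-4:                          # terminate negligible packets
            break
        cos_t = 2.0 * rng.random() - 1.0           # isotropic scatter direction
        phi = 2.0 * math.pi * rng.random()
        sin_t = math.sqrt(1.0 - cos_t**2)
        ux, uy, uz = sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t
    return absorbed

# In an infinite medium nearly all weight is eventually absorbed.
print(photon_walk())
```

The paper's contribution is to continue such walks past the tissue boundary, through the lens model, onto the CCD plane.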
An image-space parallel convolution filtering algorithm based on shadow map
Li, Hua; Yang, Huamin; Zhao, Jianping
2017-07-01
Shadow mapping is commonly used in real-time rendering. In this paper, we present an accurate and efficient method of soft shadow generation from planar area lights. First, the method generates a depth map from the light's view and analyzes the depth-discontinuity areas as well as shadow boundaries. Then these areas are described as binary values in a texture map called the binary light-visibility map, and a parallel convolution filtering algorithm based on the GPU is applied to smooth out the boundaries with a box filter. Experiments show that our algorithm is an effective shadow-map-based method that produces perceptually accurate soft shadows in real time with more details of shadow boundaries compared with previous works.
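The smoothing step is an ordinary box filter applied per texel, which is what makes it trivially parallel on a GPU (one thread per output pixel). A minimal CPU reference of that filter, with edge clipping, might look like this; it is a sketch of the operation, not the paper's shader code:

```python
def box_filter(img, radius=1):
    """Box blur of a 2D grid of numbers: each output cell is the mean of the
    (2*radius+1)^2 neighborhood, clipped at the borders. Every output cell is
    independent, which is why the GPU version runs one thread per pixel."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += img[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out

# A hard 0/1 visibility edge becomes a graded penumbra after filtering.
visibility = [[1, 1, 0, 0]] * 4
print(box_filter(visibility)[1])
```

Applied to the binary light-visibility map, the blurred values act as fractional light visibility, producing the soft shadow boundary.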
Schain, Aaron J; Melo-Carrillo, Agustin; Strassman, Andrew M; Burstein, Rami
2017-03-15
Functioning of the glymphatic system, a network of paravascular tunnels through which cortical interstitial solutes are cleared from the brain, has recently been linked to sleep and traumatic brain injury, both of which can affect the progression of migraine. This led us to investigate the connection between migraine and the glymphatic system. Taking advantage of a novel in vivo method we developed using two-photon microscopy to visualize the paravascular space (PVS) in naive uninjected mice, we show that a single wave of cortical spreading depression (CSD), an animal model of migraine aura, induces a rapid and nearly complete closure of the PVS around surface as well as penetrating cortical arteries and veins lasting several minutes, and gradually recovering over 30 min. A temporal mismatch between the constriction or dilation of the blood vessel lumen and the closure of the PVS suggests that this closure is not likely to result from changes in vessel diameter. We also show that CSD impairs glymphatic flow, as indicated by the reduced rate at which intraparenchymally injected dye was cleared from the cortex to the PVS. This is the first observation of a PVS closure in connection with an abnormal cortical event that underlies a neurological disorder. More specifically, the findings demonstrate a link between the glymphatic system and migraine, and suggest a novel mechanism for regulation of glymphatic flow. SIGNIFICANCE STATEMENT Impairment of brain solute clearance through the recently described glymphatic system has been linked with traumatic brain injury, prolonged wakefulness, and aging. This paper shows that cortical spreading depression, the neural correlate of migraine aura, closes the paravascular space and impairs glymphatic flow. This closure holds the potential to define a novel mechanism for regulation of glymphatic flow. It also implicates the glymphatic system in the altered cortical and endothelial functioning of the migraine brain.
Yan, Haojing; Yan, Lin; Zamojski, Michel A.; Windhorst, Rogier A.; McCarthy, Patrick J.; Fan, Xiaohui; Röttgering, Huub J. A.; Koekemoer, Anton M.; Robertson, Brant E.; Davé, Romeel; Cai, Zheng
2011-02-01
We report the first results from the Hubble Infrared Pure Parallel Imaging Extragalactic Survey, which utilizes the pure parallel orbits of the Hubble Space Telescope to do deep imaging along a large number of random sightlines. To date, our analysis includes 26 widely separated fields observed by the Wide Field Camera 3, which amounts to 122.8 arcmin² in total area. We have found three bright Y098-dropouts, which are candidate galaxies at z ≳ 7.4. One of these objects shows an indication of peculiar variability and its nature is uncertain. The other two objects are among the brightest candidate galaxies at these redshifts known to date (L > 2L*). Such very luminous objects could be the progenitors of the high-mass Lyman break galaxies observed at lower redshifts (up to z ~ 5). While our sample is still limited in size, it is much less subject to the uncertainty caused by "cosmic variance" than other samples because it is derived using fields along many random sightlines. We find that the existence of the brightest candidate at z ≈ 7.4 is not well explained by the current luminosity function (LF) estimates at z ≈ 8. However, its inferred surface density could be explained by the prediction from the LFs at z ≈ 7 if it belongs to the high-redshift tail of the galaxy population at z ≈ 7. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with programs 11700 and 11702.
Thin film CdTe solar cells by close spaced sublimation: Recent results from pilot line
International Nuclear Information System (INIS)
Siepchen, B.; Drost, C.; Späth, B.; Krishnakumar, V.; Richter, H.; Harr, M.; Bossert, S.; Grimm, M.; Häfner, K.; Modes, T.; Zywitzki, O.; Morgner, H.
2013-01-01
CdTe is an attractive material for producing highly efficient, low-cost thin film solar cells. The semiconducting layers of this kind of solar cell can be deposited by the Close Spaced Sublimation (CSS) process. The advantages of this technique are high deposition rates and excellent utilization of the raw material, leading to low production costs and competitive module prices. CTF Solar GmbH is offering equipment and process know-how for the production of CdTe solar modules. For further improvement of the technology, research is done at a pilot line which covers all relevant process steps for the manufacture of CdTe solar cells. Herein, we present the latest results from the process development and our research activities on single functional layers as well as complete solar cell devices. Efficiencies above 13% have already been obtained with Cu-free back contacts. An additional focus is set on different transparent conducting oxide materials for the front contact and a Sb₂Te₃-based back contact. - Highlights: ► Laboratory established on industrial level for CdTe solar cell research ► 13.0% cell efficiency with our standard front contact and Cu-free back contact ► Research on ZnO-based transparent conducting oxides and Sb₂Te₃ back contacts ► High resolution scanning electron microscopy analysis of ion-polished cross sections
Energy Technology Data Exchange (ETDEWEB)
Park, Choon Su [Center for Safety Measurements, Division of Metrology for Quality of Life, Korea Research institute of Standards and Science, Daejeon (Korea, Republic of); Jeon, Jong Hoon [Hyundai Heavy Industry Co.,Ltd., Ulsan (Korea, Republic of); Park, Jin Ho [Korea Atomiv Energy Institute, Daejeon (Korea, Republic of)
2013-10-15
It is of great importance to localize leakages in complex pipelines to assure their safety. A sensor array that can detect where leakages occur enables us to monitor a wide area at relatively low cost. Beamforming is a fast and efficient algorithm for estimating where sources are, but it is generally used under free-field conditions. In practice, however, many pipelines are placed in a closed space for the purpose of safety and maintenance. This leads us to take reflected waves into account in the beamforming for interior leakage localization. The beam power distribution of reflected waves in a closed space is formulated, and spatial averaging is introduced to suppress the effect of reflected waves. Computer simulations and experiments confirm that the proposed method is effective for localizing leakage in a closed space for structural health monitoring.
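The free-field baseline that the paper extends is classic delay-and-sum beamforming: steer the array to each candidate source position and pick the one that maximizes coherent power. The sketch below implements only that baseline (reflections are ignored, which is exactly the gap the paper's spatial averaging addresses); the sensor geometry, sound speed, and sample rate are made-up values:

```python
import math

def delay_and_sum(signals, sensor_pos, candidates, c=340.0, fs=48000):
    """Free-field delay-and-sum beamformer: for each candidate point, undo the
    propagation delays to each sensor, sum the aligned signals, and return the
    candidate with the highest summed power."""
    best, best_power = None, -1.0
    n = len(signals[0])
    for cand in candidates:
        delays = [math.dist(cand, p) / c for p in sensor_pos]  # seconds
        ref = min(delays)
        summed = [0.0] * n
        for sig, d in zip(signals, delays):
            shift = int(round((d - ref) * fs))   # relative delay in samples
            for i in range(n - shift):
                summed[i] += sig[i + shift]      # advance each channel
        power = sum(s * s for s in summed)
        if power > best_power:
            best_power, best = power, cand
    return best

# Hypothetical setup: three sensors on a wall, an impulsive leak at (0.6, 0.8) m.
sensors = [(0.0, 0.0), (0.6, 0.0), (1.2, 0.0)]
leak = (0.6, 0.8)
fs, c, n = 48000, 340.0, 300
signals = []
for pos in sensors:
    sig = [0.0] * n
    sig[round(math.dist(leak, pos) / c * fs)] = 1.0   # arrival impulse
    signals.append(sig)
candidates = [(0.1, 0.8), (0.6, 0.8), (1.1, 0.8)]
located = delay_and_sum(signals, sensors, candidates, c=c, fs=fs)
print(located)  # -> (0.6, 0.8)
```

In a closed space, each wall image source adds its own coherent peaks; averaging beam power over spatial positions, as the paper proposes, suppresses those spurious maxima.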
Kumar, Sameer
2010-06-15
Disclosed is a mechanism on receiving processors in a parallel computing system for providing order to data packets received from a broadcast call and for distinguishing data packets received at nodes from several incoming asynchronous broadcast messages where header space is limited. In the present invention, processors at lower leaves of a tree do not need to obtain a broadcast message by directly accessing the data in a root processor's buffer. Instead, each subsequent intermediate node's rank id information is squeezed into the software header of packet headers. In turn, the entire broadcast message is not transferred from the root processor to each processor in a communicator but instead is replicated on several intermediate nodes, which then replicate the message to nodes in lower leaves. Hence, the intermediate compute nodes become "virtual root compute nodes" for the purpose of replicating the broadcast message to lower levels of a tree.
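The replicate-down-a-tree idea can be sketched abstractly: the root sends the message to its children, and each child then acts as a "virtual root" for its own subtree, so no node below the first level ever reads the real root's buffer. This is an illustrative simulation of the communication pattern, not the patented packet-header mechanism:

```python
def broadcast(ranks, message, fanout=2):
    """Simulate a tree broadcast over a list of ranks: rank[0] is the root,
    children of the node at index b are at indices fanout*b + 1 .. fanout*b + fanout.
    Returns each rank's local buffer and its depth (hop count) in the tree."""
    buffers, hops = {}, {}

    def send(index, depth):
        rank = ranks[index]
        buffers[rank] = message       # local copy: this node now owns the data
        hops[rank] = depth
        for c in range(1, fanout + 1):
            child = fanout * index + c
            if child < len(ranks):
                send(child, depth + 1)  # node acts as virtual root for its subtree

    send(0, 0)
    return buffers, hops

buffers, hops = broadcast(list(range(7)), "payload")
print(hops)  # depth of each rank in the binary broadcast tree
```

With 7 ranks and fanout 2, ranks 1-2 sit one hop from the root and ranks 3-6 two hops, so the root performs only `fanout` sends regardless of communicator size.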
Physiological Disorders in Closed Environment-Grown Crops for Space Life Support
Wheeler, Raymond; Morrow, Robert
Crop production for life support systems in space will require controlled environments where temperature, humidity, CO2, and light might differ from the natural environments where plants evolved. Physiological disorders, i.e., abnormal plant growth and development, can occur under these controlled environments. Among the most common of these disorders are Ca deficiency injuries such as leaf tipburn (e.g., lettuce), blossom-end rot in fruits (e.g., tomato and pepper), and internal tissue necrosis in fruits or tubers (e.g., cucumber and potato). Increased Ca nutrition to the plants typically has little effect on these disorders, but slowing overall growth or providing better air circulation to increase transpiration can be effective. A second common disorder is oedema or intumescence, which appears as callus-like growth or galls on leaves (e.g., sweetpotato, potato, pepper, and tomato). This disorder can be reduced by increasing the near-UV radiation (~300-400 nm) to the plants. Leaf injury and necrosis can occur under long photoperiods (e.g., tomato, potato, and pepper) and at super-elevated (i.e., greater than 4000 μmol mol-1) CO2 concentrations (e.g., soybean, potato, and radish), and these can be managed by reducing the photoperiod and CO2 concentration, respectively. Lack of blue light in the spectrum (e.g., under red LEDs or LPS lamps) can result in leggy growth and/or leaves lacking in chlorophyll (e.g., wheat, bean, and radish). Volatile organic compounds (VOCs), most commonly ethylene, can accumulate in tightly closed systems and result in a variety of negative responses. Most of these disorders can be mitigated by altering the environmental set-points or by using more resistant cultivars.
International Nuclear Information System (INIS)
Manciu, Felicia S.; Salazar, Jessica G.; Diaz, Aryzbe; Quinones, Stella A.
2015-01-01
High quality materials with excellent ordered structure are needed for developing photovoltaic and infrared devices. With this end in mind, the results of our research prove the importance of a detailed, comprehensive spectroscopic and microscopic analysis in assessing cadmium telluride (CdTe) characteristics. The goal of this work is to examine not only material crystallinity and morphology, but also induced stress in the deposited material. A uniform, selective growth of polycrystalline CdTe by close-space sublimation on patterned Si(111) and Si(211) substrates is demonstrated by scanning electron microscopy images. Besides good crystallinity of the samples, as revealed by both Raman scattering and Fourier transform infrared absorption investigations, the far-infrared transmission data also show the presence of surface optical phonon modes, which is direct evidence of confinement in such a material. The qualitative identification of the induced stress was achieved by performing confocal Raman mapping microscopy on sample surfaces and by monitoring the existence of the rock-salt and zinc-blende structural phases of CdTe, which were associated with strained and unstrained morphologies, respectively. Although the induced stress in the material is still largely due to the high lattice mismatch between CdTe and the Si substrate, the current results provide a direct visualization of its partial release through the relaxation effect at crystallite boundaries and of preferential growth directions of less strain. Our study thus offers significant value for improvement of material properties, by targeting the needed adjustments in the growth processes. - Highlights: • Assessing the characteristics of CdTe deposited on patterned Si substrates • Proving the utility of confocal Raman microscopy in monitoring the induced stress • Confirming the partial stress release through the grain boundary relaxation effect • Demonstrating the phonon confinement effect in low
A closely-spaced magnetotelluric study of the Ahuachapan-Chipilapa geothermal field, El Salvador
Energy Technology Data Exchange (ETDEWEB)
Romo, Jose Manuel; Flores, Carlos; Vega, Raymundo; Vazquez, Rogelio; Flores, Marco A. Perez; Trevino, Enrique Gomez; Esparza, Francisco J; Garcia, Victor H [Centro de Investigacion Cientifica y de Educacion Superior de Ensenada, Baja California (Mexico); Quijano, Julio E [Comision Ejecutiva Hidroelectrica del Rio Lempa (CEL), Santa Tecla (El Salvador)
1997-12-01
The distribution of electrical conductivity beneath the Ahuachapan-Chipilapa geothermal area was simulated using 2-D models based on 126 closely-spaced magnetotelluric (MT) measurements. The observed MT response was interpreted as being produced by the superposition of two orthogonal geological structural systems: an approximately E-W regional trend associated with the Central Graben structure, which affects the longer-period response, and a local and younger N-S fault system that is responsible for the short-to-intermediate period data. The MT response in the 0.02-10 s period range was used to simulate the conductivity structure within the first 2 km depth. By correlating the low-resistivity zones between twelve 2-D models, maps of the spatial distribution of conductors at three different depth levels were constructed. Three deep conductors were identified, one of them associated with the Ahuachapan reservoir, another apparently related to the Laguna Verde volcano, and a third one controlled by El Tortuguero Graben. The subsurface geometry of these conductivity anomalies suggests that the Chipilapa and La Labor hot springs are supplied by two separate sources of hot fluids, one coming from the east and the other from the south or southwest. The distribution of the shallow high-conductivity zones agrees with the hydrothermal alteration zones mapped at the surface, suggesting that at shallow levels the argillitization process contributes significantly to the low resistivity. The large number of drillholes and the dense MT site coverage allowed the definition of important correlations between high temperatures and high conductivity, as well as between deep conductivity anomalies and productive wells. On this basis two areas for future drilling are proposed. (Author)
Allweiss, Alexandra; Grant, Carl A.; Manning, Karla
2015-01-01
This critical article provides insights into how media frames influence our understandings of school reform in urban spaces by examining images of students during the 2013 school closings in Chicago. Using visual framing analysis and informed by framing theory and critiques of neoliberalism we seek to explore two questions: (1) What role do media…
Pillinger, C. T.; Pillinger, J. M.
2013-09-01
The European Space Agency (ESA)'s comet chaser mission, Rosetta, has been more than a quarter of a century in coming to fruition. Whilst it might sound a long time, humankind has been interested in comets for much longer. For over a thousand years depictions of comets have been appearing in Art 1 including many humorous cartoons 2. There are numerous cometary metaphors throughout literature. With this in mind we have recognised that there is a tremendous opportunity with comets to introduce science to different non-scientific audiences who would not necessarily believe they were interested in science. A similar approach was adopted with great success for the Beagle 2 involvement in ESA's Mars Express 3,4. By exploiting the perhaps sometimes less obvious connections to the Rosetta mission we hope to capture the attention of non-scientists and introduce them to science unawares - a case of a little sugar to help the medicine go down. It is our belief that the Rosetta mission has enormous potential for bringing science to the unconverted. We give here one example of a connection between Art and the Rosetta mission. By choosing the allegorical name Rosetta for its cometary mission, ESA have immediately invited comparison with the stone tablet which provided the key to translating the languages of ancient cultures, particularly Egyptian hieroglyphics. It is well known that a scientist, Thomas Young, foreign secretary of The Royal Society, made the breakthrough of recognising the name Ptolemy in a cartouche on the Rosetta stone, which can be seen today at the British Museum. The events concerning the 'capture' of the Rosetta stone were witnessed by scientists Sir William Hamilton (a renowned geophysicist as well as husband of Horatio Nelson's notorious mistress Lady Hamilton) and Edward Daniel Clarke, a geologist who would become first Professor of Mineralogy at Cambridge and an early meteoriticist. Young's inspiration allowed Jean-Francois Champollion to decipher the
Nuttall, Ronald L.; Nuttall, Ena Vazquez
This study focuses on the effects of family size and spacing on intellectual, social, and personality development of children. The sample consisted of 533 suburban, middle class, large family (five or more) and small two child family children. The children, 233 boys and 300 girls, were teenagers attending either junior or senior high school.…
Potential Sedimentary Evidence of Two Closely Spaced Tsunamis on the West Coast of Aceh, Indonesia
Monecke, Katrin; Meilianda, Ella; Rushdy, Ibnu; Moena, Abudzar; Yolanda, Irvan P.
2016-04-01
Recent research in the coastal regions of Aceh, Indonesia, an area that was largely affected by the 2004 Sumatra Andaman earthquake and ensuing Indian Ocean tsunami, suggests the possibility that two closely spaced tsunamis occurred at the turn of the 14th to 15th century (Meltzner et al., 2010; Sieh et al., 2015). Here, we present evidence of two buried sand layers in the coastal marshes of West Aceh, possibly representing these penultimate predecessors of the 2004 tsunami. We discovered the sand layers in an until recently inaccessible area of a previously studied beach ridge plain about 15 km North of Meulaboh, West Aceh. Here, the 2004 tsunami left a continuous, typically a few cm thick sand sheet in the coastal hinterland in low-lying swales that accumulate organic-rich deposits and separate the sandy beach ridges. In keeping with the long-term progradation of the coastline, older deposits have to be sought further inland. Using a hand auger, the buried sand layers were discovered in 3 cores in a flooded and highly vegetated swale at about 1 km from the shoreline. The pair of sand layers occurs at 70-100 cm depth and overlies 40-60 cm of dark-brown peat that rests on the basal sand of the beach ridge plain. The lower sand layer is only 1-6 cm thick, whereas the upper layer is consistently thicker, measuring 11-17 cm, with 8-14 cm of peat in between sand sheets. Both layers consist of massive, grey, medium sand and include plant fragments. They show very sharp upper and lower boundaries clearly distinguishing them from the surrounding peat and indicating an abrupt depositional event. A previously developed age model for sediments of this beach ridge plain suggests that this pair of layers could indeed correlate to a nearby buried sand sheet interpreted as tsunamigenic and deposited soon after 1290-1400 AD (Monecke et al., 2008). The superb preservation at this new site allows the clear distinction of two depositional events, which, based on a first
Wigner’s phase-space function and atomic structure: II. Ground states for closed-shell atoms
DEFF Research Database (Denmark)
Springborg, Michael; Dahl, Jens Peder
1987-01-01
We present formulas for reduced Wigner phase-space functions for atoms, with an emphasis on the first-order spinless Wigner function. This function can be written as the sum of separate contributions from single orbitals (the natural orbitals). This allows a detailed study of the function. Here we display and analyze the function for the closed-shell atoms helium, beryllium, neon, argon, and zinc in the Hartree-Fock approximation. The quantum-mechanical exact results are compared with those obtained with the approximate Thomas-Fermi description of electron densities in phase space.
DEFF Research Database (Denmark)
Suvei, Stefan-Daniel; Vroon, Jered; Somoza Sanchez, Vella Veronica
2018-01-01
How can a social robot get physically close to the people it needs to interact with? We investigated the effect of a social gaze cue by a human-sized mobile robot on the effects of personal space invasion by that robot. In our 2x2 between-subject experiment, our robot would approach our participants (n=83), with/without personal space invasion, and with/without a social gaze cue. With a questionnaire, we measured subjective perception of warmth, competence, and comfort after such an interaction. In addition, we used on-board sensors and a tracking system to measure the dynamics of social…
International Nuclear Information System (INIS)
Lyubimova, T; Mailfert, A
2013-01-01
The paper deals with the investigation of thermo-magnetic convection in a paramagnetic liquid subjected to a non-uniform magnetic field under weightlessness conditions. Indeed, in zero-g space conditions such as realized in the International Space Station (ISS), in artificial satellites, or in free-flight space vessels, the classical thermo-gravitational convection in fluids disappears. In any case, it may be useful to restore the convective thermal exchange inside fluids such as liquid oxygen. In this paper, the restoration of heat exchange by way of creating magnetic convection is numerically studied.
Directory of Open Access Journals (Sweden)
tobias c. van Veen
2015-11-01
This text is written in memoriam to dubstep emcee and poet Space Ape (Stephen Samuel Gordon, b. June 17th, 1970; d. October 2nd, 2014. By his own words, Space Ape arose from the depths of the black Atlantic, on a mission to relieve the “pressure” through bass fiction. My aim is to explicate Space Ape’s bass fiction as the intersection of material and imaginal forces, connecting it to a broader Afrofuturist constellation of mythopoetic becomings. Memory and matter converge in the affect and sounding of Space Ape the “hostile alien” (“Space Ape”, Burial, 2006, a figure shaped at the intersection of the dread body, riddim warfare, and speculative lyricism. Space Ape set out to “xorcise” that which consumed him from within by embracing the “spirit of change”. Turning to process philosophy, I demonstrate how Space Ape’s bass fiction—his virtual body—activates the abstract concepts of becoming in the “close encounter” with the hostile alien.
Highly Efficient Closed-Loop CO2 Removal System for Deep-Space ECLSS, Phase I
National Aeronautics and Space Administration — TDA Research Inc. (TDA) in collaboration with University of Puerto Rico – Mayaguez (UPRM) is proposing to develop a highly efficient CO2 removal system based on UPRM...
Baxley, Brian T.; Murdoch, Jennifer L.; Swieringa, Kurt A.; Barmore, Bryan E.; Capron, William R.; Hubbs, Clay E.; Shay, Richard F.; Abbott, Terence S.
2013-01-01
The predicted increase in the number of commercial aircraft operations creates a need for improved operational efficiency. Two areas believed to offer increases in aircraft efficiency are optimized profile descents and dependent parallel runway operations. Using Flight deck Interval Management (FIM) software and procedures during these operations, flight crews can achieve by the runway threshold an interval assigned by air traffic control (ATC) behind the preceding aircraft that maximizes runway throughput while minimizing additional fuel consumption and pilot workload. This document describes an experiment where 24 pilots flew arrivals into the Dallas/Fort Worth terminal environment using one of three simulators at NASA's Langley Research Center. Results indicate that pilots delivered their aircraft to the runway threshold within +/- 3.5 seconds of their assigned time interval, and reported low workload levels. In general, pilots found the FIM concept, procedures, speeds, and interface acceptable. Analysis of the time error and FIM speed changes as a function of arrival stream position suggests the spacing algorithm generates stable behavior in the presence of continuous (wind) or impulse (offset) error. Concerns reported included multiple speed changes within a short time period, and an airspeed increase followed shortly by an airspeed decrease.
Efficient characterization of labeling uncertainty in closely-spaced targets tracking
Moreno Leon, Carlos; Moreno Leon, Carlos; Driessen, Hans; Mandal, Pranab K.
2016-01-01
In this paper we propose a novel solution to the labeled multi-target tracking problem. The method presented is especially effective in scenarios where the targets have once moved in close proximity. When this is the case, disregarding the labeling uncertainty present in a solution (after the targets
Ivantsov, Anatoliy; Hestroffer, Daniel; Eggl, Siegfried
2018-04-01
We present a catalog of potential candidates for asteroid mass determination based on mutual close encounters of numbered asteroids with massive perturbers (D>20 km). Using a novel geometric approach tuned to optimize observability, we predict optimal epochs for mass determination observations. In contrast to previous studies that often used simplified dynamical models, we have numerically propagated the trajectories of all numbered asteroids over the time interval from 2013 to 2023 using relativistic equations of motion including planetary perturbations, J2 of the Sun, the 16 major asteroid perturbers and the perturbations due to non-sphericities of the planets. We compiled a catalog of close encounters between asteroids where the observable perturbation of the sky plane trajectory is greater than 0.5 mas so that astrometric measurements of the perturbed asteroids in the Gaia data can be leveraged. The catalog v1.0 is available at ftp://dosya.akdeniz.edu.tr/ivantsov.
Can single molecule localization microscopy be used to map closely spaced RGD nanodomains?
Directory of Open Access Journals (Sweden)
Mahdie Mollazade
Cells sense and respond to nanoscale variations in the distribution of ligands to adhesion receptors. This makes single molecule localization microscopy (SMLM) an attractive tool to map the distribution of ligands on nanopatterned surfaces. We explore the use of SMLM spatial cluster analysis to detect nanodomains of the cell adhesion-stimulating tripeptide arginine-glycine-aspartic acid (RGD). These domains were formed by the phase separation of block copolymers with controllable spacing on the scale of tens of nanometers. We first determined the topology of the block copolymer with atomic force microscopy (AFM) and then imaged the localization of individual RGD peptides with direct stochastic optical reconstruction microscopy (dSTORM). To compare the data, we analyzed the dSTORM data with DBSCAN (density-based spatial clustering application with noise). The ligand distribution and polymer topology are not necessarily identical since peptides may attach to the polymer outside the nanodomains and/or coupling and detection of peptides within the nanodomains is incomplete. We therefore performed simulations to explore the extent to which nanodomains could be mapped with dSTORM. We found that successful detection of nanodomains by dSTORM was influenced by the inter-domain spacing and the localization precision of individual fluorophores, and less by non-specific adsorption of ligands to the substratum. For example, under our imaging conditions, DBSCAN identification of nanodomains spaced further than 50 nm apart was largely independent of background localizations, while nanodomains spaced closer than 50 nm required a localization precision of ~11 nm to correctly estimate the modal nearest neighbor distance (NND) between nanodomains. We therefore conclude that SMLM is a promising technique to directly map the distribution and nanoscale organization of ligands and would benefit from an improved localization precision.
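The simulation-plus-clustering workflow described above can be sketched in a few lines. The following is a hedged stand-in, not the study's pipeline: a minimal pure-Python DBSCAN applied to two simulated nanodomains 60 nm apart, with localizations blurred by an assumed ~5 nm precision (all numeric values invented for illustration).

```python
import math
import random

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN; returns one label per point (-1 = noise)."""
    def region(i):
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = region(i)
        if len(seeds) < min_pts:
            labels[i] = -1              # noise; may be claimed later
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster     # border point; do not expand
                continue
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb = region(j)
            if len(nb) >= min_pts:      # j is a core point: expand
                queue.extend(nb)
    return labels

# Two simulated nanodomains 60 nm apart, 30 localizations each,
# blurred by a ~5 nm localization precision (invented parameters).
random.seed(0)
locs = [(cx + random.gauss(0, 5), random.gauss(0, 5))
        for cx in (0.0, 60.0) for _ in range(30)]
labels = dbscan(locs, eps=15.0, min_pts=5)
n_domains = len({l for l in labels if l >= 0})
```

With this spacing and precision the two domains are well resolved; shrinking the inter-domain distance toward the eps radius is exactly the regime where, as the abstract notes, domain identification starts to depend on localization precision.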
Polygonal approximation and scale-space analysis of closed digital curves
Ray, Kumar S
2013-01-01
This book covers the most important topics in the area of pattern recognition, object recognition, computer vision, robot vision, medical computing, computational geometry, and bioinformatics systems. Students and researchers will find a comprehensive treatment of polygonal approximation and its real life applications. The book not only explains the theoretical aspects but also presents applications with detailed design parameters. The systematic development of the concept of polygonal approximation of digital curves and its scale-space analysis are useful and attractive to scholars in many fields.
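As a small illustration of the book's core topic (this is the classic Douglas-Peucker scheme, not code from the book), a digital curve can be approximated by a polygon within a distance tolerance; a closed curve is commonly handled by cutting it at two extremal vertices and approximating the two open halves the same way.

```python
import math

def perp_dist(p, a, b):
    """Distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * (px - ax) - dx * (py - ay)) / math.hypot(dx, dy)

def douglas_peucker(curve, tol):
    """Polygonal approximation of an open digital curve (vertex list)."""
    if len(curve) < 3:
        return list(curve)
    # vertex farthest from the chord joining the endpoints
    k, d = max(((i, perp_dist(curve[i], curve[0], curve[-1]))
                for i in range(1, len(curve) - 1)), key=lambda t: t[1])
    if d <= tol:
        return [curve[0], curve[-1]]    # chord is a good-enough fit
    left = douglas_peucker(curve[:k + 1], tol)
    right = douglas_peucker(curve[k:], tol)
    return left[:-1] + right            # merge, dropping shared vertex

staircase = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2), (3, 3)]
poly = douglas_peucker(staircase, 0.5)  # L-shaped curve → 3 vertices
```

Varying `tol` is a crude form of the scale-space analysis the book develops: larger tolerances yield coarser polygons, exposing structure at successively larger scales.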
Quantum field theory in spaces with closed time-like curves
International Nuclear Information System (INIS)
Boulware, D.G.
1992-01-01
Gott spacetime has closed timelike curves, but no locally anomalous stress-energy. A complete orthonormal set of eigenfunctions of the wave operator is found in the special case of a spacetime in which the total deficit angle is 2π. A scalar quantum field theory is constructed using these eigenfunctions. The resultant interacting quantum field theory is not unitary because the field operators can create real, on-shell, particles in the acausal region. These particles propagate for finite proper time accumulating an arbitrary phase before being annihilated at the same spacetime point as that at which they were created. As a result, the effective potential within the acausal region is complex, and probability is not conserved. The stress tensor of the scalar field is evaluated in the neighborhood of the Cauchy horizon; in the case of a sufficiently small Compton wavelength of the field, the stress tensor is regular and cannot prevent the formation of the Cauchy horizon.
Italy: the first European country to forbid smoking in closed spaces. First results.
Laurendi, G; Galeone, D; Spizzichino, L; Vasselli, S; D'Argenio, P
2007-04-01
Second-hand smoke is a well-known risk factor for several diseases, including lung cancer, chronic obstructive pulmonary disease, and asthma. Evidence exists that smoke-free policies have an effect on reducing or eliminating the exposure to second-hand smoke, decreasing the prevalence of smokers, encouraging smokers to quit or preventing the initiation of smoking, and reducing cigarette consumption among smokers. Italy has been the first European country to forbid smoking in closed places, including working areas not open to the public, to protect the health of the entire population. This article describes the first results obtained from the application of this new law, the positive effects and unexpected modifications in the behaviour and social habits of the Italian people, thus revealing itself to be an important instrument for protecting public health.
Closed-String Tachyons and the Hagedorn Transition in AdS Space
Barbón, José L F
2002-01-01
We discuss some aspects of the behaviour of a string gas at the Hagedorn temperature from a Euclidean point of view. Using AdS space as an infrared regulator, the Hagedorn tachyon can be effectively quasi-localized and its dynamics controlled by a finite energetic balance. We propose that the off-shell RG flow matches to a Euclidean AdS black hole geometry in a generalization of the string/black-hole correspondence principle. The final stage of the RG flow can be interpreted semiclassically as the growth of a cool black hole in a hotter radiation bath. The end-point of the condensation is the large Euclidean AdS black hole, and the part of spacetime behind the horizon has been removed. In the flat-space limit, holography is manifest by the system creating its own transverse screen at infinity. This leads to an argument, based on the energetics of the system, explaining why the non-supersymmetric type 0A string theory decays into the supersymmetric type IIB vacuum. We also suggest a notion of `boundary entropy'...
Space-Bounded Church-Turing Thesis and Computational Tractability of Closed Systems.
Braverman, Mark; Schneider, Jonathan; Rojas, Cristóbal
2015-08-28
We report a new limitation on the ability of physical systems to perform computation, one that is based on generalizing the notion of memory, or storage space, available to the system to perform the computation. Roughly, we define memory as the maximal amount of information that the evolving system can carry from one instant to the next. We show that memory is a limiting factor in computation even in the absence of any time limitations on the evolving system, such as when considering its equilibrium regime. We call this limitation the space-bounded Church-Turing thesis (SBCT). The SBCT is supported by a simulation assertion (SA), which states that predicting the long-term behavior of bounded-memory systems is computationally tractable. In particular, one corollary of SA is an explicit bound on the computational hardness of the long-term behavior of a discrete-time finite-dimensional dynamical system that is affected by noise. We prove such a bound explicitly.
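The tractability claim has a concrete elementary analogue: a system that carries only a bounded number of bits from one instant to the next has finitely many states, so its exact long-term behavior (a transient followed by a cycle) can be computed by simple cycle detection. The 8-bit update rule below is an invented toy for illustration, not the paper's construction.

```python
def long_term_behavior(step, state0):
    """Exact long-term prediction for a finite-state system:
    iterate until a state repeats; return (transient, cycle)."""
    seen, seq = {}, []
    s = state0
    while s not in seen:
        seen[s] = len(seq)
        seq.append(s)
        s = step(s)
    return seq[:seen[s]], seq[seen[s]:]

# A toy system with 8 bits of "memory": it can carry at most 8 bits
# between instants, so only 256 states exist and the orbit must be
# eventually periodic, whatever the update rule is.
step = lambda s: (5 * s + 3) % 256
transient, cycle = long_term_behavior(step, 7)
```

The cost of this prediction is bounded by the number of states, i.e. exponential in the memory in bits; this is the sense in which bounded memory, rather than bounded time, controls the hardness of long-term prediction.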
Closed Loop Guidance Trade Study for Space Launch System Block-1B Vehicle
Von der Porten, Paul; Ahmad, Naeem; Hawkins, Matt
2018-01-01
NASA is currently building the Space Launch System (SLS) Block-1 launch vehicle for the Exploration Mission 1 (EM-1) test flight. The design of the next evolution of SLS, Block-1B, is well underway. The Block-1B vehicle is more capable overall than Block-1; however, the relatively low thrust-to-weight ratio of the Exploration Upper Stage (EUS) presents a challenge to the Powered Explicit Guidance (PEG) algorithm used by Block-1. To handle the long burn durations (on the order of 1000 seconds) of EUS missions, two algorithms were examined. An alternative algorithm, OPGUID, was introduced, while modifications were made to PEG. A trade study was conducted to select the guidance algorithm for future SLS vehicles. The chosen algorithm needs to support a wide variety of mission operations: ascent burns to LEO, apogee raise burns, trans-lunar injection burns, hyperbolic Earth departure burns, and contingency disposal burns using the Reaction Control System (RCS). Additionally, the algorithm must be able to respond to a single engine failure scenario. Each algorithm was scored based on pre-selected criteria, including insertion accuracy, algorithmic complexity and robustness, extensibility for potential future missions, and flight heritage. Monte Carlo analysis was used to select the final algorithm. This paper covers the design criteria, approach, and results of this trade study, showing impacts and considerations when adapting launch vehicle guidance algorithms to a broader breadth of in-space operations.
International Nuclear Information System (INIS)
Crosson, E.R.; Berryman, K.W.; Richman, B.A.; Smith, T.I.; Swent, R.L.
1996-01-01
We have developed a technique for measuring the longitudinal phase space distribution of the Stanford Superconducting Accelerator's (SCA) electron beam which involves applying tomographic techniques to energy spectra taken as a function of the relative phase between the beam and the accelerating field, and optionally, as a function of the strength of a variable dispersion section in the system. The temporal profile of the beam obtained by projecting the inferred distribution onto the time axis is compared with that obtained from interferometric transition radiation measurements. copyright 1996 American Institute of Physics
Minato, Shohei; Ghose, Ranajit; Tsuji, Takeshi; Ikeda, Michiharu; Onishi, Kozo
2017-10-01
Fluid-filled fractures and fissures often determine the pathways and volume of fluid movement. They are critically important in crustal seismology and in the exploration of geothermal and hydrocarbon reservoirs. We introduce a model for tube wave scattering and generation at dipping, parallel-wall fractures intersecting a fluid-filled borehole. A new equation reveals the interaction of the tube wavefield with multiple, closely spaced fractures, showing that the fracture dip significantly affects the tube waves. Numerical modeling demonstrates the possibility of imaging these fractures using a focusing analysis. The focused traces correspond well with the known fracture density, aperture, and dip angles. Testing the method on a VSP data set obtained at a fault-damaged zone in the Median Tectonic Line, Japan, presents evidence of tube waves being generated and scattered at open fractures and thin cataclasite layers. This finding leads to a new possibility for imaging, characterizing, and monitoring in situ hydraulic properties of dipping fractures using the tube wavefield.
Automation of closed environments in space for human comfort and safety
1992-01-01
This report presents the culmination of work accomplished during a three-year design project on the automation of an Environmental Control and Life Support System (ECLSS) suitable for space travel and colonization. The system would provide a comfortable living environment in space that is fully functional with limited human supervision. A completely automated ECLSS would increase astronaut productivity while contributing to their safety and comfort. The first section of this report, section 1.0, briefly explains the project, its goals, and the scheduling used by the team in meeting these goals. Section 2.0 presents an in-depth look at each of the component subsystems. Each subsection describes the mathematical modeling and computer simulation used to represent that portion of the system. The individual models have been integrated into a complete computer simulation of the CO2 removal process. In section 3.0, the two simulation control schemes are described. The classical control approach uses traditional methods to control the mechanical equipment. The expert control system uses fuzzy logic and artificial intelligence to control the system. By integrating the two control systems with the mathematical computer simulation, the effectiveness of the two schemes can be compared. The results are then used as proof of concept in considering new control schemes for the entire ECLSS. Section 4.0 covers the results and trends observed when the model was subjected to different test situations. These results provide insight into the operating procedures of the model and the different control schemes. The appendix, section 5.0, contains summaries of lectures presented during the past year, homework assignments, and the completed source code used for the computer simulation and control system.
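To picture what a fuzzy-logic controller for CO2 removal might look like, here is a generic two-rule sketch (Sugeno-style weighted average over triangular membership functions). It is purely illustrative: the rule base, set-points, and membership shapes are assumptions, not the report's actual controller.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_blower(co2_ppm):
    """Map cabin CO2 concentration (ppm) to a blower duty cycle [0, 1].

    Rule 1: if CO2 is "acceptable" then run the blower slowly (0.1).
    Rule 2: if CO2 is "elevated"   then run the blower at full (1.0).
    All breakpoints below are invented illustrative values.
    """
    acceptable = tri(co2_ppm, 0.0, 400.0, 2600.0)
    elevated = tri(co2_ppm, 400.0, 2600.0, 5000.0)
    w = acceptable + elevated
    if w == 0.0:
        return 1.0 if co2_ppm >= 2600.0 else 0.0
    # Sugeno-style weighted average of the two rule outputs
    return (acceptable * 0.1 + elevated * 1.0) / w

command = fuzzy_blower(1500.0)   # partial membership in both rules
```

Between the two set-points the command blends smoothly from the low to the high rule output, which is the qualitative behavior that distinguishes such a controller from a classical on/off scheme.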
Lin, Mingpei; Xu, Ming; Fu, Xiaoyu
2017-05-01
Currently, a tremendous amount of space debris in Earth's orbit imperils operational spacecraft. It is essential to undertake risk assessments of collisions and predict dangerous encounters in space. However, collision predictions for an enormous amount of space debris give rise to large-scale computations. In this paper, a parallel algorithm is established on the Compute Unified Device Architecture (CUDA) platform of NVIDIA Corporation for collision prediction. According to the parallel structure of NVIDIA graphics processors, a block decomposition strategy is adopted in the algorithm. Space debris is divided into batches, and the computation and data transfer operations of adjacent batches overlap. As a consequence, the latency to access shared memory during the entire computing process is significantly reduced, and a higher computing speed is reached. Theoretically, a simulation of collision prediction for space debris of any amount and for any time span can be executed. To verify this algorithm, a simulation example including 1382 pieces of debris, whose operational time scales vary from 1 min to 3 days, is conducted on Tesla C2075 of NVIDIA. The simulation results demonstrate that with the same computational accuracy as that of a CPU, the computing speed of the parallel algorithm on a GPU is 30 times that on a CPU. Based on this algorithm, collision prediction of over 150 Chinese spacecraft for a time span of 3 days can be completed in less than 3 h on a single computer, which meets the timeliness requirement of the initial screening task. Furthermore, the algorithm can be adapted for multiple tasks, including particle filtration, constellation design, and Monte-Carlo simulation of an orbital computation.
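The batch decomposition idea carries over directly to a serial sketch. The code below is illustrative only (not the paper's CUDA implementation): it screens a tiny "catalog" for close approaches over a time grid, iterating batch against batch the way the GPU processes blocks, and straight-line motion stands in for orbit propagation so the example stays self-contained.

```python
import math

def min_separation(a, b, times):
    """Minimum distance between two linearly moving objects over `times`."""
    return min(
        math.dist([p + v * t for p, v in zip(a[0], a[1])],
                  [p + v * t for p, v in zip(b[0], b[1])])
        for t in times)

def screen(catalog, times, threshold, batch=2):
    """All-pairs close-approach screening, processed batch against batch
    (mirroring the block decomposition described above)."""
    hits = []
    n = len(catalog)
    for i0 in range(0, n, batch):
        for j0 in range(i0, n, batch):
            for i in range(i0, min(i0 + batch, n)):
                for j in range(max(j0, i + 1), min(j0 + batch, n)):
                    if min_separation(catalog[i], catalog[j], times) < threshold:
                        hits.append((i, j))
    return hits

# Each object: (position, velocity) in arbitrary units.
catalog = [
    ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0)),     # meets object 1 near t = 5
    ((10.0, 0.1, 0.0), (-1.0, 0.0, 0.0)),
    ((0.0, 50.0, 0.0), (0.0, 1.0, 0.0)),    # always far from the others
]
times = [0.5 * k for k in range(21)]         # t = 0 .. 10 in 0.5 steps
hits = screen(catalog, times, threshold=1.0)
```

On a GPU the two inner loops become the threads of a block, and transfers of the next batch overlap with computation on the current one, which is where the reported speed-up comes from.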
Probing the liquid and solid phases in closely spaced two-dimensional systems
Energy Technology Data Exchange (ETDEWEB)
Zhang, Ding
2014-03-06
Gas, liquid and solid phases are the most common states of matter in our daily encountered 3-dimensional space. The school example is the H₂O molecule with its phases vapor, water and ice. Interestingly, electrons - with their point-like nature and negative charges - can also organize themselves under certain conditions to bear properties of these three common phases. At relatively high temperature, where Boltzmann statistics prevails, the ensemble of electrons without interactions can be treated as a gas of free particles. Cooling down the system, this electron gas condenses into a Fermi liquid. Finally, as a result of the repulsive Coulomb forces, electrons try to avoid each other by maximizing their distances. When the Coulomb interaction becomes sufficiently strong, a regular lattice emerges - an electron solid. The story however does not end here. Nature has much more in store for us. Electronic systems in fact exhibit a large variety of phases induced by spatial confinement, an external magnetic field, Coulomb interactions, or interactions involving degrees of freedom other than charge such as spin and valley. Here in this thesis, we restrict ourselves to the study of electrons in a 2-dimensional (2D) plane. Already in such a 2D electron system (2DES), several distinct states of matter appear: integer and fractional quantum Hall liquids, the 2D Wigner solid, stripe and bubble phases etc. In 2DES it is sufficient to sweep the perpendicular magnetic field to pass from one of these phases into another. Experimentally, many of these phases can be revealed by simply measuring the resistance. For a quantum Hall state, the longitudinal resistance vanishes, while the Hall resistance exhibits a plateau. The quantum Hall plateau is a manifestation of localization induced by the inevitable sample disorder. Coulomb interaction can also play an important role to localize charges. Even in the disorder-free case, electrons - more precisely quasi-particles in the
Probing the liquid and solid phases in closely spaced two-dimensional systems
International Nuclear Information System (INIS)
Zhang, Ding
2014-01-01
Gas, liquid and solid phases are the most common states of matter in the 3-dimensional space we encounter daily. The textbook example is the H2O molecule with its phases vapor, water and ice. Interestingly, electrons - with their point-like nature and negative charges - can also organize themselves under certain conditions to bear properties of these three common phases. At relatively high temperature, where Boltzmann statistics prevails, an ensemble of non-interacting electrons can be treated as a gas of free particles. Cooling down the system, this electron gas condenses into a Fermi liquid. Finally, as a result of the repulsive Coulomb forces, electrons try to avoid each other by maximizing their mutual distances. When the Coulomb interaction becomes sufficiently strong, a regular lattice emerges - an electron solid. The story, however, does not end here. Nature has much more in store for us. Electronic systems in fact exhibit a large variety of phases induced by spatial confinement, an external magnetic field, Coulomb interactions, or interactions involving degrees of freedom other than charge, such as spin and valley. In this thesis, we restrict ourselves to the study of electrons in a 2-dimensional (2D) plane. Already in such a 2D electron system (2DES), several distinct states of matter appear: integer and fractional quantum Hall liquids, the 2D Wigner solid, stripe and bubble phases, etc. In a 2DES it is sufficient to sweep the perpendicular magnetic field to pass from one of these phases into another. Experimentally, many of these phases can be revealed by simply measuring the resistance. For a quantum Hall state, the longitudinal resistance vanishes, while the Hall resistance exhibits a plateau. The quantum Hall plateau is a manifestation of localization induced by the inevitable sample disorder. Coulomb interaction can also play an important role in localizing charges. Even in the disorder-free case, electrons - more precisely quasi-particles in the partially
Huysmans, M. C. D. N. J. M.; Klein, M. H. J.; Kok, G. F.; Whitworth, J. M.
2007-01-01
Aim To determine the deviation of parallel-sided twist-drills during post-channel preparation and relate this to tooth type and position. Methodology Human teeth with single root canals were selected: maxillary second premolars (group i); maxillary lateral incisors (group ii); mandibular canines
The role of oxygen in CdS/CdTe solar cells deposited by close-spaced sublimation
Energy Technology Data Exchange (ETDEWEB)
Rose, D.H.; Levi, D.H.; Matson, R.J. [National Renewable Energy Lab., Golden, CO (United States)] [and others]
1996-05-01
The presence of oxygen during close-spaced sublimation (CSS) of CdTe has been previously reported to be essential for high-efficiency CdS/CdTe solar cells because it increases the acceptor density in the absorber. The authors find that the presence of oxygen during CSS increases the nucleation site density of CdTe, thus decreasing pinhole density and grain size. Photoluminescence showed that oxygen decreases material quality in the bulk of the CdTe film, but positively impacts the critical CdS/CdTe interface. Through device characterization the authors were unable to verify an increase in acceptor density with increased oxygen. These results, along with the achievement of high-efficiency cells (13% AM1.5) without the use of oxygen, led the authors to conclude that the use of oxygen during CSS deposition of CdTe can be useful but is not essential.
International Nuclear Information System (INIS)
Espinoza, Marco; Leon, Kety; Martinez, Jorge
2014-01-01
Radon accounts for more than 50% of the total annual dose from natural background radiation. Its capacity to induce lung cancer in people exposed to this radioactive gas for long periods is widely demonstrated. Radon emanates continuously from the materials that constitute soils, building materials and minerals present in our natural environment, all over the world. In our country, better regulations are needed to control the exposure of people to this gas inside buildings, dwellings and facilities where people spend their time. Our country has very simple and scarce regulations in this respect. At present, national regulations about radon are adaptations of recommendations and guides published by international organizations, but without national studies or statistics to give realistic support to those rules. This work proposes a classification for closed spaces where people live and work in this country, taking into consideration their 222 Rn concentration and the probable doses involved. (authors).
Directory of Open Access Journals (Sweden)
James G. Worner
2017-05-01
Full Text Available James Worner is an Australian-based writer and scholar currently pursuing a PhD at the University of Technology Sydney. His research seeks to expose masculinities lost in the shadow of Australia’s Anzac hegemony while exploring new opportunities for contemporary historiography. He is the recipient of the Doctoral Scholarship in Historical Consciousness at the university’s Australian Centre of Public History and will be hosted by the University of Bologna during 2017 on a doctoral research writing scholarship. ‘Parallel Lines’ is one of a collection of stories, The Shapes of Us, exploring liminal spaces of modern life: class, gender, sexuality, race, religion and education. It looks at lives, like lines, that do not meet but which travel in proximity, simultaneously attracted and repelled. James’ short stories have been published in various journals and anthologies.
Robson, Philip M; Grant, Aaron K; Madhuranthakam, Ananth J; Lattanzi, Riccardo; Sodickson, Daniel K; McKenzie, Charles A
2008-10-01
Parallel imaging reconstructions result in spatially varying noise amplification characterized by the g-factor, precluding conventional measurements of noise from the final image. A simple Monte Carlo based method is proposed for all linear image reconstruction algorithms, which allows measurement of signal-to-noise ratio and g-factor and is demonstrated for SENSE and GRAPPA reconstructions for accelerated acquisitions that have not previously been amenable to such assessment. Only a simple "prescan" measurement of noise amplitude and correlation in the phased-array receiver, and a single accelerated image acquisition are required, allowing robust assessment of signal-to-noise ratio and g-factor. The "pseudo multiple replica" method has been rigorously validated in phantoms and in vivo, showing excellent agreement with true multiple replica and analytical methods. This method is universally applicable to the parallel imaging reconstruction techniques used in clinical applications and will allow pixel-by-pixel image noise measurements for all parallel imaging strategies, allowing quantitative comparison between arbitrary k-space trajectories, image reconstruction, or noise conditioning techniques. (c) 2008 Wiley-Liss, Inc.
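The "pseudo multiple replica" idea described above can be sketched in a few lines: inject synthetic noise with the prescan coil covariance into the measured k-space, rerun the (linear) reconstruction many times, and read the pixelwise noise and SNR maps off the replica statistics. The sketch below is a generic illustration under stated assumptions, not the authors' implementation; `recon`, `pseudo_multiple_replica`, and all parameter names are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def pseudo_multiple_replica(kspace, recon, noise_cov, n_replicas=100):
    """Monte Carlo noise estimate for any *linear* reconstruction.

    kspace    : (n_coils, ...) measured (possibly undersampled) k-space data
    recon     : callable mapping k-space -> image (SENSE, GRAPPA, ...)
    noise_cov : (n_coils, n_coils) coil noise covariance from a prescan
    """
    baseline = recon(kspace)
    # Cholesky factor turns unit white noise into correlated coil noise
    L = np.linalg.cholesky(noise_cov)
    replicas = []
    for _ in range(n_replicas):
        white = rng.standard_normal(kspace.shape) + 1j * rng.standard_normal(kspace.shape)
        corr = np.tensordot(L, white, axes=(1, 0)) / np.sqrt(2)
        replicas.append(recon(kspace + corr))
    # pixelwise std over replicas is the reconstructed noise amplitude
    noise_map = np.std(np.stack(replicas), axis=0)
    snr_map = np.abs(baseline) / noise_map
    return snr_map, noise_map
```

The g-factor then follows from comparing the accelerated SNR map with a fully sampled one, g = SNR_full / (SNR_acc * sqrt(R)), exactly as in the analytical definition.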
International Nuclear Information System (INIS)
Mehdian, H.; Hajisharifi, K.; Hasanbeigi, A.
2014-01-01
In this paper, quantum fluid equations together with Maxwell's equations are used to study the stability problem of non-parallel and non-relativistic plasma shells colliding over a “background plasma” at arbitrary angle, as a first step towards a microscopic understanding of the collision shocks. The calculations have been performed for all magnitudes and directions of the wave vectors. Colliding plasma shells in a vacuum region were investigated in previous works as a counter-streaming model, whereas in the presence of a background plasma (a more realistic system) the colliding shells are mainly non-parallel. The obtained results show that the presence of a background plasma usually suppresses the maximum growth rate of the instabilities (in a particular case, the opposite behavior occurs). It is also found that the largest maximum growth rate occurs for the two-stream instability of the configuration consisting of counter-streaming currents in a very dilute plasma background. The results derived in this study can be used to analyze systems of three colliding plasma slabs, provided that the coordinate system used is stationary relative to one of the particle slabs. The present analytical investigations can be applied to describe violent quantum astrophysical phenomena such as the collision of white dwarf stars with other dense astrophysical bodies or supernova remnants. Moreover, in the limit ℏ→0, the obtained results describe the classical (sufficiently dilute) events of colliding plasma shells such as gamma-ray bursts and flares in the solar wind
Energy Technology Data Exchange (ETDEWEB)
Romano, Luís F.R.; Ribeiro, Guilherme B., E-mail: luisromano_91@hotmail.com, E-mail: gbribeiro@ieav.cta.br [Instituto Tecnológico de Aeronáutica (ITA), São José dos Campos, SP (Brazil). Pós-Graduação Ciências e Tecnologias Espaciais
2017-07-01
Generating energy in space is a tough challenge, especially because it has to be used efficiently. The optimization of the system operation has to be thought out from the design phase, and all the minutiae between conception, production and operation should be carefully evaluated in order to deliver a functioning device that will meet all the mission's goals. This work seeks to further describe the operation of a Closed Brayton Cycle coupled to a nuclear microreactor used to generate energy to power a spacecraft's systems, focusing especially on the cold side, evaluating the operating temperature of the cold heat pipes in order to aid the selection of proper models to numerically describe the thermal operation of the heat pipes and radiator. The cycle is designed to operate with a noble gas mixture of helium-xenon with a molecular weight of 40 g/mole, selected for its transport properties and low turbomachinery charge, and it exchanges heat directly with the cold heat pipes' evaporators through convection at the cold heat exchanger. Properties such as size and mass are relevant to analyze because space applications require careful development of the equipment in order to fit inside the launcher as well as to lower launch costs. Merit figures comparing both second-law energetic efficiency and net energy availability with the device's radiator size are used to represent an energetic production density for the apparatus, which is to be launched from Earth's surface. (author)
DEFF Research Database (Denmark)
Kwon, Jun Bum; Wang, Xiongfei; Bak, Claus Leth
2015-01-01
As the number of power-electronics-based systems increases, studies of overall stability and harmonic problems are on the rise. In order to analyze harmonics and stability, most research uses an analysis method based on the Linear Time Invariant (LTI) approach. However, this can be difficult in terms of complex multi-parallel connected systems, especially in the case of renewable energy, where possibilities for intermittent operation due to the weather conditions exist. Hence, it can bring many different operating points to the power converter, and the impedance characteristics can demonstrate phenomena which cannot be found in the conventional LTI approach. The theoretical modeling and analysis are verified by means of simulations and experiments.
Milenkovic, Zoran; DSouza, Christopher; Huish, David; Bendle, John; Kibler, Angela
2012-01-01
The exploration goals of Orion / MPCV Project will require a mature Rendezvous, Proximity Operations and Docking (RPOD) capability. Ground testing autonomous docking with a next-generation sensor such as the Vision Navigation Sensor (VNS) is a critical step along the path of ensuring successful execution of autonomous RPOD for Orion. This paper will discuss the testing rationale, the test configuration, the test limitations and the results obtained from tests that have been performed at the Lockheed Martin Space Operations Simulation Center (SOSC) to evaluate and mature the Orion RPOD system. We will show that these tests have greatly increased the confidence in the maturity of the Orion RPOD design, reduced some of the latent risks and in doing so validated the design philosophy of the Orion RPOD system. This paper is organized as follows: first, the objectives of the test are given. Descriptions of the SOSC facility, and the Orion RPOD system and associated components follow. The details of the test configuration of the components in question are presented prior to discussing preliminary results of the tests. The paper concludes with closing comments.
Khomchenko, Viktoriya; Mazin, Mikhail; Sopinskyy, Mykola; Lytvyn, Oksana; Dan'ko, Viktor; Piryatinskii, Yurii; Demydiuk, Pavlo
2018-05-01
A simple way of silver doping ZnO films is presented. The ZnO films were prepared by reactive rf-magnetron sputtering on silicon and sapphire substrates. Ag doping is carried out by sublimation of an Ag source located at close space at atmospheric pressure in air. The ZnO and ZnO-Ag films were then annealed in wet media. The microstructure and optical properties of the films were compared and studied by atomic force microscopy (AFM), X-ray diffraction (XRD), photoluminescence (PL) and cathodoluminescence (CL). XRD results indicated that all the ZnO films have a polycrystalline hexagonal structure and a preferred orientation with the c-axis perpendicular to the substrate. Annealing and Ag doping promote grain growth and modify the grain size distribution. The effects of substrate temperature, substrate type, Ag doping and post-growth annealing of the films were studied by PL spectroscopy. The effect of Ag doping was obvious and identical for all the films, namely, the wide visible bands of the PL spectra are suppressed by Ag doping. The intensity of the ultraviolet band increased 15 times as compared to the reference films on sapphire substrates. The ultraviolet/visible emission ratio was 20. The full width at half maximum (FWHM) of the 380 nm band was 14 nm, which is comparable with that of epitaxial ZnO. These data imply the high quality of the ZnO-Ag films. Possible mechanisms for the enhanced UV emission are discussed.
Directory of Open Access Journals (Sweden)
Chan Jun Chun
2016-02-01
Full Text Available In this paper, we propose a new frequency-dependent amplitude panning method for stereophonic image enhancement applied to a sound source recorded using two closely spaced omni-directional microphones. The ability to detect the direction of such a sound source is limited due to weak spatial information, such as the inter-channel time difference (ICTD and inter-channel level difference (ICLD. Moreover, when sound sources are recorded in a convolutive or a real room environment, the detection of sources is affected by reverberation effects. Thus, the proposed method first tries to estimate the source direction depending on the frequency using azimuth-frequency analysis. Then, a frequency-dependent amplitude panning technique is proposed to enhance the stereophonic image by modifying the stereophonic law of sines. To demonstrate the effectiveness of the proposed method, we compare its performance with that of a conventional method based on the beamforming technique in terms of directivity pattern, perceived direction, and quality degradation under three different recording conditions (anechoic, convolutive, and real reverberant. The comparison shows that the proposed method gives us better stereophonic images in a stereo loudspeaker reproduction than the conventional method without any annoying effects.
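The stereophonic law of sines that the abstract modifies maps a (possibly frequency-dependent) target azimuth to a left/right gain pair. Below is a minimal sketch of the baseline, unmodified law, assuming a standard ±30° loudspeaker setup and unit-power gain normalization; it is not the paper's proposed modification, and `law_of_sines_gains`/`pan_stft` are illustrative names.

```python
import numpy as np

def law_of_sines_gains(theta, theta0=np.deg2rad(30)):
    """Left/right gains for target azimuth theta (radians), loudspeakers at
    +/- theta0.  Law of sines: sin(theta)/sin(theta0) = (gL - gR)/(gL + gR),
    normalized so that gL**2 + gR**2 = 1 (constant perceived power)."""
    k = np.sin(np.asarray(theta, dtype=float)) / np.sin(theta0)
    norm = np.sqrt(2.0 * (1.0 + k**2))
    return (1.0 + k) / norm, (1.0 - k) / norm

def pan_stft(stft_mono, theta_per_bin, theta0=np.deg2rad(30)):
    """Frequency-dependent panning: one estimated azimuth per frequency bin.
    stft_mono: (n_bins, n_frames) complex STFT of the mono source."""
    gL, gR = law_of_sines_gains(theta_per_bin, theta0)
    return gL[:, None] * stft_mono, gR[:, None] * stft_mono
```

A source at theta = 0 yields equal gains (center image); at theta = +theta0 all energy goes to the left channel.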
Modeling and Control of Primary Parallel Isolated Boost Converter
DEFF Research Database (Denmark)
Mira Albert, Maria del Carmen; Hernandez Botella, Juan Carlos; Sen, Gökhan
2012-01-01
In this paper state space modeling and closed loop controlled operation have been presented for primary parallel isolated boost converter (PPIBC) topology as a battery charging unit. Parasitic resistances have been included to have an accurate dynamic model. The accuracy of the model has been...
Some aspects of radial flow between parallel disks
International Nuclear Information System (INIS)
Tabatabai, M.; Pollard, A.
1985-01-01
Radial flow of air between two closely spaced parallel disks is examined experimentally. A comprehensive review of the previous work performed on similar flow situations is given by Tabatabai and Pollard. The present paper is a discussion of some of the results obtained so far and offers some observations on the decay of turbulence in this flow. (author)
Directory of Open Access Journals (Sweden)
Jonathan W Stone
Full Text Available We present new modifications to the Wuchty algorithm in order to better define and explore possible conformations for an RNA sequence. The new features, including parallelization, energy-independent lonely pair constraints, context-dependent chemical probing constraints, helix filters, and optional multibranch loops, provide useful tools for exploring the landscape of RNA folding. Chemical probing alone may not necessarily define a single unique structure. The helix filters and optional multibranch loops are global constraints on RNA structure that are an especially useful tool for generating models of encapsidated viral RNA for which cryoelectron microscopy or crystallography data may be available. The computations generate a combinatorially complete set of structures near a free energy minimum and thus provide data on the density and diversity of structures near the bottom of a folding funnel for an RNA sequence. The conformational landscapes for some RNA sequences may resemble a low, wide basin rather than a steep funnel that converges to a single structure.
Hennelly, B. M.; Javidi, B.; Sheridan, J. T.
2005-09-01
A number of methods have been recently proposed in the literature for the encryption of 2-D information using linear optical systems. In particular, the double random phase encoding system has received widespread attention. This system uses two Random Phase Keys (RPKs) positioned in the input spatial domain and the spatial frequency domain; if these random phases are described by statistically independent white noises, then the encrypted image can be shown to be a white noise. Decryption only requires knowledge of the RPK in the frequency domain. The RPKs may be implemented using Spatial Light Modulators (SLMs). In this paper we propose and investigate the use of SLMs for secure optical multiplexing. We show that in this case it is possible to encrypt multiple images in parallel and multiplex them for transmission or storage. The signal energy is effectively spread in the spatial frequency domain. As expected, the number of images that can be multiplexed together and recovered without loss is proportional to the ratio of the input image resolution to the SLM resolution. Many more images may be multiplexed with some loss in recovery. Furthermore, each individual encryption is more robust than traditional double random phase encoding, since decryption requires knowledge of both RPKs and a lowpass filter in order to despread the spectrum and decrypt the image. Numerical simulations are presented and discussed.
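The classical double random phase encoding scheme that this work builds on is easy to sketch numerically: multiply the image by one random phase key in the input plane, by a second in the Fourier plane, and invert. The following is a generic illustration of that baseline scheme, not the multiplexing system proposed in the paper; function names are placeholders.

```python
import numpy as np

def drpe_encrypt(img, key1, key2):
    """Double random phase encoding of a nonnegative real image.
    key1, key2: arrays in [0, 1) -- the input-plane and frequency-plane
    Random Phase Keys (RPKs), same shape as img."""
    field = img * np.exp(2j * np.pi * key1)                    # input-plane RPK
    spectrum = np.fft.fft2(field) * np.exp(2j * np.pi * key2)  # frequency-plane RPK
    return np.fft.ifft2(spectrum)                              # white-noise-like ciphertext

def drpe_decrypt(cipher, key2):
    """Decryption needs only the frequency-domain key: undo that RPK,
    invert, and take the modulus (which removes the input-plane phase)."""
    spectrum = np.fft.fft2(cipher) * np.exp(-2j * np.pi * key2)
    return np.abs(np.fft.ifft2(spectrum))
```

With statistically independent uniform keys the ciphertext is noise-like, yet the round trip is exact for nonnegative real images.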
Jerath, Ravinder; Cearley, Shannon M; Barnes, Vernon A; Jensen, Mike
2018-01-01
A fundamental function of the visual system is detecting motion, yet visual perception is poorly understood. Current research has determined that the retina and ganglion cells elicit responses for motion detection; however, the underlying mechanism is incompletely understood. Previously we proposed that retinogeniculo-cortical oscillations and photoreceptors work in parallel to process vision. Here we propose that motion could also be processed within the retina, and not in the brain as current theory suggests. In this paper, we discuss: 1) internal neural space formation; 2) primary, secondary, and tertiary roles of vision; 3) gamma as the secondary role; and 4) synchronization and coherence. Movement within the external field is instantly detected by primary processing within the space formed by the retina, providing a unified view of the world from an internal point of view. Our new theory begins to answer questions about: 1) perception of space, erect images, and motion; 2) the purpose of lateral inhibition; 3) the speed of visual perception; and 4) how peripheral color vision occurs without a large population of cones located peripherally in the retina. We explain that strong oscillatory activity influences brain activity and is necessary for: 1) visual processing, and 2) formation of the internal visuospatial area necessary for visual consciousness, which could allow rods to receive precise visual and visuospatial information, while retinal waves could link the lateral geniculate body with the cortex to form a neural space based on membrane-potential oscillations and photoreceptors. We propose that vision is tripartite, with three components that allow a person to make sense of the world, terming them the "primary, secondary, and tertiary roles" of vision. Finally, we propose that gamma waves of higher strength and volume allow communication among the retina, thalamus, and various areas of the cortex, and synchronization brings cortical
National Aeronautics and Space Administration — It is impractical for astronauts to travel with all necessary supplies in future long-term space exploration missions. Therefore, it is imperative that technologies...
Automatic parallelization of while-Loops using speculative execution
International Nuclear Information System (INIS)
Collard, J.F.
1995-01-01
Automatic parallelization of imperative sequential programs has focused on nests of for-loops. The most recent techniques consist in finding an affine mapping with respect to the loop indices to simultaneously capture the temporal and spatial properties of the parallelized program. Such a mapping is usually called a "space-time transformation." This work describes an extension of these techniques to while-loops using speculative execution. We show that space-time transformations are a good framework for summing up previous restructuring techniques for while-loops, such as pipelining. Moreover, we show that these transformations can be derived and applied automatically
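The core idea, speculatively executing iterations beyond the (statically unknown) termination point and discarding the excess, can be illustrated for the simple case of a side-effect-free search loop `i = 0; while not p(i): i += 1`. This toy sketch is not the paper's affine space-time framework; `speculative_while`, `chunk`, and `workers` are illustrative names, and freedom from side effects is the assumed safety condition for speculation.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import count

def speculative_while(predicate, chunk=8, workers=4):
    """Return the first i >= 0 with predicate(i) True, mimicking the
    sequential loop `i = 0; while not predicate(i): i += 1`.

    Whole chunks of future iterations are evaluated in parallel; the
    evaluations past the true termination point are speculative work
    that is simply discarded.  Correct only when predicate has no
    side effects."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for base in count(0, chunk):
            idxs = range(base, base + chunk)
            # speculative evaluation of a whole chunk of iterations
            for i, hit in zip(idxs, pool.map(predicate, idxs)):
                if hit:
                    return i  # discard speculative results beyond i
```

Each chunk performs up to `chunk - 1` wasted evaluations, the price paid for exposing parallelism in a loop whose trip count is unknown in advance.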
Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A; Seiberlich, Nicole
2012-07-01
Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the undersampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. Copyright © 2012 Wiley Periodicals, Inc.
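The SENSE-type reconstruction described above can be illustrated for the simplest case, a twofold (R = 2) Cartesian acceleration, where each aliased pixel is a sensitivity-weighted superposition of two true pixels half a field of view apart, and unfolding reduces to a tiny least-squares solve per pixel. A minimal sketch under those assumptions (no regularization or noise weighting; `sense_unfold_r2` is an illustrative name):

```python
import numpy as np

def sense_unfold_r2(aliased, sens):
    """Unfold an R=2 Cartesian SENSE acquisition.

    aliased : (n_coils, ny//2, nx) aliased coil images
    sens    : (n_coils, ny, nx) coil sensitivity maps

    Each aliased pixel superimposes two true pixels ny//2 apart; the
    coil sensitivities at those two locations form a small encoding
    matrix that is inverted pixel by pixel."""
    n_coils, ny_half, nx = aliased.shape
    out = np.zeros((2 * ny_half, nx), complex)
    for y in range(ny_half):
        for x in range(nx):
            # encoding matrix: sensitivities of the two overlapped locations
            E = np.stack([sens[:, y, x], sens[:, y + ny_half, x]], axis=1)
            rho, *_ = np.linalg.lstsq(E, aliased[:, y, x], rcond=None)
            out[y, x], out[y + ny_half, x] = rho
    return out
```

The conditioning of each little matrix `E` is exactly what the g-factor quantifies: nearly parallel sensitivity columns mean strong noise amplification at that pixel pair.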
Mangan, M.; Miller, T.; Waythomas, C.; Trusdell, F.; Calvert, A.; Layer, P.
2009-01-01
Emmons Lake Volcanic Center (ELVC) on the lower Alaskan Peninsula is one of the largest and most diverse volcanic centers in the Aleutian Arc. Since the Middle Pleistocene, eruption of ~ 350 km3 of basalt through rhyolite has produced a 30 km, arc front chain of nested calderas and overlapping stratovolcanoes. ELVC has experienced as many as five major caldera-forming eruptions, the most recent, at ~ 27 ka, produced ~ 50 km3 of rhyolitic ignimbrite and ash fall. These violent silicic events were interspersed with less energetic, but prodigious, outpourings of basalt through dacite. Holocene eruptions are mostly basaltic andesite to andesite and historically recorded activity includes over 40 eruptions within the last 200 yr, all from Pavlof volcano, the most active site in the Aleutian Arc. Geochemical and geophysical observations suggest that although all ELVC eruptions derive from a common clinopyroxene + spinel + plagioclase fractionating high-aluminum basalt parent in the lower crust, magma follows one of two closely spaced, but distinct paths to the surface. Under the eastern end of the chain, magma moves rapidly and cleanly through a relatively young (~ 28 ka), hydraulically connected dike plexus. Steady supply, short magma residence times, and limited interaction with crustal rocks preserve the geochemistry of deep crustal processes. Below the western part of the chain, magma moves haltingly through a long-lived (~ 500 ka) and complex intrusive column in which many generations of basaltic to andesitic melts have mingled and fractionated. Buoyant, silicic melts periodically separate from the lower parts of the column to feed voluminous eruptions of dacite and rhyolite. Mafic lavas record a complicated passage through cumulate zones and hydrous silicic residues as manifested by disequilibrium phenocryst textures, incompatible element enrichments, and decoupling of REEs and HFSEs ratios. Such features are absent in mafic lavas from the younger part of the chain
Crockett, Thomas W.
1995-01-01
This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.
International Nuclear Information System (INIS)
Vu Ngoc Phat; Jong Yeoul Park
1995-10-01
The paper studies a class of set-valued operators with emphasis on properties of their adjoints and the existence of eigenvalues and eigenvectors of infinite-dimensional convex closed set-valued operators. Sufficient conditions for the existence of eigenvalues and eigenvectors of convex closed set-valued operators are derived. These conditions specify possible features of control problems. The results are applied to some constrained control problems of infinite-dimensional systems described by discrete-time inclusions whose right-hand sides are convex closed set-valued functions. (author). 8 refs
1982-01-01
Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn
Algorithms for parallel computers
International Nuclear Information System (INIS)
Churchhouse, R.F.
1985-01-01
Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed but it also raises some fundamental questions, including: (i) which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode. (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria. (iii) How can we design new algorithms specifically for parallel systems. (iv) For multi-processor systems how can we handle the software aspects of the interprocessor communications. Aspects of these questions illustrated by examples are considered in these lectures. (orig.)
Directory of Open Access Journals (Sweden)
Weiran Wang
2013-06-01
Full Text Available In order to improve the performance of the bearingless brushless DC motor, a closed-loop suspension force controller combining discrete space voltage vector modulation with direct torque control is presented in this paper. Firstly, we increase the number of control vectors to reduce the torque ripple. Then, the suspension equation is constructed, inspired by the direct torque control algorithm. As a result, the closed-loop suspension force controller is built. Simulated and experimental results evaluate the performance of the proposed method. A further advantage is that the proposed algorithm can achieve fast torque response, reduce the torque ripple, and follow the ideal stator flux track. Furthermore, a motor implementing the closed-loop suspension force controller can not only obtain rapid dynamic response and accurate displacement control, but also retains the characteristics of a bearingless brushless DC motor (such as simple structure, high energy efficiency, small volume and low failure rate).
Shi, Chengdi; Cai, Leyi; Hu, Wei; Sun, Junying
2017-09-19
Objective: To study the method of X-ray diagnosis of unstable pelvic fractures displaced in three-dimensional (3D) space and its clinical application in closed reduction. Methods: Five models of hemipelvic displacement were made in an adult pelvic specimen. Anteroposterior radiographs of the pelvis were analyzed in PACS. The method of X-ray diagnosis was applied in closed reductions. From February 2012 to June 2016, 23 patients (15 men, 8 women; mean age, 43.4 years) with unstable pelvic fractures were included. All patients were treated by closed reduction and percutaneous cannulated screw fixation of the pelvic ring. According to Tile's classification, the patients were classified as type B1 in 7 cases, B2 in 3, B3 in 3, C1 in 5, C2 in 3, and C3 in 2. The operation time and intraoperative blood loss were recorded. Postoperative images were evaluated by Matta radiographic standards. Results: Five models of displacement were made successfully. The X-ray features of the models were analyzed. For clinical patients, the average operation time was 44.8 min (range, 20-90 min) and the average intraoperative blood loss was 35.7 mL (range, 20-100 mL). According to the Matta standards, 7 cases were excellent, 12 were good, and 4 were fair. Conclusion: The displacements in 3D space of unstable pelvic fractures can be diagnosed rapidly by X-ray analysis to guide closed reduction, with a satisfactory clinical outcome.
Casanova, Henri; Robert, Yves
2008-01-01
""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi
DEFF Research Database (Denmark)
Knecht, Stefan; Jensen, Hans Jørgen Aagaard; Fleig, Timo
2010-01-01
We present a parallel implementation of a large-scale relativistic double-group configuration interaction (CI) program. It is applicable with a large variety of two- and four-component Hamiltonians. The parallel algorithm is based on a distributed data model in combination with a static load balanci...
Kordy, M.; Wannamaker, P.; Maris, V.; Cherkaev, E.; Hill, G.
2016-01-01
Following the creation described in Part I of a deformable edge finite-element simulator for 3-D magnetotelluric (MT) responses using direct solvers, in Part II we develop an algorithm named HexMT for 3-D regularized inversion of MT data including topography. Direct solvers parallelized on large-RAM, symmetric multiprocessor (SMP) workstations are used also for the Gauss-Newton model update. By exploiting the data-space approach, the computational cost of the model update becomes much less in both time and computer memory than the cost of the forward simulation. In order to regularize using the second norm of the gradient, we factor the matrix related to the regularization term and apply its inverse to the Jacobian, which is done using the MKL PARDISO library. For dense matrix multiplication and factorization related to the model update, we use the PLASMA library which shows very good scalability across processor cores. A synthetic test inversion using a simple hill model shows that including topography can be important; in this case depression of the electric field by the hill can cause false conductors at depth or mask the presence of resistive structure. With a simple model of two buried bricks, a uniform spatial weighting for the norm of model smoothing recovered more accurate locations for the tomographic images compared to weightings which were a function of parameter Jacobians. We implement joint inversion for static distortion matrices tested using the Dublin secret model 2, for which we are able to reduce nRMS to ˜1.1 while avoiding oscillatory convergence. Finally we test the code on field data by inverting full impedance and tipper MT responses collected around Mount St Helens in the Cascade volcanic chain. Among several prominent structures, the north-south trending, eruption-controlling shear zone is clearly imaged in the inversion.
Sigel, Richard A.
1999-01-01
Since 1992, researchers have been studying the population ecology and conservation biology of the amphibians and reptiles of the Kennedy Space Center (KSC). This research is an outgrowth of my Master's work in the late 1970's under Lew Ehrhart at UCF. The primary emphases of our studies are (1) examination of long-term changes in the abundance of amphibian and reptile populations, (2) occurrence and effects of Upper Respiratory Tract Disease (URTD) in gopher tortoises (Gopherus polyphemus), and (3) ecological studies of selected species.
Kordy, M. A.; Wannamaker, P. E.; Maris, V.; Cherkaev, E.; Hill, G. J.
2014-12-01
We have developed an algorithm for 3D simulation and inversion of magnetotelluric (MT) responses using deformable hexahedral finite elements that permits incorporation of topography. Direct solvers parallelized on symmetric multiprocessor (SMP), single-chassis workstations with large RAM are used for the forward solution, parameter Jacobians, and model update. The forward simulator, Jacobian calculations, and synthetic and real data inversions are presented. We use first-order edge elements to represent the secondary electric field (E), yielding accuracy O(h) for E and its curl (magnetic field). For very low frequency or small material admittivity, the E-field requires divergence correction. Using Hodge decomposition, the correction may be applied after the forward solution is calculated, allowing accurate E-field solutions in dielectric air. The system matrix factorization is computed using the MUMPS library, which shows moderately good scalability through 12 processor cores but limited gains beyond that. The factored matrix is used to calculate the forward response as well as the Jacobians of the field and MT responses using the reciprocity theorem. Comparison with other codes demonstrates the accuracy of our forward calculations. We consider a popular conductive/resistive double-brick structure and several topographic models. In particular, the ability of finite elements to represent smooth topographic slopes permits accurate simulation of refraction of electromagnetic waves normal to the slopes at high frequencies. Run time tests indicate that for meshes as large as 150x150x60 elements, the MT forward response and Jacobians can be calculated in ~2.5 hours per frequency. For inversion, we implemented a data-space Gauss-Newton method, which offers a reduction in memory requirement and a significant speedup of the parameter step versus the model-space approach. For dense matrix operations we use the tiling approach of the PLASMA library, which shows very good scalability. In synthetic
International Nuclear Information System (INIS)
Vaccaro, P.O.; Meyer, G.O.; Saura, J.
1991-01-01
CdS/CdTe solar cells were made by depositing CdTe films by an isothermal close-spaced vapor transport method on sintered CdS/glass substrates. The influence of CdCl2 amounts ranging from 0 wt% to 8 wt% in the CdTe source on solar cell performance was studied. Increasing the CdCl2 content enhances the CdTe grain size but degrades the spectral response and increases the reverse saturation current. An optimal CdCl2 concentration of 1 wt% was found for a growth temperature of 620 deg C. (Author)
International Nuclear Information System (INIS)
Hooper, J.D.
1984-01-01
Experimental studies of developed axial single-phase flow through closely spaced rod arrays have shown, with reducing p/d ratio, the development of high axial and azimuthal turbulence intensities in the rod gap region. Associated with this is the existence of very high levels of the azimuthal Reynolds shear stress component either side of the rod gap centre. Spatial correlation analysis of the three turbulent velocity components has shown a large scale coherent and almost periodic structure in the rod gap region. The structure is markedly different to the currently accepted secondary flow model. 14 references
International Nuclear Information System (INIS)
Hu, F X; Qian, X L; Wang, G J; Wang, J; Sun, J R; Zhang, X X; Cheng, Z H; Shen, B G
2003-01-01
A large change in the magnetic entropy, |ΔS|, was observed in the Fe-based NaZn13-type compound LaFe11.375Al1.625, which was nearly temperature independent over a wide temperature range (a span of about 70 K, from ∼140 to 210 K). This behaviour of the magnetic entropy change is associated with two closely spaced magnetic transitions. X-ray diffraction investigation at different temperatures indicates that the crystal structure remains cubic, of NaZn13 type, when the magnetic state changes with temperature, but the cell parameter changes dramatically at the first-order transition point.
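For context, the magnetic entropy change reported in such magnetocaloric studies is conventionally extracted from magnetization isotherms via the Maxwell relation (standard background, not stated in the abstract itself):

```latex
\Delta S_M(T, H) = \mu_0 \int_0^{H} \left( \frac{\partial M}{\partial T} \right)_{H'} \mathrm{d}H'
```

A nearly temperature-independent $|\Delta S|$ over a 70 K span, as reported here, means $(\partial M/\partial T)_H$ integrates to roughly the same value across that range, which is desirable for Ericsson-cycle magnetic refrigeration.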
Concurrent computation of attribute filters on shared memory parallel machines
Wilkinson, Michael H.F.; Gao, Hui; Hesselink, Wim H.; Jonker, Jan-Eppo; Meijster, Arnold
2008-01-01
Morphological attribute filters have not previously been parallelized mainly because they are both global and nonseparable. We propose a parallel algorithm that achieves efficient parallelism for a large class of attribute filters, including attribute openings, closings, thinnings, and thickenings,
Yura, H T; Thrane, L; Andersen, P E
2000-12-01
Within the paraxial approximation, a closed-form solution for the Wigner phase-space distribution function is derived for diffuse reflection and small-angle scattering in a random medium. This solution is based on the extended Huygens-Fresnel principle for the optical field, which is widely used in studies of wave propagation through random media. The results are general in that they apply to both an arbitrary small-angle volume scattering function, and arbitrary (real) ABCD optical systems. Furthermore, they are valid in both the single- and multiple-scattering regimes. Some general features of the Wigner phase-space distribution function are discussed, and analytic results are obtained for various types of scattering functions in the asymptotic limit s > 1, where s is the optical depth. In particular, explicit results are presented for optical coherence tomography (OCT) systems. On this basis, a novel way of creating OCT images based on measurements of the momentum width of the Wigner phase-space distribution is suggested, and the advantage over conventional OCT images is discussed. Because all previous published studies regarding the Wigner function are carried out in the transmission geometry, it is important to note that the extended Huygens-Fresnel principle and the ABCD matrix formalism may be used successfully to describe this geometry (within the paraxial approximation). Therefore for completeness we present in an appendix the general closed-form solution for the Wigner phase-space distribution function in ABCD paraxial optical systems for direct propagation through random media, and in a second appendix absorption effects are included.
Energy Technology Data Exchange (ETDEWEB)
Kim, Eugene; Kim, Yeo Ju; Kim, Mi Young; Cho, Soon Gu [Inha University Hospital, Department of Radiology, Choong-gu, Incheon (Korea, Republic of); Cha, Jang Gyu [Soonchunhyang University Hospital, Department of Radiology, Bucheon (Korea, Republic of); Lee, Dae Hyung [Inha University Hospital, Clinical Trail Center, Incheon (Korea, Republic of); Kim, Ryuh Sup [Inha University Hospital, Department of Orthopedic Surgery, Incheon (Korea, Republic of)
2015-10-15
To evaluate kinematic changes in menisci and tibiofemoral joint spaces in extension and flexion in asymptomatic volunteers using a wide-bore 3-T closed MRI system. Twenty-two knees from asymptomatic volunteers were examined in knee extension and flexion using a 3-T MRI (sagittal 2D FSE T2-weighted sequence and sagittal 3D isotropic FSE proton density-weighted cube sequence). The meniscal positions, meniscal floating and flounce were evaluated. The widths of the medial and lateral tibiofemoral joint spaces and coronal tibiofemoral angles were measured. In the anteroposterior direction, meniscal extrusion was most frequently seen in the anterior horn of the medial menisci (100 %) in extension (maximum 6.04 mm). Most of the menisci moved significantly to the posterior side from extension to flexion. The anteroposterior meniscal movement was the greatest for the anterior horn of the medial meniscus and least for the posterior horn of the medial meniscus. In the mediolateral direction, meniscal extrusion was seen in 52 % of the medial menisci in extension (maximum 1.91 mm) and 29 % of lateral menisci in flexion (maximum 2.36 mm). From extension to flexion, all medial and lateral menisci moved significantly to the lateral side. Meniscal floating was frequently observed in the posterior horn of medial menisci in extension. Meniscal flounce was frequently seen in lateral menisci in flexion with a widened lateral tibiofemoral joint space gap. The coronal tibiofemoral angle showed medial wedging in flexion, but not in extension. Wide-bore 3-T closed MRI revealed significant kinematic changes in the menisci and tibiofemoral joint spaces in asymptomatic volunteers. (orig.)
Hansen, Jeff L.
2000-01-01
A conceptual design study was completed for a 360 kW Helium-Xenon closed Brayton cycle turbogenerator. The selected configuration is comprised of a single-shaft gas turbine engine coupled directly to a high-speed generator. The engine turbomachinery includes a 2.5:1 pressure ratio compression system with an inlet corrected flow of 0.44 kg/sec. The single centrifugal stage impeller discharges into a scroll via a vaned diffuser. The scroll routes the air into the cold side sector of the recuperator. The hot gas exits a nuclear reactor radiator at 1300 K and enters the turbine via a single-vaned scroll. The hot gases are expanded through the turbine and then diffused before entering the hot side sector of the recuperator. The single shaft design is supported by air bearings. The high efficiency shaft mounted permanent magnet generator produces an output of 370 kW at a speed of 60,000 rpm. The total weight of the turbogenerator is estimated to be only 123 kg (less than 5% of the total power plant) and has a volume of approximately 0.11 cubic meters. This turbogenerator is a key element in achieving the 40 to 45% overall power plant thermal efficiency.
Nelson, M.; Dempster, W. F.; Silverstone, S.; Alling, A.; Allen, J. P.; van Thillo, M.
An experiment utilizing cowpeas (Vigna unguiculata), pinto beans (Phaseolus vulgaris L.) and Apogee ultra-dwarf wheat was conducted in the soil-based closed ecological facility Laboratory Biosphere from February to May 2005. The lighting regime was 13 hours light/11 hours dark at a light intensity of 960 μmol m-2 s-1 (45 moles m-2 day-1) supplied by high-pressure sodium lamps. The pinto beans and cowpeas were grown at two different plant densities. The pinto bean produced 710 g m-2 total aboveground biomass and 341 g m-2 of dry seed at 33.5 plants per m2, and at 37.5 plants per m2 produced 1092 g m-2 total biomass and 537 g m-2 of dry seed, an increase of almost 50%. Cowpeas at 28 plants m-2 yielded 1060 g m-2 of total biomass and 387 g seed m-2, outproducing the less dense planting by more than double (209%) in biomass and 86% more seed, as the planting of 21 plants m-2 produced 508 g m-2 of total biomass and 209 g m-2 of seed. Edible yield rate (EYR) for the denser cowpea bean was 4.6 g m-2 day-1 vs. 2.5 g m-2 day-1 for the less dense stand; average yield was 3.5 g m-2 day-1. EYR for the denser pinto bean was 8.5 g m-2 day-1 vs. 5.3 g m-2 day-1; average EYR for the pinto beans was 7.0 g m-2 day-1. Yield efficiency rate (YER), the ratio of edible to non-edible biomass, was 0.97 for the dense pinto bean, 0.92 for the less dense pinto bean, and averaged 0.94 for the entire crop. The cowpeas
Newman, Andrew J; Hayes, Sarah H; Rao, Abhiram S; Allman, Brian L; Manohar, Senthilvelan; Ding, Dalian; Stolzberg, Daniel; Lobarinas, Edward; Mollendorf, Joseph C; Salvi, Richard
2015-03-15
Military personnel and civilians living in areas of armed conflict have increased risk of exposure to blast overpressures that can cause significant hearing loss and/or brain injury. The equipment used to simulate comparable blast overpressures in animal models within laboratory settings is typically very large and prohibitively expensive. To overcome the fiscal and space limitations introduced by previously reported blast wave generators, we developed a compact, low-cost blast wave generator to investigate the effects of blast exposures on the auditory system and brain. The blast wave generator was constructed largely from off-the-shelf components, and reliably produced blasts with peak sound pressures of up to 198 dB SPL (159.3 kPa) that were qualitatively similar to those produced from muzzle blasts or explosions. Exposure of adult rats to 3 blasts of 188 dB peak SPL (50.4 kPa) resulted in significant loss of cochlear hair cells, reduced outer hair cell function and a decrease in neurogenesis in the hippocampus. Existing blast wave generators are typically large, expensive, and are not commercially available. The blast wave generator reported here provides a low-cost method of generating blast waves in a typical laboratory setting. This compact blast wave generator provides scientists with a low-cost device for investigating the biological mechanisms involved in blast wave injury to the rodent cochlea and brain that may model many of the damaging effects sustained by military personnel and civilians exposed to intense blasts. Copyright © 2015 Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
Jejcic, A.; Maillard, J.; Maurel, G.; Silva, J.; Wolff-Bacha, F.
1997-01-01
Work in the field of parallel processing has developed through research activities using several numerical Monte Carlo simulations related to basic and applied current problems of nuclear and particle physics. For applications utilizing the GEANT code, development and improvement work was done on the parts simulating low-energy physical phenomena such as radiation, transport and interaction. The problem of actinide burning by means of accelerators was approached using a simulation with the GEANT code. A program for neutron tracking in the range of low energies up to the thermal region has been developed. It is coupled to the GEANT code and permits, in a single pass, the simulation of a hybrid reactor core receiving a proton burst. Other work in this field refers to simulations for nuclear medicine applications such as, for instance, development of biological probes, evaluation and characterization of gamma cameras (collimators, crystal thickness), as well as methods for dosimetric calculations. In particular, these calculations are suited for a geometrical parallelization approach especially adapted to parallel machines of the TN310 type. Other work mentioned in the same field refers to simulation of electron channelling in crystals and simulation of the beam-beam interaction effect in colliders. The GEANT code was also used to simulate the operation of germanium detectors designed for natural and artificial radioactivity monitoring of the environment.
Nelson, M.; Dempster, W. F.; Allen, J. P.; Silverstone, S.; Alling, A.; van Thillo, M.
An experiment utilizing cowpeas (Vigna unguiculata L.), pinto beans (Phaseolus vulgaris L.) and Apogee ultra-dwarf wheat (Triticum sativa L.) was conducted in the soil-based closed ecological facility, Laboratory Biosphere, from February to May 2005. The lighting regime was 13 h light/11 h dark at a light intensity of 960 μmol m-2 s-1, 45 mol m-2 day-1 supplied by high-pressure sodium lamps. The pinto beans and cowpeas were grown at two different planting densities. Pinto bean production was 341.5 g dry seed m-2 (5.42 g m-2 day-1) and 579.5 g dry seed m-2 (9.20 g m-2 day-1) at planted densities of 32.5 plants m-2 and 37.5 plants m-2, respectively. Cowpea yielded 187.9 g dry seed m-2 (2.21 g m-2 day-1) and 348.8 g dry seed m-2 (4.10 g m-2 day-1) at planted densities of 20.8 plants m-2 and 27.7 plants m-2, respectively. The crop was grown at elevated atmospheric carbon dioxide levels, with levels ranging from 300-3000 ppm daily during the majority of the crop cycle. During the early stages (first 10 days) of the crop, CO2 was allowed to rise to 7860 ppm while soil respiration dominated, and then was brought down by plant photosynthesis. CO2 was injected 27 times during days 29-71 to replenish CO2 used by the crop during photosynthesis. The temperature regime was 24-28 °C day/20-24 °C night. Pinto bean matured and was harvested 20 days earlier than is typical for this variety, while the cowpea, which had trouble establishing, took 25 days more for harvest than is typical for this variety. Productivity and atmospheric dynamic results of these studies contribute toward the design of an envisioned ground-based test bed prototype Mars base.
McCallum, Ethan
2011-01-01
It's tough to argue with R as a high-quality, cross-platform, open source statistical software product - unless you're in the business of crunching Big Data. This concise book introduces you to several strategies for using R to analyze large datasets. You'll learn the basics of Snow, Multicore, Parallel, and some Hadoop-related tools, including how to find them, how to use them, when they work well, and when they don't. With these packages, you can overcome R's single-threaded nature by spreading work across multiple CPUs, or offloading work to multiple machines to address R's memory barrier.
International Nuclear Information System (INIS)
Potlog, T.
2007-01-01
Thin-film CdS/CdTe solar cells were fabricated by close space sublimation at substrate temperatures ranging from 300 degrees ± 5 degrees to 340 degrees ± degrees. The best photovoltaic parameters were achieved at a substrate temperature of 320 degrees and a source temperature of 610 degrees. The open circuit voltage and current density change significantly with the substrate temperature and depend on the grain size. Grain size is an efficiency-limiting parameter for CdTe layers with large grains. The open circuit voltage and current density are best for cells having grain dimensions between 1.0 μm and ∼5.0 μm. CdS/CdTe solar cells with an efficiency of ∼10% were obtained. (author)
Energy Technology Data Exchange (ETDEWEB)
Uruno, Aya; Usui, Ayaka [Department of Electrical Engineering and Bioscience, Waseda University, 3-4-1 Okubo, Shinjuku, Tokyo 169-8555 (Japan); Kobayashi, Masakazu [Department of Electrical Engineering and Bioscience, Waseda University, 3-4-1 Okubo, Shinjuku, Tokyo 169-8555 (Japan); Kagami Memorial Research Institute for Materials Science and Technology, Waseda University, 2-8-26 Nishiwaseda, Shinjuku, Tokyo 169-0051 (Japan)
2014-07-15
AgGaTe{sub 2} layers were grown on a- and c-plane sapphire substrates by a closed space sublimation method while varying the source temperature. Grown films were evaluated by θ-2θ and pole figure measurements of X-ray diffraction. AgGaTe{sub 2} layers grew with a strong preference for the (103) orientation. However, it was found that Ag{sub 5}Te{sub 3} formed along with the AgGaTe{sub 2} when the layer was grown on c-plane sapphire. The orientation of the film was analyzed using the pole figure, and it was found that AgGaTe{sub 2} layers without Ag{sub 5}Te{sub 3} could be grown on a-plane sapphire. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)
DEFF Research Database (Denmark)
Cramer, Christian N; Kelstrup, Christian D; Olsen, Jesper V
2017-01-01
Mapping of disulfide bonds is an essential part of protein characterization to ensure correct cysteine pairings. For this, mass spectrometry (MS) is the most widely used technique due to fast and accurate characterization. However, MS-based disulfide mapping is challenged when multiple disulfide bonds are present in complicated patterns. This includes the presence of disulfide bonds in nested patterns and closely spaced cysteines. Unambiguous mapping of such disulfide bonds typically requires advanced MS approaches. In this study, we exploited in-source reduction (ISR) of disulfide bonds during the electrospray ionization process to facilitate disulfide bond assignments. We successfully developed a LC-ISR-MS/MS methodology to use as an online and fully automated partial reduction procedure. Postcolumn partial reduction by ISR provided fast and easy identification of peptides involved in disulfide bonding.
Energy Technology Data Exchange (ETDEWEB)
Shimoni, Y; Kouri, D J; Kumar, A [Houston Univ., Tex. (USA). Dept. of Physics
1977-12-01
Full close coupling calculations of magnetic transitions in He + H/sub 2/ collisions are reported. The results are analyzed using the coupling space frame approach of Kouri and Shimoni. This enables one to study the magnetic transition T-matrices as a function of orbital angular momentum number l. The results for transitions which are elastic in rotor state j are found to be dominated by j/sub z/-conserving transitions. Those which are inelastic in j are dominated by j/sub z/-conserving transitions for very low l but at higher l values, the non-j/sub z/-conserving transitions dominate. The results for He + H/sub 2/ are consistent with the recent studies of Shimoni and Kouri of the coupled states approximation.
Zhang, H.-m.; Chen, X.-f.; Chang, S.
It is difficult to compute synthetic seismograms for a layered half-space with sources and receivers at close to or the same depths using the generalized R/T coefficient method (Kennett, 1983; Luco and Apsel, 1983; Yao and Harkrider, 1983; Chen, 1993), because the wavenumber integration converges very slowly. A semi-analytic method for accelerating the convergence, in which part of the integration is implemented analytically, was adopted by some authors (Apsel and Luco, 1983; Hisada, 1994, 1995). In this study, based on the principle of the Repeated Averaging Method (Dahlquist and Björck, 1974; Chang, 1988), we propose an alternative, efficient, numerical method, the peak-trough averaging method (PTAM), to overcome the difficulty mentioned above. Compared with the semi-analytic method, PTAM is not only much simpler mathematically and easier to implement in practice, but also more efficient. Using numerical examples, we illustrate the validity, accuracy and efficiency of the new method.
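The idea behind peak-trough averaging can be sketched on a toy oscillatory integral, ∫0^∞ sin(x)/x dx = π/2: partial integrals taken at successive zero crossings oscillate around the true value, and repeatedly averaging adjacent partial values damps the oscillation and accelerates convergence. This is an illustrative sketch of the averaging principle, not the authors' seismogram code.

```python
# Toy illustration of repeated peak-trough averaging on a slowly
# convergent oscillatory integral: I = integral of sin(x)/x over [0, inf) = pi/2.
import math

def simpson(f, a, b, n=200):
    """Composite Simpson rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

f = lambda x: math.sin(x) / x if x else 1.0

# partial integrals up to x = k*pi oscillate (peak, trough, peak, ...) around pi/2
raw = []
acc = 0.0
for k in range(12):
    acc += simpson(f, k * math.pi, (k + 1) * math.pi)
    raw.append(acc)

# repeated averaging of adjacent partial values damps the oscillation
avg = raw[:]
while len(avg) > 1:
    avg = [(avg[i] + avg[i + 1]) / 2 for i in range(len(avg) - 1)]
ptam = avg[0]

# the averaged estimate is orders of magnitude closer to pi/2 than the raw tail
print(abs(ptam - math.pi / 2) < abs(raw[-1] - math.pi / 2))  # True
```

The raw partial integral at x = 12π is still off by roughly 1/(12π), while the repeatedly averaged value is accurate to better than 1e-3 with the same twelve integrand evaluations, which is the kind of efficiency gain the abstract claims for the wavenumber integration.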
Energy Technology Data Exchange (ETDEWEB)
Xiaonan Li; Sheldon, P.; Moutinho, H.; Matson, R. [National Renewable Energy Lab., Golden, CO (United States)
1996-05-01
The authors describe a methodology developed and applied to the close-spaced sublimation technique for thin-film CdTe deposition. The developed temperature profiles consisted of three discrete temperature segments, which the authors called the nucleation, plugging, and annealing temperatures. They have demonstrated that these temperature profiles can be used to grow large-grain material, plug pinholes, and improve CdS/CdTe photovoltaic device performance by about 15%. The improved material and device properties have been obtained while maintaining deposition temperatures compatible with commercially available substrates. This temperature profiling technique can be easily applied to a manufacturing environment by adjusting the temperature as a function of substrate position instead of time.
Energy Technology Data Exchange (ETDEWEB)
Han, Jun-feng, E-mail: pkuhjf@bit.edu.cn [Institut des Matériaux Jean Rouxel (IMN), Université de Nantes, UMR CNRS 6502, 2 rue de la Houssinière, BP 32229, 44322 Nantes Cedex 3 (France); Institute of Materials Science, Darmstadt University of Technology, Petersenstr. 23, 64287 Darmstadt (Germany); School of Physics, Beijing Institute of Technology, Beijing 100081 (China); Fu, Gan-hua; Krishnakumar, V.; Schimper, Hermann-Josef [Institute of Materials Science, Darmstadt University of Technology, Petersenstr. 23, 64287 Darmstadt (Germany); Liao, Cheng [Department of Physics, Peking University, Beijing 100871 (China); Jaegermann, Wolfram [Institute of Materials Science, Darmstadt University of Technology, Petersenstr. 23, 64287 Darmstadt (Germany); Besland, M.P. [Institut des Matériaux Jean Rouxel (IMN), Université de Nantes, UMR CNRS 6502, 2 rue de la Houssinière, BP 32229, 44322 Nantes Cedex 3 (France)
2015-05-01
The CdS layers were deposited by two different methods, close space sublimation (CSS) and chemical bath deposition (CBD) technique. The CdS/CdTe interface properties were investigated by transmission electron microscope (TEM) and X-ray photoelectron spectroscopy (XPS). The TEM images showed a large CSS-CdS grain size in the range of 70-80 nm. The interface between CSS-CdS and CdTe was clear and sharp, indicating an abrupt hetero-junction. On the other hand, the CBD-CdS layer had a much smaller grain size, in the 5-10 nm range. The interface between CBD-CdS and CdTe was not as clear as for CSS-CdS. With the stepwise coverage of the CdTe layer, the XPS core levels of Cd 3d and S 2p in CSS-CdS had a sudden shift to lower binding energies, while those core levels shifted gradually in CBD-CdS. In addition, XPS depth profile analyses indicated strong diffusion at the interface between CBD-CdS and CdTe. The solar cells prepared using CSS-CdS yielded better device performance than those using CBD-CdS. The relationships between the solar cell performances and the properties of the CdS/CdTe interfaces were discussed. - Highlights: • Studies of CdS deposited by close space sublimation and chemical bath deposition • Observation of the CdS/CdTe interface by transmission electron microscopy • Investigation of the CdS/CdTe interface by X-ray photoelectron spectroscopy • Easier diffusion at the interface between CBD-CdS and CdTe.
Energy Technology Data Exchange (ETDEWEB)
Abounachit, O. [LP2M2E, Faculté des Sciences et Techniques, Université Cadi Ayyad, Gueliz, BP 549 , Marrakech, Maroc (Morocco); Chehouani, H., E-mail: chehouani@hotmail.fr [LP2M2E, Faculté des Sciences et Techniques, Université Cadi Ayyad, Gueliz, BP 549 , Marrakech, Maroc (Morocco); Djessas, K. [CNRS-PROMES Tecnosud, Rambla de la Thermodynamique, 66100 Perpignan (France)
2013-07-01
The quality of CuGaTe{sub 2} (CGT) thin films elaborated by close spaced vapor transport technique has been studied as a function of the source temperature (T{sub S}), iodine pressure (P{sub I2}) and the amount (X{sub Cu}) of pure copper added to the stoichiometric starting material. A thermodynamic model was developed for the Cu–Ga–Te–I system to describe the CGT deposition. The model predicts the solid phase composition with possible impurities for the operating conditions previously mentioned. The conditions of stoichiometric and near-stoichiometric deposition were determined. The value of T{sub S} must range from 450 to 550 °C for P{sub I2} varying between 0.2 and 7 kPa. Adding an amount up to 10% of pure copper to the starting material improves the quality of the deposit layers and lowers the operating interval temperature to 325–550 °C. These optimal conditions were tested experimentally at 480 °C and 500 °C. The X-ray diffraction, scanning electron microscopy, and energy dispersive spectroscopy have proved that the addition of pure copper to the stoichiometric source material can be considered as a supplementary operating parameter to improve the quality of CGT thin films. - Highlights: • The stoichiometric CuGaTe{sub 2} (CGT) has been deposited by close spaced vapor transport. • The Cu–Ga–Te–I system has been studied theoretically by minimizing the Gibbs energy. • The quality of thin films has been improved by pure copper added to the source CGT. • The temperature, pressure and the amount of copper added to grow CGT are determined. • The thermodynamic predictions are in good agreement with experimental results.
Parallel hierarchical radiosity rendering
Energy Technology Data Exchange (ETDEWEB)
Carter, Michael [Iowa State Univ., Ames, IA (United States)
1993-07-01
In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.
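One way to read the symmetrization step mentioned above: by view-factor reciprocity (A_i F_ij = A_j F_ji), scaling row i of the standard radiosity system (I − ρ_i F) by A_i/ρ_i yields a symmetric coefficient matrix, which admits symmetric solvers. A small NumPy sketch of that standard formulation, with synthetic values — not code from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.uniform(1.0, 2.0, n)           # patch areas
S = rng.uniform(0.0, 0.2, (n, n))      # symmetric kernel S_ij = A_i * F_ij
S = (S + S.T) / 2
np.fill_diagonal(S, 0.0)
F = S / A[:, None]                     # form factors obeying reciprocity A_i F_ij = A_j F_ji
rho = rng.uniform(0.3, 0.7, n)         # reflectivities
E = rng.uniform(0.0, 1.0, n)           # emission

M = np.eye(n) - rho[:, None] * F       # standard (non-symmetric) radiosity matrix
B = np.linalg.solve(M, E)              # radiosities

# Row-scale by A_i / rho_i: D @ M = diag(A/rho) - (A_i F_ij), which is symmetric.
D = np.diag(A / rho)
Msym = D @ M
assert np.allclose(Msym, Msym.T)
B2 = np.linalg.solve(Msym, D @ E)      # same solution, now from a symmetric system
assert np.allclose(B, B2)
```

The diagonal scaling changes only the presentation of the linear system, not its solution, so any symmetric solver (e.g., Cholesky or conjugate gradients) becomes applicable.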
DEFF Research Database (Denmark)
Mizuno, T.; Kobayashi, T.; Takara, H.
2014-01-01
We demonstrate dense SDM transmission of 20-WDM multi-carrier PDM-32QAM signals over a 40-km 12-core × 3-mode fiber with 247.9-b/s/Hz spectral efficiency. Parallel MIMO equalization enables 21-ns DMD compensation with 61 TDE taps per subcarrier …
The parallel volume at large distances
DEFF Research Database (Denmark)
Kampf, Jürgen
In this paper we examine the asymptotic behavior of the parallel volume of planar non-convex bodies as the distance tends to infinity. We show that the difference between the parallel volume of the convex hull of a body and the parallel volume of the body itself tends to 0. This yields a new proof of the fact that a planar body can only have polynomial parallel volume if it is convex. Extensions to Minkowski spaces and random sets are also discussed.
Directory of Open Access Journals (Sweden)
E. S. Belenkaya
2007-06-01
We study the dependence of Saturn's magnetospheric magnetic field structure on the interplanetary magnetic field (IMF), together with the corresponding variations of the open-closed field line boundary in the ionosphere. Specifically, we investigate the interval from 8 to 30 January 2004, when UV images of Saturn's southern aurora were obtained by the Hubble Space Telescope (HST) and simultaneous interplanetary measurements were provided by the Cassini spacecraft, located near the ecliptic ~0.2 AU upstream of Saturn and ~0.5 AU off the planet-Sun line towards dawn. Using the paraboloid model of Saturn's magnetosphere, we calculate the magnetospheric magnetic field structure for several values of the IMF vector representative of interplanetary compression regions. Variations in the magnetic structure lead to different shapes and areas of the open field line region in the ionosphere. Comparison with the HST auroral images shows that the area of the computed open flux region is generally comparable to that enclosed by the auroral oval, and sometimes agrees in detail with its poleward boundary, though more typically being displaced by a few degrees in the tailward direction.
Directory of Open Access Journals (Sweden)
Wagner Anacleto Pinheiro
2006-03-01
Unlike other thin film deposition techniques, close spaced sublimation (CSS) requires a short source-substrate distance. The kind of source used in this technique strongly affects the control of the deposition parameters, especially the deposition rate. When depositing CdTe thin films by CSS, the most common CdTe sources are: single-crystal or polycrystalline wafers; powders, pellets or pieces; a thick CdTe film deposited onto a glass or molybdenum substrate (CdTe source-plate); and a sintered CdTe powder. In this work, CdTe thin films were deposited by the CSS technique from different CdTe sources: particles, powder, compact powder, a paste made of CdTe and propylene glycol, and source-plates (CdTe/Mo and CdTe/glass). The largest deposition rate was achieved when the paste made of CdTe and propylene glycol was used as the source. CdTe source-plates led to lower rates, probably due to poor heat transmission caused by the introduction of the plate substrate. The results also showed that compacting the powder increases the deposition rate, owing to the better thermal contact between powder particles.
Energy Technology Data Exchange (ETDEWEB)
Okamoto, Tamotsu; Akiba, Sho; Takahashi, Kohei; Nagatsuka, Satsuki; Kanda, Yohei [Department of Electrical and Electronic Engineering, Kisarazu National College of Technology, 2-11-1 Kiyomidai-higashi, Kisarazu, Chiba 292-0041 (Japan); Tokuda, Satoshi; Kishihara, Hiroyuki; Sato, Toshiyuki [Technology Research Laboratory, Shimadzu Corporation, 3-9-4 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0237 (Japan)
2014-07-15
The effects of a ZnTe layer on the deposition of a Cd₁₋ₓZnₓTe (CZT) layer in the initial stage of close-spaced sublimation (CSS) deposition were investigated. The deposition rate was almost constant in the initial stage of the CdTe deposition on the ZnTe/graphite substrates. However, the deposition rate within 1 minute was lower than that after 1 minute in the CdTe deposition on graphite substrates. This result suggests that nucleation of CdTe directly deposited on a graphite substrate is difficult compared to that with a ZnTe layer. Furthermore, the effects of CdCl₂ and ZnTe additions to the CdTe sources in the CSS deposition were also investigated. Both the grain size and the intensity of donor-acceptor pair (DAP) emission in photoluminescence (PL) spectra were decreased by the effect of the CdCl₂ addition. The Zn content in CZT films was controlled by the ZnTe ratio in the CdTe/ZnTe powder sources.
International Nuclear Information System (INIS)
Jelatis, G.J.
1983-01-01
Third sound in superfluid helium-4 films has been investigated using two parallel-plate waveguides. These investigations led to the observation of fifth sound, a new mode of sound propagation. Both waveguides consisted of two parallel pieces of vitreous quartz. The sound speed was obtained by measuring the time-of-flight of pulsed third sound over a known distance. Investigations from 1.0–1.7 K were possible with the use of superconducting bolometers, which measure the temperature component of the third sound wave. Observations were initially made with a waveguide having a plate separation fixed at five microns. Adiabatic third sound was measured in this geometry. Isothermal third sound was also observed, using the usual single-substrate technique. Fifth sound speeds, calculated from the two-fluid theory of helium and the speeds of the two forms of third sound, agreed in size and temperature dependence with theoretical predictions. Nevertheless, only equivocal observations of fifth sound were made. As a result, the film-substrate interaction was examined, and estimates of the Kapitza conductance were made. Assuming the dominance of the effects of this conductance over those due to the ECEs led to a new expression for fifth sound. A reanalysis of the initial data was made, which contained no adjustable parameters. The observation of fifth sound was seen to be consistent with the existence of an anomalously low boundary conductance.
Streaming for Functional Data-Parallel Languages
DEFF Research Database (Denmark)
Madsen, Frederik Meisner
In this thesis, we investigate streaming as a general solution to the space inefficiency commonly found in functional data-parallel programming languages. The data-parallel paradigm maps well to parallel SIMD-style hardware. However, the traditional fully materializing execution strategy … flattening necessitates all sub-computations to materialize at the same time. For example, naive n by n matrix multiplication requires n^3 space in NESL because the algorithm contains n^3 independent scalar multiplications. For large values of n, this is completely unacceptable. We address the problem … by extending two existing data-parallel languages: NESL and Accelerate. In the extensions we map bulk operations to data-parallel streams that can evaluate fully sequentially, fully in parallel, or anything in between. By a dataflow, piecewise-parallel execution strategy, the runtime system can adjust to any target …
Shipley, Heath V.; Lange-Vagle, Daniel; Marchesini, Danilo; Brammer, Gabriel B.; Ferrarese, Laura; Stefanon, Mauro; Kado-Fong, Erin; Whitaker, Katherine E.; Oesch, Pascal A.; Feinstein, Adina D.; Labbé, Ivo; Lundgren, Britt; Martis, Nicholas; Muzzin, Adam; Nedkova, Kalina; Skelton, Rosalind; van der Wel, Arjen
2018-03-01
We present Hubble multi-wavelength photometric catalogs, including (up to) 17 filters with the Advanced Camera for Surveys and Wide Field Camera 3 from the ultraviolet to near-infrared for the Hubble Frontier Fields and associated parallels. We have constructed homogeneous photometric catalogs for all six clusters and their parallels. To further expand these data catalogs, we have added ultra-deep K_S-band imaging at 2.2 μm from the Very Large Telescope HAWK-I and Keck-I MOSFIRE instruments. We also add post-cryogenic Spitzer imaging at 3.6 and 4.5 μm with the Infrared Array Camera (IRAC), as well as archival IRAC 5.8 and 8.0 μm imaging when available. We introduce the public release of the multi-wavelength (0.2–8 μm) photometric catalogs, and we describe the unique steps applied for the construction of these catalogs. Particular emphasis is given to the source detection band, the contamination of light from the bright cluster galaxies (bCGs), and intra-cluster light (ICL). In addition to the photometric catalogs, we provide catalogs of photometric redshifts and stellar population properties. Furthermore, this includes all the images used in the construction of the catalogs, including the combined models of bCGs and ICL, the residual images, segmentation maps, and more. These catalogs are a robust data set of the Hubble Frontier Fields and will be an important aid in designing future surveys, as well as planning follow-up programs with current and future observatories to answer key questions remaining about first light, reionization, the assembly of galaxies, and many more topics, most notably by identifying high-redshift sources to target.
International Nuclear Information System (INIS)
Gevorkyan, A.S.; Abajyan, H.G.
2011-01-01
We have investigated the statistical properties of an ensemble of disordered 1D spatial spin chains (SSCs) of finite length, placed in an external field, with consideration of relaxation effects. A short-range-interaction complex-classical Hamiltonian was first used for solving this problem. A system of recurrent equations is obtained on the nodes of the spin-chain lattice. An efficient mathematical algorithm is developed on the basis of these equations, with consideration of the advanced Sylvester conditions, which allows a huge number of stable spin chains to be constructed step by step in parallel. The distribution functions of different parameters of the spin-glass system are constructed from the first principles of complex classical mechanics by analyzing the calculation results of the 1D SSC ensemble. It is shown that the behavior of the parameter distributions is quite different depending on the external fields. The ensemble energies and the constants of spin-spin interactions change smoothly with the external field in the limit of statistical equilibrium, while others, such as the mean polarization of the ensemble and its ordering parameters, are frustrated. We have also studied some critical properties of the ensemble, such as catastrophes in the Clausius-Mossotti equation, depending on the value of the external field. We have shown that the generalized complex-classical approach excludes these catastrophes, allowing one to organize continuous parallel computing over the whole region of values of the external field, including critical points. A new representation of the partition function based on these investigations is suggested. As opposed to the usual definition, this function is complex and its derivatives are everywhere defined, including critical points.
Photoluminescence spectra of n-doped double quantum wells in a parallel magnetic field
International Nuclear Information System (INIS)
Huang, D.; Lyo, S.K.
1999-01-01
We show that the photoluminescence (PL) line shapes from tunnel-split ground sublevels of n-doped thin double quantum wells (DQWs) are sensitively modulated by an in-plane magnetic field B∥ at low temperatures (T). The modulation is caused by the B∥-induced distortion of the electronic structure. The latter arises from the relative shift of the energy-dispersion parabolas of the two quantum wells (QWs) in k-space, both in the conduction and valence bands, and the formation of an anticrossing gap in the conduction band. Using a self-consistent density-functional theory, the PL spectra and the band-gap narrowing are calculated as a function of B∥, T, and the homogeneous linewidths. The PL spectra from symmetric and asymmetric DQWs are found to show strikingly different behavior. In symmetric DQWs with a high density of electrons, two PL peaks are obtained at B∥ = 0, representing the interband transitions between the pair of upper (i.e., antisymmetric) levels and the pair of lower (i.e., symmetric) levels of the ground doublets. As B∥ increases, the upper PL peak develops an N-type kink, namely a maximum followed by a minimum, and merges with the lower peak, which rises monotonically as a function of B∥ due to the diamagnetic energy. When the electron density is low, however, only a single PL peak, arising from transitions between the lower levels, is obtained. In asymmetric DQWs, the PL spectra show mainly one dominant peak at all B∥. In this case, the holes are localized in one of the QWs at low T and recombine only with electrons in the same QW. At high electron densities, the upper PL peak shows an N-type kink as in symmetric DQWs. However, the lower peak is absent at low B∥ because it arises from inter-QW transitions. Reasonable agreement is obtained with recent …
International Nuclear Information System (INIS)
Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G
2007-01-01
The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results.
PDDP, A Data Parallel Programming Model
Directory of Open Access Journals (Sweden)
Karen H. Warren
1996-01-01
PDDP, the parallel data distribution preprocessor, is a data-parallel programming model for distributed-memory parallel computers. PDDP implements High Performance Fortran-compatible data distribution directives and parallelism expressed by the use of Fortran 90 array syntax, the FORALL statement, and the WHERE construct. Distributed data objects belong to a global name space; other data objects are treated as local and replicated on each processor. PDDP allows the user to program in a shared memory style and generates codes that are portable to a variety of parallel machines. For interprocessor communication, PDDP uses the fastest communication primitives on each platform.
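The whole-array parallelism PDDP compiles (Fortran 90 array expressions, FORALL, WHERE) has a close analogue in NumPy's vectorized operations; a toy illustration of the correspondence, not PDDP itself:

```python
import numpy as np

a = np.arange(10, dtype=float)

# Fortran 90:  FORALL (i = 1:10) b(i) = 2 * a(i)
b = 2 * a                      # whole-array expression, no explicit loop

# Fortran 90:  WHERE (b > 10) c = b; ELSEWHERE c = 0
c = np.where(b > 10, b, 0.0)   # element-wise masked assignment

assert c.tolist() == [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 12.0, 14.0, 16.0, 18.0]
```

In both notations the compiler (or library) is free to evaluate every element independently, which is exactly the property a preprocessor like PDDP exploits when distributing the arrays across processors.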
Nakamura, M; Kitayama, K
1998-05-10
Optical space code-division multiple access is a scheme to multiplex and link data between two-dimensional processors such as smart pixels and spatial light modulators or arrays of optical sources like vertical-cavity surface-emitting lasers. We examine the multiplexing characteristics of optical space code-division multiple access by using optical orthogonal signature patterns. The probability density function of interference noise in interfering optical orthogonal signature patterns is calculated. The bit-error rate is derived from the result and plotted as a function of receiver threshold, code length, code weight, and number of users. Furthermore, we propose a prethresholding method to suppress the interference noise, and we experimentally verify that the method works effectively in improving system performance.
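Under the common optical-orthogonal-code assumption that each of the K−1 interferers produces a one-chip "hit" with probability p = w²/(2L) (code weight w, code length L), the hard-threshold bit-error rate takes a binomial form. A sketch under those textbook assumptions — not the paper's exact two-dimensional signature-pattern model:

```python
from math import comb

def ooc_ber(L, w, K, threshold):
    """Binomial-model BER for a unity-cross-correlation optical orthogonal code.

    p approximates the probability that a single interferer overlaps one chip
    of the desired codeword (standard approximation p = w^2 / (2L)); an error
    can occur only when interference reaches the decision threshold.
    """
    p = w * w / (2 * L)
    return 0.5 * sum(
        comb(K - 1, i) * p**i * (1 - p) ** (K - 1 - i)
        for i in range(threshold, K)
    )

# BER worsens as more users share the channel ...
assert ooc_ber(1000, 5, 10, 5) < ooc_ber(1000, 5, 30, 5)
# ... and improves as the decision threshold is raised toward the code weight.
assert ooc_ber(1000, 5, 30, 5) < ooc_ber(1000, 5, 30, 3)
```

The same qualitative trade-off — threshold versus code length, weight, and number of users — is what the abstract's BER curves trace, and the proposed prethresholding effectively lowers p before the decision stage.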
Fringe Capacitance of a Parallel-Plate Capacitor.
Hale, D. P.
1978-01-01
Describes an experiment designed to measure the forces between charged parallel plates, and determines the relationship among the effective electrode area, the measured capacitance values, and the electrode spacing of a parallel plate capacitor.
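The ideal, fringe-free relation the experiment perturbs is C = ε₀εᵣA/d; fringing fields make the measured capacitance read as if the electrode area were slightly larger. A quick numeric check of the ideal formula:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def parallel_plate_capacitance(area_m2, gap_m, eps_r=1.0):
    """Ideal parallel-plate capacitance C = eps_r * eps0 * A / d.

    Real plates read slightly higher because fringing fields
    enlarge the effective electrode area."""
    return eps_r * EPS0 * area_m2 / gap_m

# 10 cm x 10 cm plates separated by 1 mm in air: about 88.5 pF ideal.
c = parallel_plate_capacitance(0.01, 1e-3)
assert abs(c - 88.54e-12) < 0.1e-12
```

Comparing such ideal values against measured ones at several spacings is precisely how the fringe contribution is isolated.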
Streaming nested data parallelism on multicores
DEFF Research Database (Denmark)
Madsen, Frederik Meisner; Filinski, Andrzej
2016-01-01
The paradigm of nested data parallelism (NDP) allows a variety of semi-regular computation tasks to be mapped onto SIMD-style hardware, including GPUs and vector units. However, some care is needed to keep down space consumption in situations where the available parallelism may vastly exceed...
Parallel Polarization State Generation.
She, Alan; Capasso, Federico
2016-05-17
The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.
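The serial-versus-parallel distinction in the abstract is plain matrix algebra on Jones vectors: cascaded elements compose by a product of matrices, while spatially separated, amplitude-modulated paths that are beam-combined compose by a weighted sum. A toy NumPy illustration, assuming idealized lossless combining:

```python
import numpy as np

H = np.array([[1, 0], [0, 0]], dtype=complex)          # horizontal polarizer
D = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)    # 45-degree polarizer

x = np.array([1, 1j], dtype=complex) / np.sqrt(2)      # input: circular polarization

# Serial architecture: elements in cascade -> product of matrices.
serial = D @ H @ x

# Parallel architecture: split the beam, modulate each arm's amplitude
# (weights a1, a2), recombine -> weighted SUM of matrices.
a1, a2 = 0.3, 0.7
parallel = (a1 * H + a2 * D) @ x

# Linearity: recombining the arms equals summing their individual outputs.
assert np.allclose(parallel, a1 * (H @ x) + a2 * (D @ x))
```

Because the arm weights a1, a2 are set by intensity modulators, the sum architecture lets intensity-modulation hardware dictate speed and spectral range, as the abstract argues.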
Green's function for electrons in a narrow quantum well in a parallel magnetic field
International Nuclear Information System (INIS)
Horing, Norman J. Morgenstern; Glasser, M. Lawrence; Dong Bing
2005-01-01
Electron dynamics in a narrow quantum well in a parallel magnetic field of arbitrary strength are examined here. We derive an explicit analytical closed-form solution for the Green's function of Landau-quantized electrons in skipping states of motion between the narrow well walls coupled with in-plane translational motion and hybridized with the zero-field lowest subband energy eigenstate. Such Landau-quantized modes are not uniformly spaced.
Parallel Programming with Intel Parallel Studio XE
Blair-Chappell , Stephen
2012-01-01
Optimize code for multi-core processors with Intel's Parallel Studio. Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the …
Breuning, K.H.
2008-01-01
Unilateral closure of maxillary extraction spaces in patients with Class III malocclusion can be challenging. This case report describes the closure of first premolar and first molar extraction spaces in a patient with a Class III dental relationship. Two miniscrews were used for intraoral skeletal …
Parallel algorithms for mapping pipelined and parallel computations
Nicol, David M.
1988-01-01
Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm³) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm²) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
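The flavor of these mapping problems is easiest to see in the simplest case the paper improves on: assigning m pipeline modules to n processors in a linear array, i.e., partitioning the module chain into contiguous groups so that the heaviest group (the pipeline bottleneck) is as light as possible. A straightforward O(n·m²) dynamic program — deliberately simpler and slower than the paper's O(nm log m) algorithm:

```python
def min_bottleneck(weights, n_procs):
    """Partition `weights` into at most `n_procs` contiguous groups,
    minimizing the maximum group sum (the pipeline bottleneck)."""
    m = len(weights)
    prefix = [0]
    for w in weights:
        prefix.append(prefix[-1] + w)
    INF = float("inf")
    # dp[k][j] = best bottleneck using k groups for the first j modules
    dp = [[INF] * (m + 1) for _ in range(n_procs + 1)]
    dp[0][0] = 0
    for k in range(1, n_procs + 1):
        for j in range(1, m + 1):
            for i in range(j):  # last group covers modules i..j-1
                cost = max(dp[k - 1][i], prefix[j] - prefix[i])
                dp[k][j] = min(dp[k][j], cost)
    return min(dp[k][m] for k in range(1, n_procs + 1))

# Splitting [1,2,3,4,5] across 2 processors: best split is [1,2,3] | [4,5] -> 9.
assert min_bottleneck([1, 2, 3, 4, 5], 2) == 9
```

The published improvements come from exploiting monotonicity in exactly this kind of recurrence, which lets the inner search be pruned well below the naive O(m) per state.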
Second order parallel tensors on some paracontact manifolds | Liu ...
African Journals Online (AJOL)
The object of the present paper is to study the symmetric and skew-symmetric properties of a second order parallel tensor on paracontact metric (k, μ)-spaces and almost β-para-Kenmotsu (k, μ)-spaces. In this paper, we prove that if there exists a second order symmetric parallel tensor on a paracontact metric (k, μ)-space M, ...
On synchronous parallel computations with independent probabilistic choice
International Nuclear Information System (INIS)
Reif, J.H.
1984-01-01
This paper introduces probabilistic choice to synchronous parallel machine models, in particular parallel RAMs. The power of probabilistic choice in parallel computations is illustrated by parallelizing some known probabilistic sequential algorithms. The authors characterize the computational complexity of time-, space-, and processor-bounded probabilistic parallel RAMs in terms of the computational complexity of probabilistic sequential RAMs. They show that parallelism uniformly speeds up time-bounded probabilistic sequential RAM computations by nearly a quadratic factor. They also show that probabilistic choice can be eliminated from parallel computations by introducing nonuniformity.
Parallel imaging with phase scrambling.
Zaitsev, Maxim; Schultz, Gerrit; Hennig, Juergen; Gruetter, Rolf; Gallichan, Daniel
2015-04-01
Most existing methods for accelerated parallel imaging in MRI require additional data, which are used to derive information about the sensitivity profile of each radiofrequency (RF) channel. In this work, a method is presented to avoid the acquisition of separate coil calibration data for accelerated Cartesian trajectories. Quadratic phase is imparted to the image to spread the signals in k-space (aka phase scrambling). By rewriting the Fourier transform as a convolution operation, a window can be introduced to the convolved chirp function, allowing a low-resolution image to be reconstructed from phase-scrambled data without prominent aliasing. This image (for each RF channel) can be used to derive coil sensitivities to drive existing parallel imaging techniques. As a proof of concept, the quadratic phase was applied by introducing an offset to the x² − y² shim and the data were reconstructed using adapted versions of the image-space-based sensitivity encoding and GeneRalized Autocalibrating Partially Parallel Acquisitions algorithms. The method is demonstrated in a phantom (1 × 2, 1 × 3, and 2 × 2 acceleration) and in vivo (2 × 2 acceleration) using a 3D gradient echo acquisition. Phase scrambling can be used to perform parallel imaging acceleration without acquisition of separate coil calibration data, demonstrated here for a 3D-Cartesian trajectory. Further research is required to prove the applicability to other 2D and 3D sampling schemes.
Morse, H Stephen
1994-01-01
Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment. Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. …
Akl, Selim G
1985-01-01
Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where algorithms can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the …
Combinatorics of spreads and parallelisms
Johnson, Norman
2010-01-01
Partitions of Vector Spaces; Quasi-Subgeometry Partitions; Finite Focal-Spreads; Generalizing André Spreads; The Going Up Construction for Focal-Spreads; Subgeometry Partitions; Subgeometry and Quasi-Subgeometry Partitions; Subgeometries from Focal-Spreads; Extended André Subgeometries; Kantor's Flag-Transitive Designs; Maximal Additive Partial Spreads; Subplane Covered Nets and Baer Groups; Partial Desarguesian t-Parallelisms; Direct Products of Affine Planes; Jha-Johnson SL(2, …
New algorithms for parallel MRI
International Nuclear Information System (INIS)
Anzengruber, S; Ramlau, R; Bauer, F; Leitao, A
2008-01-01
Magnetic Resonance Imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured lines of the Fourier domain (k-space). In contrast to well-known algorithms like SENSE and GRAPPA and its flavors, we consider the problem as a non-linear inverse problem. However, in order to avoid cost-intensive derivatives, we will use Landweber-Kaczmarz iteration and, in order to improve the overall results, some additional sparsity constraints.
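For a generic linear inverse problem y = Ax, Landweber iteration of the kind mentioned avoids expensive explicit inverses by repeatedly applying the adjoint operator. A bare-bones sketch on a toy complex system — not the authors' MRI encoding operator, and without the Kaczmarz sweeps or sparsity terms:

```python
import numpy as np

def landweber(A, y, steps=2000):
    """x_{k+1} = x_k + tau * A^H (y - A x_k), with tau < 2 / ||A||^2."""
    tau = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step from the spectral norm
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(steps):
        x = x + tau * A.conj().T @ (y - A @ x)
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 4)) + 1j * rng.normal(size=(8, 4))
x_true = rng.normal(size=4) + 1j * rng.normal(size=4)
y = A @ x_true

x_rec = landweber(A, y)
assert np.allclose(x_rec, x_true, atol=1e-5)
```

The Kaczmarz variant cycles the same update over subsets of the measurements (e.g., per receive coil), and sparsity constraints would enter as a shrinkage step after each update.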
NonLinear Parallel OPtimization Tool, Phase II
National Aeronautics and Space Administration — The technological advancement proposed is a novel large-scale Nonlinear Parallel OPtimization Tool (NLPAROPT). This software package will eliminate the computational...
Introduction to parallel programming
Brawer, Steven
1989-01-01
Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race …
Fox, Geoffrey C; Messina, Guiseppe C
2014-01-01
A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop …
Introduction to parallel algorithms and architectures arrays, trees, hypercubes
Leighton, F Thomson
1991-01-01
Introduction to Parallel Algorithms and Architectures: Arrays Trees Hypercubes provides an introduction to the expanding field of parallel algorithms and architectures. This book focuses on parallel computation involving the most popular network architectures, namely, arrays, trees, hypercubes, and some closely related networks. Organized into three chapters, this book begins with an overview of the simplest architectures of arrays and trees. This text then presents the structures and relationships between the dominant network architectures, as well as the most efficient parallel algorithms for …
Reconfigurable Parallel Computer Architectures for Space Applications
2012-08-07
The CU has been fully implemented in an FPGA using VHDL. The CU hardware design is depicted in Figure 12. It consists of a main … the hardware design implemented in the FPGA using VHDL. The block diagram shows the dependency of all the VHDL blocks included in the design.
Non-Cartesian parallel imaging reconstruction.
Wright, Katherine L; Hamilton, Jesse I; Griswold, Mark A; Gulani, Vikas; Seiberlich, Nicole
2014-11-01
Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be used to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the nonhomogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian generalized autocalibrating partially parallel acquisition (GRAPPA), and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered.
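Of the methods named, CG SENSE is the most generic: it solves the normal equations EᴴE x = Eᴴy with conjugate gradients, needing only a routine that applies the encoding operator. A generic CG sketch on a stand-in random matrix — not an actual MRI encoding operator with coil sensitivities and gridding:

```python
import numpy as np

def conjugate_gradient(apply_A, b, iters=50, tol=1e-10):
    """Solve A x = b for Hermitian positive-definite A, given only a
    function that applies A (as CG SENSE does with E^H E)."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(iters):
        Ap = apply_A(p)
        alpha = rs / np.vdot(p, Ap).real
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r).real
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(2)
E = rng.normal(size=(12, 6)) + 1j * rng.normal(size=(12, 6))  # stand-in encoding matrix
x_true = rng.normal(size=6) + 1j * rng.normal(size=6)
y = E @ x_true

# Normal equations: E^H E x = E^H y
x_rec = conjugate_gradient(lambda v: E.conj().T @ (E @ v), E.conj().T @ y)
assert np.allclose(x_rec, x_true, atol=1e-6)
```

In the real non-Cartesian setting, `apply_A` would chain gridding, FFTs, and coil-sensitivity weighting rather than a dense matrix product; the CG driver itself is unchanged.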
Parallel Atomistic Simulations
Energy Technology Data Exchange (ETDEWEB)
HEFFELFINGER,GRANT S.
2000-01-18
Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated-data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories: those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods, such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination, are also reviewed, and issues are discussed such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains.
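The spatial decomposition named above can be illustrated with a serial cell-list sketch; in a real parallel MD code each cell region would be owned by one processor, which computes forces only against its own and neighbouring cells. The box size, cell count, and example positions below are assumptions chosen for illustration.

```python
# Sketch of the spatial decomposition used in parallel MD: the periodic
# simulation box is cut into cells, each (conceptually) owned by one
# processor, and a particle interacts only with particles in its own
# cell and the neighbouring cells.

BOX = 10.0      # box edge length (assumed)
CELLS = 5       # cells per edge; cell size must exceed the interaction cutoff
CELL = BOX / CELLS

def cell_of(pos):
    """Map a 2-D position to its integer cell coordinates (periodic box)."""
    return (int(pos[0] / CELL) % CELLS, int(pos[1] / CELL) % CELLS)

def build_cell_list(positions):
    cells = {}
    for i, p in enumerate(positions):
        cells.setdefault(cell_of(p), []).append(i)
    return cells

def neighbour_candidates(i, positions, cells):
    """Indices a processor would examine when computing forces on particle i."""
    cx, cy = cell_of(positions[i])
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            out.extend(cells.get(((cx + dx) % CELLS, (cy + dy) % CELLS), []))
    return [j for j in out if j != i]

# example: two nearby particles plus one that is close only across the
# periodic boundary
positions = [(0.5, 0.5), (1.5, 0.5), (9.5, 9.5)]
cells = build_cell_list(positions)
```

Because each cell exchanges data only with its 8 (in 2-D) or 26 (in 3-D) neighbours, communication stays local, which is the property that makes the spatial decomposition scale well relative to the replicated-data approach.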
Alain Berinstain; Alan Scott; Matthew Bamsey; Michael Dixon; Cody Thompson; Thomas Graham
2012-01-01
The ability to monitor and control plant nutrient ions in fertigation solutions, on an ion-specific basis, is critical to the future of controlled environment agriculture crop production, be it in traditional terrestrial settings (e.g., greenhouse crop production) or as a component of bioregenerative life support systems for long duration space exploration. Several technologies are currently available that can provide the required measurement of ion-specific activities in solution. The greenh...
Directory of Open Access Journals (Sweden)
Alain Berinstain
2012-10-01
The ability to monitor and control plant nutrient ions in fertigation solutions, on an ion-specific basis, is critical to the future of controlled environment agriculture crop production, be it in traditional terrestrial settings (e.g., greenhouse crop production) or as a component of bioregenerative life support systems for long-duration space exploration. Several technologies are currently available that can provide the required measurement of ion-specific activities in solution. The greenhouse sector has invested in research examining the potential of a number of these technologies to meet the industry's demanding requirements, and although no ideal solution yet exists for on-line measurement, growers do utilize technologies such as high-performance liquid chromatography to provide off-line measurements. An analogous situation exists on the International Space Station where technological solutions are sought, but currently on-orbit water quality monitoring is considerably restricted. This paper examines the specific advantages that on-line ion-selective sensors could provide to plant production systems, both terrestrially and when utilized in space-based biological life support systems, and how similar technologies could be applied to nominal on-orbit water quality monitoring. A historical development and technical review of the various ion-selective monitoring technologies is provided.
Energy Technology Data Exchange (ETDEWEB)
M. Chen; CM Regan; D. Noe
2006-01-09
Few data exist for UO₂ or UN within the notional design space for the Prometheus-1 reactor (low fission rate, high temperature, long duration). As such, basic testing is required to validate predictions (and in some cases determine) performance aspects of these fuels. Therefore, the MICE-3B test of UO₂ pellets was designed to provide data on gas release, unrestrained swelling, and restrained swelling at the upper range of fission rates expected for a space reactor. These data would be compared with model predictions and used to determine adequacy of a space reactor design basis relative to fission gas release and swelling of UO₂ fuel and to assess potential pellet-clad interactions. A primary goal of an irradiation test for UN fuel was to assess performance issues currently associated with this fuel type such as gas release, swelling and transient performance. Information learned from this effort may have enabled use of UN fuel for future applications.
Parallel-In-Time For Moving Meshes
Energy Technology Data Exchange (ETDEWEB)
Falgout, R. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Manteuffel, T. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Southworth, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schroder, J. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2016-02-04
With steadily growing computational resources available, scientists must develop effective ways to utilize the increased resources. High-performance, highly parallel software has become a standard. However, until recent years parallelism has focused primarily on the spatial domain. When solving a space-time partial differential equation (PDE), this leads to a sequential bottleneck in the temporal dimension, particularly when taking a large number of time steps. The XBraid parallel-in-time library was developed as a practical way to add temporal parallelism to existing sequential codes with only minor modifications. In this work, a rezoning-type moving mesh is applied to a diffusion problem and formulated in a parallel-in-time framework. Tests and scaling studies are run using XBraid and demonstrate excellent results for the simple model problem considered herein.
CERN. Geneva
2016-01-01
The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuation s...
Parallelism in matrix computations
Gallopoulos, Efstratios; Sameh, Ahmed H
2016-01-01
This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...
Institute of Scientific and Technical Information of China (English)
郑家芝
2016-01-01
For accurate localization of closely spaced coherent sources, an improved method based on the group delay of Multiple Signal Classification (MUSIC) is presented. First, the spatial smoothing technique is introduced into direction-of-arrival (DoA) estimation to remove the coherent part of the signals. Because the performance of subspace-based methods degrades when sources are closely spaced, the MUSIC-group delay algorithm is then used to distinguish them: owing to its spatial additive property, the group delay function computed from the MUSIC phase spectrum can resolve spatially close sources for efficient DoA estimation. Theoretical analysis and simulation results demonstrate that the proposed approach estimates the DoA of closely spaced coherent sources more precisely and with higher resolution than subspace-based methods.
DEFF Research Database (Denmark)
Sitchinava, Nodar; Zeh, Norbert
2012-01-01
We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of...... in the optimal O(sort_P(N) + K/PB) parallel I/O complexity, where K is the size of the output reported in the process and sort_P(N) is the parallel I/O complexity of sorting N elements using P processors.
Parallel Algorithms and Patterns
Energy Technology Data Exchange (ETDEWEB)
Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
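As a concrete instance of one of the patterns named above, here is a serial sketch of the Blelloch work-efficient exclusive scan; each level's updates are independent and would execute concurrently on a parallel machine. The power-of-two length restriction is a simplification of this sketch, not a limitation of the pattern.

```python
# Serial sketch of the Blelloch work-efficient parallel scan: an up-sweep
# (reduction tree) followed by a down-sweep. All updates at a given tree
# level touch disjoint elements, so they could run in parallel.

def exclusive_scan(data):
    n = len(data)
    assert n and n & (n - 1) == 0, "sketch assumes a power-of-two length"
    a = list(data)
    # up-sweep: build partial sums up the tree
    d = 1
    while d < n:
        for i in range(2 * d - 1, n, 2 * d):
            a[i] += a[i - d]
        d *= 2
    # down-sweep: clear the root, then push prefixes back down
    a[n - 1] = 0
    d = n // 2
    while d >= 1:
        for i in range(2 * d - 1, n, 2 * d):
            a[i - d], a[i] = a[i], a[i] + a[i - d]
        d //= 2
    return a
```

The up-sweep is itself a reduction, which is why scan is often taught immediately after reduction: both do O(n) work in O(log n) parallel steps.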
Application Portable Parallel Library
Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott
1995-01-01
Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which created to another parallel computer. ("Parallel computer" also includes heterogeneous collections of networked computers.) Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.
Energy Technology Data Exchange (ETDEWEB)
Fertitta, E.; Paulus, B. [Institut für Chemie und Biochemie, Freie Universität Berlin, Takustr. 3, 14195 Berlin (Germany); Barcza, G.; Legeza, Ö. [Strongly Correlated Systems “Lendület” Research Group, Wigner Research Centre for Physics, P.O. Box 49, Budapest (Hungary)
2015-09-21
The method of increments (MoI) has been employed using the complete active space formalism in order to calculate the dissociation curve of beryllium ring-shaped clusters Beₙ of different sizes. Benchmarks obtained through different quantum chemical methods including the ab initio density matrix renormalization group were used to verify the validity of the MoI truncation which showed a reliable behavior for the whole dissociation curve. Moreover we investigated the size dependence of the correlation energy at different interatomic distances in order to extrapolate the values for the periodic chain and to discuss the transition from a metal-like to an insulator-like behavior of the wave function through quantum chemical considerations.
Energy Technology Data Exchange (ETDEWEB)
Dominguez, A.; Siana, B.; Masters, D. [Department of Physics and Astronomy, University of California Riverside, Riverside, CA 92521 (United States); Henry, A. L.; Martin, C. L. [Department of Physics, University of California, Santa Barbara, CA 93106 (United States); Scarlata, C.; Bedregal, A. G. [Minnesota Institute for Astrophysics, University of Minnesota, Minneapolis, MN 55455 (United States); Malkan, M.; Ross, N. R. [Department of Physics and Astronomy, University of California Los Angeles, Los Angeles, CA 90095 (United States); Atek, H.; Colbert, J. W. [Spitzer Science Center, Caltech, Pasadena, CA 91125 (United States); Teplitz, H. I.; Rafelski, M. [Infrared Processing and Analysis Center, Caltech, Pasadena, CA 91125 (United States); McCarthy, P.; Hathi, N. P.; Dressler, A. [Observatories of the Carnegie Institution for Science, Pasadena, CA 91101 (United States); Bunker, A., E-mail: albertod@ucr.edu [Department of Physics, Oxford University, Denys Wilkinson Building, Keble Road, Oxford, OX1 3RH (United Kingdom)
2013-02-15
Spectroscopic observations of Hα and Hβ emission lines of 128 star-forming galaxies in the redshift range 0.75 ≤ z ≤ 1.5 are presented. These data were taken with slitless spectroscopy using the G102 and G141 grisms of the Wide Field Camera 3 (WFC3) on board the Hubble Space Telescope as part of the WFC3 Infrared Spectroscopic Parallel survey. Interstellar dust extinction is measured from stacked spectra that cover the Balmer decrement (Hα/Hβ). We present dust extinction as a function of Hα luminosity (down to 3 × 10⁴¹ erg s⁻¹), galaxy stellar mass (reaching 4 × 10⁸ M☉), and rest-frame Hα equivalent width. The faintest galaxies are two times fainter in Hα luminosity than galaxies previously studied at z ≈ 1.5. An evolution is observed where galaxies of the same Hα luminosity have lower extinction at higher redshifts, whereas no evolution is found within our error bars with stellar mass. The lower Hα luminosity galaxies in our sample are found to be consistent with no dust extinction. We find an anti-correlation of the [O III] λ5007/Hα flux ratio as a function of luminosity, where galaxies with L(Hα) < 5 × 10⁴¹ erg s⁻¹ are brighter in [O III] λ5007 than Hα. This trend is evident even after extinction correction, suggesting that the increased [O III] λ5007/Hα ratio in low-luminosity galaxies is likely due to lower metallicity and/or higher ionization parameters.
Building a parallel file system simulator
International Nuclear Information System (INIS)
Molina-Estolano, E; Maltzahn, C; Brandt, S A; Bent, J
2009-01-01
Parallel file systems are gaining in popularity in high-end computing centers as well as commercial data centers. High-end computing systems are expected to scale exponentially and to pose new challenges to their storage scalability in terms of cost and power. To address these challenges scientists and file system designers will need a thorough understanding of the design space of parallel file systems. Yet there exist few systematic studies of parallel file system behavior at petabyte and exabyte scale. An important reason is the significant cost of getting access to large-scale hardware to test parallel file systems. To contribute to this understanding we are building a parallel file system simulator that can simulate parallel file systems at very large scale. Our goal is to simulate petabyte-scale parallel file systems on a small cluster or even a single machine in reasonable time and fidelity. With this simulator, file system experts will be able to tune existing file systems for specific workloads, scientists and file system deployment engineers will be able to better communicate workload requirements, file system designers and researchers will be able to try out design alternatives and innovations at scale, and instructors will be able to study very large-scale parallel file system behavior in the classroom. In this paper we describe our approach and provide preliminary results that are encouraging both in terms of fidelity and simulation scalability.
Parallel discrete event simulation
Overeinder, B.J.; Hertzberger, L.O.; Sloot, P.M.A.; Withagen, W.J.
1991-01-01
In simulating applications for execution on specific computing systems, the simulation performance figures must be known in a short period of time. One basic approach to the problem of reducing the required simulation time is the exploitation of parallelism. However, in parallelizing the simulation
Parallel reservoir simulator computations
International Nuclear Information System (INIS)
Hemanth-Kumar, K.; Young, L.C.
1995-01-01
The adaptation of a reservoir simulator for parallel computations is described. The simulator was originally designed for vector processors. It performs approximately 99% of its calculations in vector/parallel mode and relative to scalar calculations it achieves speedups of 65 and 81 for black oil and EOS simulations, respectively, on the CRAY C-90.
International Nuclear Information System (INIS)
Reig, J.
2007-01-01
Good afternoon. Before providing the closing remarks on behalf of the NEA, I would like to take this opportunity and make some personal reflections, if you allow me Mr. Chairman. I have had the opportunity to take part in the three workshops on public communication organised by the NEA. In the first one in Paris in 2000, representing my country, Spain, and in the two last ones in Ottawa in 2004 and Tokyo today, on behalf of the NEA. The topics for the three workshops follow a logical order: first the focus was on investing in trust, in a time when public communication was becoming a big challenge for the regulators; second, maintaining and measuring public confidence, to assess how credible regulators are in front of the public; and finally here in Tokyo, transparency, which is a basic element to achieve trust and credibility. In my view, a regulatory decision has three main components: it has to be technically sound, legally correct and well communicated. The emphasis in the early years was on the technical matters, till legal issues became a key element to achieve the political acceptance from governments and local authorities. Finally the public communication aspects resulted in a major effort and challenge to achieve social acceptance. (author)
Totally parallel multilevel algorithms
Frederickson, Paul O.
1988-01-01
Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.
Energy Technology Data Exchange (ETDEWEB)
1991-10-23
An account of the Caltech Concurrent Computation Program (C³P), a five year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.
Massively parallel mathematical sieves
Energy Technology Data Exchange (ETDEWEB)
Montry, G.R.
1989-01-01
The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
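A block decomposition, a close cousin of the scattered decomposition studied above, can be sketched as follows: primes up to √N are found serially, then each block of the range is sieved independently with those primes. The `blocks` parameter stands in for the number of processors; the blocks are sieved serially here purely for illustration.

```python
# Sketch of a block-decomposed Sieve of Eratosthenes. Each block is an
# independent work unit (one per processor in a real parallel run); only
# the small set of base primes up to sqrt(N) must be shared.

import math

def base_primes(limit):
    """Plain serial sieve, used both for base primes and as a reference."""
    flags = [True] * (limit + 1)
    flags[0:2] = [False, False]
    for p in range(2, math.isqrt(limit) + 1):
        if flags[p]:
            for m in range(p * p, limit + 1, p):
                flags[m] = False
    return [i for i, f in enumerate(flags) if f]

def sieve_block(lo, hi, primes):
    """Sieve [lo, hi) independently using the shared base primes."""
    flags = [True] * (hi - lo)
    for p in primes:
        # first multiple of p in the block, but never below p*p
        start = max(p * p, ((lo + p - 1) // p) * p)
        for m in range(start, hi, p):
            flags[m - lo] = False
    return [lo + i for i, f in enumerate(flags) if f and lo + i > 1]

def parallel_sieve(n, blocks=4):
    primes = base_primes(math.isqrt(n))
    size = (n + blocks) // blocks
    out = []
    for b in range(blocks):       # each iteration is one processor's work
        lo, hi = b * size, min((b + 1) * size, n + 1)
        out.extend(sieve_block(lo, hi, primes))
    return out
```

Starting each prime at max(p², first multiple ≥ lo) is safe because any composite below p² has a smaller prime factor and is marked by an earlier prime; this keeps the per-block work independent, which is what makes the decomposition parallel-friendly.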
Directory of Open Access Journals (Sweden)
Iskandar Iskandar
2016-07-01
This paper proposes a variable-step closed-loop power control algorithm combined with space diversity to improve the performance of High Altitude Platform (HAP) communication at low elevation angles using Code Division Multiple Access (CDMA). In this contribution, we first develop a HAP channel model derived from experimental measurement. From our experiment, we found that the HAP channel can be modeled as a Ricean distribution because of the presence of a line-of-sight path; different elevation angles result in different K-factor values. This value is then used in a Signal-to-Interference Ratio (SIR) based closed-loop power control evaluation. The variable-step algorithm is simulated under various elevation angles with different mobile user speeds. The performance is presented in terms of user elevation angle, user speed, step size, and space diversity order. We found that variable-step closed-loop power control is less effective at low elevation angles; however, our simulations show that space diversity is able to improve the performance of closed-loop power control for the HAP channel at low elevation angles.
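The SIR-based variable-step loop described above can be sketched as follows. The step-doubling rule, the fixed channel constants, and the simple SIR model are assumptions chosen for illustration; the paper's actual algorithm and measured HAP channel parameters may differ.

```python
# Sketch of SIR-based closed-loop power control with a variable step: the
# receiver compares measured SIR to a target and commands the transmitter
# up or down; the step doubles while the command direction repeats and
# resets on a reversal (one common variable-step heuristic, assumed here).

TARGET_SIR_DB = 6.0
CHANNEL_LOSS_DB = 80.0    # assumed fixed path loss for this sketch
INTERFERENCE_DBM = -90.0  # assumed interference-plus-noise floor

def measured_sir(tx_power_dbm):
    """SIR in dB for a static channel: received power minus interference."""
    return (tx_power_dbm - CHANNEL_LOSS_DB) - INTERFERENCE_DBM

def power_control(tx_power_dbm, steps=30, base_step=0.5, max_step=4.0):
    step, last_cmd = base_step, 0
    history = []
    for _ in range(steps):
        cmd = 1 if measured_sir(tx_power_dbm) < TARGET_SIR_DB else -1
        # variable step: grow while the direction repeats, else reset
        step = min(step * 2, max_step) if cmd == last_cmd else base_step
        tx_power_dbm += cmd * step
        last_cmd = cmd
        history.append(tx_power_dbm)
    return tx_power_dbm, history
```

The growing step closes a large initial SIR error quickly, while the reset on reversal keeps the steady-state oscillation around the target small, which is the trade-off a fixed-step loop cannot make.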
Some nonlinear space decomposition algorithms
Energy Technology Data Exchange (ETDEWEB)
Tai, Xue-Cheng; Espedal, M. [Univ. of Bergen (Norway)
1996-12-31
Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
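The multiplicative (alternating) Schwarz method, against which the additive and hybrid variants are compared, can be sketched on the simplest possible problem. This is a hypothetical 1-D Laplace example, not the paper's test problem: each subdomain solve reduces to linear interpolation between boundary values, so only the two interface values need tracking.

```python
# Sketch of multiplicative (alternating) Schwarz for -u'' = 0 on [0, 1]
# with u(0) = 0, u(1) = 1, using two overlapping subdomains [0, B] and
# [A, 1] with A < B. In 1-D each subdomain solve of Laplace's equation is
# linear interpolation between its boundary values, so the iteration only
# needs the interface values u(A) and u(B). Exact solution: u(x) = x.

A, B = 0.4, 0.6   # interface points; the overlap region is (A, B)

def schwarz(iterations):
    u_a, u_b = 0.0, 0.0   # initial guesses at the interfaces
    for _ in range(iterations):
        # left solve on [0, B]:  u(0) = 0, u(B) = u_b   ->  u(A) = u_b * A/B
        u_a = u_b * A / B
        # right solve on [A, 1]: u(A) = u_a, u(1) = 1
        #                        ->  u(B) = u_a + (1 - u_a) * (B - A)/(1 - A)
        u_b = u_a + (1.0 - u_a) * (B - A) / (1.0 - A)
    return u_a, u_b
```

The right solve reuses the freshly updated left value, which is exactly what makes the method multiplicative (sequential); the additive variant would update both interfaces from the old values, trading a slower contraction for solves that can run in parallel.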
Graph topologies on closed multifunctions
Directory of Open Access Journals (Sweden)
Giuseppe Di Maio
2003-10-01
In this paper we study function space topologies on closed multifunctions, i.e. closed relations on X × Y, using various hypertopologies. The hypertopologies are, in essence, graph topologies, i.e. topologies on functions considered as graphs, which are subsets of X × Y. We also study several topologies, including one that is derived from the Attouch-Wets filter on the range. We state embedding theorems which enable us to generalize and prove some recent results in the literature with the use of known results in the hyperspace of the range space and in the function space topologies of ordinary functions.
Spatial charge motion on a uniform density matrix: general equations in open and closed circuits
International Nuclear Information System (INIS)
Aguiar Monsanto, S. de.
1983-01-01
The motion of a space charge cloud embedded in a matrix of constant immobile charge density is studied in open as well as in closed circuit. In the first case, open circuit, the solution is almost trivial compared with the second, in which, after some work, the problem is reduced to an ordinary differential equation. The method of solution parallels that employed in the study of monopolar free space charge motion. The voltage and the current produced by a system with no net charge but with unbalanced local charge density were calculated using the general equations derived in the first part of the work. (Author) [pt]
Trembach, Vera
2014-01-01
Space is an introduction to the mysteries of the Universe. Included are Task Cards for independent learning, Journal Word Cards for creative writing, and Hands-On Activities for reinforcing skills in Math and Language Arts. Space is a perfect introduction to further research of the Solar System.
Miquel, J. (Editor); Economos, A. C. (Editor)
1982-01-01
Presentations are given which address the effects of space flight on the older person, the parallels between the physiological responses to weightlessness and the aging process, and experimental possibilities afforded by the weightless environment to fundamental research in gerontology and geriatrics.
Parallelism and array processing
International Nuclear Information System (INIS)
Zacharov, V.
1983-01-01
Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)
Parallel magnetic resonance imaging
International Nuclear Information System (INIS)
Larkman, David J; Nunes, Rita G
2007-01-01
Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. How to recognize potential failure modes and their associated artefacts are shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed. (invited topical review)
Simulation Exploration through Immersive Parallel Planes: Preprint
Energy Technology Data Exchange (ETDEWEB)
Brunhart-Lupo, Nicholas; Bush, Brian W.; Gruchalla, Kenny; Smith, Steve
2016-03-01
We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest: a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selection, and filter time. The brushing and selection actions are used to both explore existing data as well as to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.
Simulation Exploration through Immersive Parallel Planes
Energy Technology Data Exchange (ETDEWEB)
Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Smith, Steve [Los Alamos Visualization Associates
2017-05-25
We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest: a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selection, and filter time. The brushing and selection actions are used to both explore existing data as well as to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.
Energy Technology Data Exchange (ETDEWEB)
Espinoza, Marco; Leon, Kety; Martinez, Jorge [Direccion de Servicios, Instituto Peruano de Energia Nuclear, Lima (Peru)
2014-07-01
Radon accounts for more than 50% of the total annual dose from natural background radiation. The capacity of radon to induce lung cancer in people exposed to this radioactive gas for long periods has been widely demonstrated. Radon emerges continuously, all over the world, from the materials that constitute soils, from building materials, and from minerals present in our natural environment. In our country, better regulations are needed to control the exposure of people to this gas inside buildings, dwellings and facilities where people spend their time; current national regulations are very simple and scarce. At present, national regulations on radon are adaptations of recommendations and guides published by international organizations, without national studies or statistics to give realistic support to those rules. This work proposes a classification for closed spaces where people live and work in this country, taking into consideration their ²²²Rn concentration and the probable doses involved. (authors)
The STAPL Parallel Graph Library
Harshvardhan,; Fidel, Adam; Amato, Nancy M.; Rauchwerger, Lawrence
2013-01-01
This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable
Analysis of a parallel multigrid algorithm
Chan, Tony F.; Tuminaro, Ray S.
1989-01-01
The parallel multigrid algorithm of Frederickson and McBryan (1987) is considered. This algorithm uses multiple coarse-grid problems (instead of one problem) in the hope of accelerating convergence and is found to have a close relationship to traditional multigrid methods. Specifically, the parallel coarse-grid correction operator is identical to a traditional multigrid coarse-grid correction operator, except that the mixing of high and low frequencies caused by aliasing error is removed. Appropriate relaxation operators can be chosen to take advantage of this property. Comparisons between the standard multigrid and the new method are made.
Parallel LC circuit model for multi-band absorption and preliminary design of radiative cooling.
Feng, Rui; Qiu, Jun; Liu, Linhua; Ding, Weiqiang; Chen, Lixue
2014-12-15
We perform a comprehensive analysis of multi-band absorption by exciting magnetic polaritons in the infrared region. According to the independent properties of the magnetic polaritons, we propose a parallel inductance and capacitance (PLC) circuit model to explain and predict the multi-band resonant absorption peaks, which is fully validated by using the multi-sized structure with an identical dielectric spacing layer and the multilayer structure with the same strip width. More importantly, we present the application of the PLC circuit model to the preliminary design of a radiative cooling structure, realized by merging several close peaks together. This omnidirectional and polarization-insensitive structure is a good candidate for radiative cooling applications.
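The resonance condition underlying a PLC model is the standard parallel LC relation; each independent LC pair contributes one absorption peak. An illustrative sketch (component values are arbitrary, not from the paper):

```python
import math

def resonance_frequency(inductance, capacitance):
    """Resonant frequency of an ideal parallel LC circuit:
    f = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance * capacitance))

# Several independent LC pairs give several resonant peaks.
peaks = [resonance_frequency(L, C) for L, C in [(1e-3, 1e-9), (1e-3, 4e-9)]]
```

Quadrupling the capacitance halves the resonant frequency, which is the sense in which differently sized strips shift the peaks independently.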
Close-Spaced High Temperature Knudsen Flow.
1986-07-15
radiant heat source assembly was substituted for the brazed molybdenum one in order to achieve higher radiant heater temperatures. 2.1.4 Experimental... at very high temperature, and ground flat. The molybdenum is then chemically etched to the desired depth using an etchant which does not affect... [Report cover: Close-Spaced High Temperature Knudsen Flow; Rasor Associates Inc., Sunnyvale, CA; J. B. McVey; 15 Jul 86; NSR-224; AFOSR-TR-87-1258; F49628-83-C]
Directory of Open Access Journals (Sweden)
André Guarçoni M.
2011-08-01
In arabica coffee crops, grown at high altitudes with lower temperatures, soil fertility can be improved by close spacing. However, in the low, warmer lands where conilon coffee is grown, the effect of close spacing on soil characteristics may change. Aiming to determine the effect of close planting of coffee trees, grown with or without NPK fertilization, on soil fertility characteristics, soil samples were collected (0-20 and 20-40 cm depth) within four different conilon crop spacings (2,222; 3,333; 4,000; and 5,000 plants/ha). The pH, H+Al, effective CEC (t), pH 7.0 CEC (T), base saturation (V) and aluminum saturation (m) values and the organic matter (OM), P, K, Ca2+, Mg2+ and Al3+ contents were determined. The analytical results were compared by Student's t test and regression analysis. Close planting of conilon coffee trees only changed soil fertility characteristics when the plants received annual NPK fertilization. Close planting substantially increased the P and K contents and the T value in the upper soil layer, and the P and K contents and the T, t and H+Al values in the lower soil layer.
Massively parallel multicanonical simulations
Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard
2018-03-01
Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of 104 parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as starting point and reference for practitioners in the field.
A parallel nearly implicit time-stepping scheme
Botchev, Mike A.; van der Vorst, Henk A.
2001-01-01
Across-the-space parallelism still remains the most mature, convenient and natural way to parallelize large scale problems. One of the major problems here is that implicit time stepping is often difficult to parallelize due to the structure of the system. Approximate implicit schemes have been suggested to circumvent the problem. These schemes have attractive stability properties and they are also very well parallelizable. The purpose of this article is to give an overall assessment of the pa...
K.I.S.S. Parallel Coding (lecture 2)
CERN. Geneva
2018-01-01
K.I.S.S.ing parallel computing means, finally, loving it. Parallel computing will be approached in a theoretical and experimental way, using the most advanced and used C API: OpenMP. OpenMP is an open source project constantly developed and updated to hide the awful complexity of parallel coding in an awesome interface. The result is a tool which leaves plenty of space for clever solutions and terrific results in terms of efficiency and performance maximisation.
SPINning parallel systems software
International Nuclear Information System (INIS)
Matlin, O.S.; Lusk, E.; McCune, W.
2002-01-01
We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes and connections among them are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin.
Parallel programming with Python
Palach, Jan
2014-01-01
A fast, easy-to-follow and clear tutorial to help you develop parallel computing systems using Python. Along with explaining the fundamentals, the book will also introduce you to slightly advanced concepts and will help you in implementing these techniques in the real world. If you are an experienced Python programmer and are willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. You are required to have a basic knowledge of Python development to get the most out of this book.
Towards a streaming model for nested data parallelism
DEFF Research Database (Denmark)
Madsen, Frederik Meisner; Filinski, Andrzej
2013-01-01
The language-integrated cost semantics for nested data parallelism pioneered by NESL provides an intuitive, high-level model for predicting performance and scalability of parallel algorithms with reasonable accuracy. However, this predictability, obtained through a uniform, parallelism-flattening [...] -processable in a streaming fashion. This semantics is directly compatible with previously proposed piecewise execution models for nested data parallelism, but allows the expected space usage to be reasoned about directly at the source-language level. The language definition and implementation are still very much work [...]
Directory of Open Access Journals (Sweden)
Javier Royuela-del-Val
2017-06-01
α-stable distributions are a family of well-known probability distributions. However, the lack of closed analytical expressions hinders their application. Currently, several tools have been developed to numerically evaluate their density and distribution functions or to estimate their parameters, but available solutions either do not reach sufficient precision on their evaluations or are excessively slow for practical purposes. Moreover, they do not take full advantage of the parallel processing capabilities of current multi-core machines. Other solutions work only on a subset of the α-stable parameter space. In this paper we present an R package and a C/C++ library with a MATLAB front-end that permit parallelized, fast and high precision evaluation of density, distribution and quantile functions, as well as random variable generation and parameter estimation of α-stable distributions in their whole parameter space. The described library can be easily integrated into third party developments.
A PARALLEL EXTENSION OF THE UAL ENVIRONMENT
International Nuclear Information System (INIS)
MALITSKY, N.; SHISHLO, A.
2001-01-01
The deployment of the Unified Accelerator Library (UAL) environment on the parallel cluster is presented. The approach is based on the Message-Passing Interface (MPI) library and the Perl adapter that allows one to control and mix together the existing conventional UAL components with the new MPI-based parallel extensions. In the paper, we provide timing results and describe the application of the new environment to the SNS Ring complex beam dynamics studies, particularly, simulations of several physical effects, such as space charge, field errors, fringe fields, and others
Advances in randomized parallel computing
Rajasekaran, Sanguthevar
1999-01-01
The technique of randomization has been employed to solve numerous problems of computing, both sequentially and in parallel. Examples of randomized algorithms that are asymptotically better than their deterministic counterparts in solving various fundamental problems abound. Randomized algorithms have the advantages of simplicity and better performance, both in theory and often in practice. This book is a collection of articles written by renowned experts in the area of randomized parallel computing. A brief introduction to randomized algorithms: in the analysis of algorithms, at least three different measures of performance can be used: the best case, the worst case, and the average case. Often, the average case run time of an algorithm is much smaller than the worst case. For instance, the worst case run time of Hoare's quicksort is O(n²), whereas its average case run time is only O(n log n). The average case analysis is conducted with an assumption on the input space. The assumption made to arrive at t...
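As a concrete instance of the average-case advantage described above, here is a minimal randomized quicksort (an illustrative sketch, not code from the book): choosing the pivot at random makes the quadratic worst case vanishingly unlikely for any fixed input.

```python
import random

def quicksort(items):
    """Randomized quicksort: a random pivot gives expected O(n log n)
    time on every input, in contrast to fixed-pivot quicksort, whose
    worst case O(n^2) is triggered by specific (e.g. sorted) inputs."""
    if len(items) <= 1:
        return list(items)
    pivot = random.choice(items)
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```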
Rajasekaran, Ilangovan; Nethaji, Ochanan
2017-01-01
Abstract—In this paper, we introduce nano ∧g-closed sets in nano topological spaces. Nano ∧g-closed sets and nano ∧g-open sets are weaker forms of nano closed sets and nano open sets, and some of their properties are studied.
Expressing Parallelism with ROOT
Energy Technology Data Exchange (ETDEWEB)
Piparo, D. [CERN; Tejedor, E. [CERN; Guiraud, E. [CERN; Ganis, G. [CERN; Mato, P. [CERN; Moneta, L. [CERN; Valls Pla, X. [CERN; Canal, P. [Fermilab
2017-11-22
The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.
Parallel Fast Legendre Transform
Alves de Inda, M.; Bisseling, R.H.; Maslen, D.K.
1998-01-01
We discuss a parallel implementation of a fast algorithm for the discrete polynomial Legendre transform. We give an introduction to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the efficiency and accuracy of our implementation. The algorithms were...
Practical parallel programming
Bauer, Barr E
2014-01-01
This is the book that will teach programmers to write faster, more efficient code for parallel processors. The reader is introduced to a vast array of procedures and paradigms on which actual coding may be based. Examples and real-life simulations using these devices are presented in C and FORTRAN.
Parallel universes beguile science
2007-01-01
A staple of mind-bending science fiction, the possibility of multiple universes has long intrigued hard-nosed physicists, mathematicians and cosmologists too. We may not be able -- at least not yet -- to prove they exist, many serious scientists say, but there are plenty of reasons to think that parallel dimensions are more than figments of eggheaded imagination.
Energy Technology Data Exchange (ETDEWEB)
2017-04-04
A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms allow people to cluster multidimensional data, by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVidia's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than it could be done before.
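The seeding step the abstract describes, choosing each new seed with probability proportional to its squared distance from the seeds already chosen, can be sketched serially; the distance pass over all points is the part the GPU/OpenMP/XMT versions parallelize. A minimal 1-D illustration (the function name is invented, not from the released code):

```python
import random

def kmeans_pp_seeds(points, k, rng=None):
    """k-means++ seeding (1-D for brevity): pick the first seed uniformly,
    then pick each later seed with probability proportional to its squared
    distance to the nearest seed chosen so far."""
    rng = rng or random.Random(0)
    seeds = [rng.choice(points)]
    while len(seeds) < k:
        # This distance pass over all points is the naturally parallel part.
        weights = [min((p - s) ** 2 for s in seeds) for p in points]
        r = rng.uniform(0, sum(weights))
        cumulative = 0.0
        for point, weight in zip(points, weights):
            cumulative += weight
            if cumulative >= r:
                seeds.append(point)
                break
    return seeds
```

Far-away points get large weights, so the seeds tend to spread across the data, which is what improves on uniformly random k-means initialization.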
International Nuclear Information System (INIS)
Gardes, D.; Volkov, P.
1981-01-01
A 5×3 cm² (timing only) and a 15×5 cm² (timing and position) parallel plate avalanche counters (PPAC) are considered. The theory of operation and timing resolution is given. The measurement set-up and the curves of experimental results illustrate the possibilities of the two counters.
Parallel hierarchical global illumination
Energy Technology Data Exchange (ETDEWEB)
Snell, Quinn O. [Iowa State Univ., Ames, IA (United States)
1997-10-08
Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.
Effects of parallel planning on agreement production.
Veenstra, Alma; Meyer, Antje S; Acheson, Daniel J
2015-11-01
An important issue in current psycholinguistics is how the time course of utterance planning affects the generation of grammatical structures. The current study investigated the influence of parallel activation of the components of complex noun phrases on the generation of subject-verb agreement. Specifically, the lexical interference account (Gillespie & Pearlmutter, 2011b; Solomon & Pearlmutter, 2004) predicts more agreement errors (i.e., attraction) for subject phrases in which the head and local noun mismatch in number (e.g., the apple next to the pears) when nouns are planned in parallel than when they are planned in sequence. We used a speeded picture description task that yielded sentences such as the apple next to the pears is red. The objects mentioned in the noun phrase were either semantically related or unrelated. To induce agreement errors, pictures sometimes mismatched in number. In order to manipulate the likelihood of parallel processing of the objects and to test the hypothesized relationship between parallel processing and the rate of agreement errors, the pictures were either placed close together or far apart. Analyses of the participants' eye movements and speech onset latencies indicated slower processing of the first object and stronger interference from the related (compared to the unrelated) second object in the close than in the far condition. Analyses of the agreement errors yielded an attraction effect, with more errors in mismatching than in matching conditions. However, the magnitude of the attraction effect did not differ across the close and far conditions. Thus, spatial proximity encouraged parallel processing of the pictures, which led to interference of the associated conceptual and/or lexical representation, but, contrary to the prediction, it did not lead to more attraction errors. Copyright © 2015 Elsevier B.V. All rights reserved.
Parallel Nonlinear Optimization for Astrodynamic Navigation, Phase I
National Aeronautics and Space Administration — CU Aerospace proposes the development of a new parallel nonlinear program (NLP) solver software package. NLPs allow the solution of complex optimization problems,...
Visual Interfaces for Parallel Simulations (VIPS), Phase I
National Aeronautics and Space Administration — Configuring the 3D geometry and physics of large scale parallel physics simulations is increasingly complex. Given the investment in time and effort to run these...
Parallel optoelectronic trinary signed-digit division
Alam, Mohammad S.
1999-03-01
The trinary signed-digit (TSD) number system has been found to be very useful for parallel addition and subtraction of any arbitrary length operands in constant time. Using the TSD addition and multiplication modules as the basic building blocks, we develop an efficient algorithm for performing parallel TSD division in constant time. The proposed division technique uses one TSD subtraction and two TSD multiplication steps. An optoelectronic correlator based architecture is suggested for implementation of the proposed TSD division algorithm, which fully exploits the parallelism and high processing speed of optics. An efficient spatial encoding scheme is used to ensure better utilization of space bandwidth product of the spatial light modulators used in the optoelectronic implementation.
Wald, Ingo; Ize, Santiago
2015-07-28
Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
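The two-phase method claimed above — divide the grid into n portions and the objects into n sets, determine which portion(s) bound each object, then let each worker populate its own portion — can be simulated sequentially in 1-D (an illustrative sketch; the names are invented, and real workers would run concurrently):

```python
def populate_grid_parallel(objects, grid_min, grid_max, n):
    """Two-phase grid population, simulated sequentially for n workers.
    Each object is a 1-D extent (lo, hi); phase 1 determines which grid
    portion(s) bound each object, phase 2 gives each worker one portion."""
    width = (grid_max - grid_min) / n

    def portion_of(x):
        return min(int((x - grid_min) / width), n - 1)

    portions = [[] for _ in range(n)]
    for lo, hi in objects:
        # Tag the object into every portion its extent overlaps.
        for p in range(portion_of(lo), portion_of(hi) + 1):
            portions[p].append((lo, hi))
    return portions

portions = populate_grid_parallel([(0.0, 1.5), (2.5, 3.0)], 0.0, 4.0, 4)
```

An object spanning a portion boundary is deliberately duplicated into both portions, matching the claim's "at least partially bounded" test.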
Ultrascalable petaflop parallel supercomputer
Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY
2010-07-20
A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.
DEFF Research Database (Denmark)
Gregersen, Frans; Josephson, Olle; Kristoffersen, Gjert
Abstract [en] More parallel, please is the result of the work of an Inter-Nordic group of experts on language policy financed by the Nordic Council of Ministers 2014-17. The book presents all that is needed to plan, practice and revise a university language policy which takes as its point of departure that English may be used in parallel with the various local, in this case Nordic, languages. As such, the book integrates the challenge of internationalization faced by any university with the wish to improve quality in research, education and administration based on the local language(s). There are three layers in the text: First, you may read the extremely brief version of the in total 11 recommendations for best practice. Second, you may acquaint yourself with the extended version of the recommendations and finally, you may study the reasoning behind each of them. At the end of the text, we give...
PARALLEL MOVING MECHANICAL SYSTEMS
Directory of Open Access Journals (Sweden)
Florian Ion Tiberius Petrescu
2014-09-01
Parallel-structure moving mechanical systems are solid, fast, and accurate. Among parallel systems, Stewart platforms are notable as the oldest such systems: fast, solid and precise. This work outlines a few main elements of Stewart platforms, beginning with the platform geometry and its kinematic elements, and then presenting a few items of dynamics. The primary dynamic element is the determination of the kinetic energy of the entire Stewart platform. The kinematics of the mobile platform is then recorded by a rotation-matrix method. If a structural motoelement consists of two moving elements in relative translation, it is more convenient, for the drive train and especially for the dynamics, to represent the motoelement as a single moving component. We thus have seven moving parts: the six motoelements (or legs) plus the mobile platform, and one fixed part.
Xyce parallel electronic simulator.
Energy Technology Data Exchange (ETDEWEB)
Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.
2010-05-01
This document is a reference guide to the Xyce Parallel Electronic Simulator and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial; users who are new to circuit simulation are better served by the Xyce Users Guide.
Betchov, R
2012-01-01
Stability of Parallel Flows provides information pertinent to hydrodynamical stability. This book explores the stability problems that occur in various fields, including electronics, mechanics, oceanography, administration, economics, as well as naval and aeronautical engineering. Organized into two parts encompassing 10 chapters, this book starts with an overview of the general equations of a two-dimensional incompressible flow. This text then explores the stability of a laminar boundary layer and presents the equation of the inviscid approximation. Other chapters present the general equation
Algorithmically specialized parallel computers
Snyder, Lawrence; Gannon, Dennis B
1985-01-01
Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster
Directory of Open Access Journals (Sweden)
E. S. Belenkaya
2007-06-01
We study the dependence of Saturn's magnetospheric magnetic field structure on the interplanetary magnetic field (IMF), together with the corresponding variations of the open-closed field line boundary in the ionosphere. Specifically we investigate the interval from 8 to 30 January 2004, when UV images of Saturn's southern aurora were obtained by the Hubble Space Telescope (HST), and simultaneous interplanetary measurements were provided by the Cassini spacecraft located near the ecliptic ~0.2 AU upstream of Saturn and ~0.5 AU off the planet-Sun line towards dawn. Using the paraboloid model of Saturn's magnetosphere, we calculate the magnetospheric magnetic field structure for several values of the IMF vector representative of interplanetary compression regions. Variations in the magnetic structure lead to different shapes and areas of the open field line region in the ionosphere. Comparison with the HST auroral images shows that the area of the computed open flux region is generally comparable to that enclosed by the auroral oval, and sometimes agrees in detail with its poleward boundary, though more typically being displaced by a few degrees in the tailward direction.
Energy Technology Data Exchange (ETDEWEB)
Lober, R.R.; Tautges, T.J.; Vaughan, C.T.
1997-03-01
Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving, only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm, demonstrate its capabilities on both two-dimensional and three-dimensional surface geometries, and compare the resulting parallel-produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.
Resistor Combinations for Parallel Circuits.
McTernan, James P.
1978-01-01
To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
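The table-building idea in this record can be sketched in code (an illustrative sketch, not from the article; function names are invented): exact rational arithmetic finds resistor pairs whose parallel combination is a whole number.

```python
from fractions import Fraction

def parallel_resistance(*resistors):
    """Total resistance of resistors in parallel: 1/R = sum(1/Ri)."""
    return 1 / sum(Fraction(1, r) for r in resistors)

def whole_number_pairs(max_r):
    """All pairs (r1, r2), r1 <= r2 <= max_r, whose parallel total is a whole
    number, as (r1, r2, total) tuples -- the kind of entry such tables hold."""
    pairs = []
    for r1 in range(1, max_r + 1):
        for r2 in range(r1, max_r + 1):
            total = parallel_resistance(r1, r2)
            if total.denominator == 1:  # exact whole-number result
                pairs.append((r1, r2, int(total)))
    return pairs
```

For example, 3 Ω and 6 Ω in parallel give exactly 2 Ω, so (3, 6, 2) appears in the table.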
SOFTWARE FOR DESIGNING PARALLEL APPLICATIONS
Directory of Open Access Journals (Sweden)
M. K. Bouza
2017-01-01
Full Text Available The object of research is the tools to support the development of parallel programs in C/C++. Methods and software that automate the process of designing parallel applications are proposed.
ps-ro Fuzzy Open(Closed) Functions and ps-ro Fuzzy Semi-Homeomorphism
Directory of Open Access Journals (Sweden)
Pankaj Chettri
2015-11-01
Full Text Available The aim of this paper is to introduce and characterize some new classes of functions in a fuzzy topological space, termed ps-ro fuzzy open(closed) functions, ps-ro fuzzy pre semiopen functions and ps-ro fuzzy semi-homeomorphism. The interrelation among these concepts and also their relations with the parallel existing concepts are established. It is also shown with the help of examples that these newly introduced concepts are independent of the well known existing allied concepts.
Analysis of a closed-kinematic chain robot manipulator
Nguyen, Charles C.; Pooran, Farhad J.
1988-01-01
Presented are the research results from the research grant entitled: Active Control of Robot Manipulators, sponsored by the Goddard Space Flight Center (NASA) under grant number NAG-780. This report considers a class of robot manipulators based on the closed-kinematic chain mechanism (CKCM). This type of robot manipulator mainly consists of two platforms, one stationary and the other moving, coupled together through a number of in-parallel actuators. Using spatial geometry and homogeneous transformations, a closed-form solution is derived for the inverse kinematic problem of the six-degree-of-freedom manipulator built to study robotic assembly in space. The iterative Newton-Raphson method is employed to solve the forward kinematic problem. Finally, the equations of motion of the above manipulators are obtained by employing the Lagrangian method. Study of the manipulator dynamics is performed using computer simulation, whose results show that the robot actuating forces are strongly dependent on the mass and centroid locations of the robot links.
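The Newton-Raphson forward-kinematics step described above can be illustrated with a hedged sketch: a planar two-leg analogue (not the report's six-degree-of-freedom manipulator; the base geometry and names are invented). Given measured leg lengths, Newton iteration recovers the platform point.

```python
import math

BASE = [(0.0, 0.0), (4.0, 0.0)]  # hypothetical fixed base anchor points

def leg_lengths(p):
    """Leg lengths for platform point p attached to each base anchor."""
    return [math.dist(p, a) for a in BASE]

def forward_kinematics(lengths, guess=(1.0, 1.0), tol=1e-10, max_iter=50):
    """Newton-Raphson: find the platform point whose leg lengths match
    `lengths`, starting from `guess`."""
    x, y = guess
    for _ in range(max_iter):
        r = [t - l for t, l in zip(leg_lengths((x, y)), lengths)]  # residuals
        if max(abs(v) for v in r) < tol:
            break
        # analytic Jacobian of leg-length residuals w.r.t. (x, y)
        J = [[(x - a) / math.dist((x, y), (a, b)),
              (y - b) / math.dist((x, y), (a, b))] for a, b in BASE]
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        # Newton step: solve J * d = r by Cramer's rule, then update
        dx = (r[0] * J[1][1] - r[1] * J[0][1]) / det
        dy = (J[0][0] * r[1] - J[1][0] * r[0]) / det
        x, y = x - dx, y - dy
    return x, y
```

The inverse problem (pose to leg lengths) is closed-form here, just as in the report; only the forward direction needs iteration.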
Parallel External Memory Graph Algorithms
DEFF Research Database (Denmark)
Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari
2010-01-01
In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking, which leads to efficient solutions to problems on trees, such as computing lowest...... an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.
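The list-ranking primitive named in the abstract can be sketched as pointer jumping, here simulated round by round in plain Python (an illustrative sketch, not the paper's PEM-model algorithm): each round, every node adds its successor's rank and jumps over it, so the list collapses in O(log n) rounds, each of which is fully parallelizable.

```python
def list_rank(succ):
    """Pointer-jumping list ranking: rank[i] = number of hops from node i to
    the list tail. succ[i] is i's successor; the tail points to itself."""
    n = len(succ)
    rank = [0 if succ[i] == i else 1 for i in range(n)]
    nxt = list(succ)
    # each round doubles every node's reach; rounds are independent per node
    while any(nxt[i] != nxt[nxt[i]] for i in range(n)):
        rank = [rank[i] + rank[nxt[i]] for i in range(n)]
        nxt = [nxt[nxt[i]] for i in range(n)]
    return rank
```

For the list 0 → 1 → 2 → 3 (tail 3), the ranks come out as [3, 2, 1, 0].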
Mayfield, K. K.
2017-12-01
Background: To minority adolescents in urban centers science inquiry seems like an engagement completed by others with specialized skills (Alkon & Agyeman, 2012). When scientists teach science classes those spaces and pedagogy are underwritten by the science teachers' beliefs about how science happens (Southerland, Gess-Newsome & Johnston, 2002). Further, scientific inquiry is often presented as the realm of upper-class whiteness (Alkon & Agyeman, 2012; Mayfield, 2014). When science educators talk about the achievement gaps between raced and classed learners, accompanying that gap is also a gap in science experience. My high school students in a postindustrial school district: attend a school under state takeover (the lowest 5/5 rating; MA Executive Office of Education, 2017); have a student body that is 70% Latinx; and 96% of whom receive Free and Reduced Lunch (a Federal marker of a family below the poverty line). Annual Yearly Progress is a goal set by state and federal governments for school populations by race, ability, and language. In 2016, the site failed to make its goals for special education, black, hispanic, white, and English as a Second Language populations. As a high-poverty district there is a paucity of extracurricular science experiences. This lack of science extensions makes closing standardized test gaps difficult. Geoscience Skills & Findings: This after-school program does not replicate deficit narratives that keep certain bodies of students away from science inquiry (Mayfield, 2015; Ogbu, 1987). Instead, Science Club uses an array of student-centered science (physics, math, arts, chemistry, biology) projects to help students see themselves as citizen scientists who lead explorations of their world. We meet 1.5 hours a week in a 30-week school year. Science Club helps students feel like powerful and capable science inquirers, with 80% girls in attendance, and uses science experiments to cultivate essential inquiry skills like: Observation
Parallel inter channel interaction mechanisms
International Nuclear Information System (INIS)
Jovic, V.; Afgan, N.; Jovic, L.
1995-01-01
Parallel channel interactions are examined. For experimental research on nonstationary flow regimes in three parallel vertical channels, results of the phenomenon analysis and the mechanisms of parallel-channel interaction are shown for adiabatic conditions with single-phase fluid and two-phase mixture flow. (author)
Inflation in a closed universe
Ratra, Bharat
2017-11-01
To derive a power spectrum for energy density inhomogeneities in a closed universe, we study a spatially-closed inflation-modified hot big bang model whose evolutionary history is divided into three epochs: an early slowly-rolling scalar field inflation epoch and the usual radiation and nonrelativistic matter epochs. (For our purposes it is not necessary to consider a final dark energy dominated epoch.) We derive general solutions of the relativistic linear perturbation equations in each epoch. The constants of integration in the inflation epoch solutions are determined from de Sitter invariant quantum-mechanical initial conditions in the Lorentzian section of the inflating closed de Sitter space derived from Hawking's prescription that the quantum state of the universe only include field configurations that are regular on the Euclidean (de Sitter) sphere section. The constants of integration in the radiation and matter epoch solutions are determined from joining conditions derived by requiring that the linear perturbation equations remain nonsingular at the transitions between epochs. The matter epoch power spectrum of gauge-invariant energy density inhomogeneities is not a power law, and depends on spatial wave number in the way expected for a generalization to the closed model of the standard flat-space scale-invariant power spectrum. The power spectrum we derive appears to differ from a number of other closed inflation model power spectra derived assuming different (presumably non de Sitter invariant) initial conditions.
National Aeronautics and Space Administration — The Deep Space Habitat was closed out at the end of Fiscal Year 2013 (September 30, 2013). Results and select content have been incorporated into the new Exploration...
A Parallel Butterfly Algorithm
Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing
2014-01-01
The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(Nd) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r2Nd logN). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms, and an analogue of a three-dimensional generalized Radon transform were, respectively, observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.
Fast parallel event reconstruction
CERN. Geneva
2010-01-01
On-line processing of large data volumes produced in modern HEP experiments requires using the maximum capabilities of modern and future many-core CPU and GPU architectures. One such powerful feature is the SIMD instruction set, which allows packing several data items into one register and operating on all of them at once, thus achieving more operations per clock cycle. Motivated by the idea of using the SIMD unit of modern processors, the KF-based track fit has been adapted for parallelism, including memory optimization, numerical analysis, vectorization with inline operator overloading, and optimization using SDKs. The speed of the algorithm has been increased by a factor of 120,000, to 0.1 ms/track, running in parallel on 16 SPEs of a Cell Blade computer. Running on a Nehalem CPU with 8 cores it shows a processing speed of 52 ns/track using the Intel Threading Building Blocks. The same KF algorithm running on an Nvidia GTX 280 in the CUDA framework provi...
International Nuclear Information System (INIS)
DeHart, Mark D.; Williams, Mark L.; Bowman, Stephen M.
2010-01-01
The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement
Parallel ray tracing for one-dimensional discrete ordinate computations
International Nuclear Information System (INIS)
Jarvis, R.D.; Nelson, P.
1996-01-01
The ray-tracing sweep in discrete-ordinates, spatially discrete numerical approximation methods applied to the linear, steady-state, plane-parallel, mono-energetic, azimuthally symmetric, neutral-particle transport equation can be reduced to a parallel prefix computation. In so doing, the often severe penalty in convergence rate of the source iteration, suffered by most current parallel algorithms using spatial domain decomposition, can be avoided while attaining parallelism in the spatial domain to whatever extent desired. In addition, the reduction implies parallel algorithm complexity limits for the ray-tracing sweep. The reduction applies to all closed, linear, one-cell functional (CLOF) spatial approximation methods, which encompasses most in current popular use. Scalability test results of an implementation of the algorithm on a 64-node nCube-2S hypercube-connected, message-passing, multi-computer are described. (author)
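The reduction described above, from a spatial sweep to a parallel prefix, can be illustrated with a small sketch (not the paper's code; names are illustrative): a Hillis-Steele scan over any associative operator, applied to the affine cell-to-cell update of a one-dimensional sweep. Because affine maps x → a·x + b compose associatively, the whole sweep is a prefix computation over composed maps.

```python
def scan(items, combine):
    """Inclusive parallel prefix (Hillis-Steele): log2(n) rounds of pairwise
    combines; the combines within each round are independent and could run
    in parallel."""
    res = list(items)
    step = 1
    while step < len(res):
        res = [res[i] if i < step else combine(res[i - step], res[i])
               for i in range(len(res))]
        step *= 2
    return res

# A one-cell sweep is the affine recurrence psi[i+1] = a[i]*psi[i] + b[i].
def compose(f, g):
    """Composition g∘f (apply f first) of affine maps stored as (a, b)."""
    (a1, b1), (a2, b2) = f, g
    return (a2 * a1, a2 * b1 + b2)
```

Scanning the per-cell maps with `compose` yields, at position i, the map taking the inflow boundary value directly to the cell-(i+1) value.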
Parallel imaging microfluidic cytometer.
Ehrlich, Daniel J; McKenna, Brian K; Evans, James G; Belkina, Anna C; Denis, Gerald V; Sherr, David H; Cheung, Man Ching
2011-01-01
By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of fluorescence-activated flow cytometry (FCM) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in ∼6-10 min, about 30 times the speed of most current FCM systems. In 1D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of charge-coupled device (CCD)-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. Copyright © 2011 Elsevier Inc. All rights reserved.
Vacuum Large Current Parallel Transfer Numerical Analysis
Directory of Open Access Journals (Sweden)
Enyuan Dong
2014-01-01
Full Text Available The stable operation and reliable breaking of large generator currents are a difficult problem in power systems. It can be solved successfully by parallel interrupters and a proper timing sequence with phase-control technology, in which the breaker control strategy is determined by the opening times of both the first-opening and second-opening phases. A precise transfer-current model can provide the proper timing sequence to break the generator circuit breaker. By analysis of transfer-current experiments and data, the real vacuum arc resistance and a precise corrected model of the large transfer-current process are obtained in this paper. The transfer time calculated by the corrected model is very close to the actual transfer time. It can provide guidance for planning a proper timing sequence and breaking the vacuum generator circuit breaker with parallel interrupters.
(Nearly) portable PIC code for parallel computers
International Nuclear Information System (INIS)
Decyk, V.K.
1993-01-01
As part of the Numerical Tokamak Project, the author has developed a (nearly) portable, one-dimensional version of the GCPIC algorithm for particle-in-cell codes on parallel computers. This algorithm uses a spatial domain decomposition for the fields, and passes particles from one domain to another as the particles move spatially. With only minor changes, the code has been run in parallel on the Intel Delta, the Cray C-90, the IBM ES/9000 and a cluster of workstations. After a line by line translation into cmfortran, the code was also run on the CM-200. Impressive speeds have been achieved, both on the Intel Delta and the Cray C-90, around 30 nanoseconds per particle per time step. In addition, the author was able to isolate the data management modules, so that the physics modules were not changed much from their sequential version, and the data management modules can be used as "black boxes."
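The particle-passing step of the spatial domain decomposition described above can be sketched as follows (an illustrative, serial stand-in for the message-passing exchange; the periodic 1D geometry and all names are assumptions, not the GCPIC code): after a push, each particle that has left its domain is collected and handed to the domain that owns its new position.

```python
def owner(x, length, ndomains):
    """Index of the domain owning position x on a periodic line of given length."""
    return int((x % length) / (length / ndomains))

def exchange_particles(domains, length):
    """Move each particle to the domain that owns its position.
    `domains` is a list (one entry per domain) of particle-position lists."""
    n = len(domains)
    outgoing = [[] for _ in range(n)]
    for d, particles in enumerate(domains):
        keep = []
        for x in particles:
            dest = owner(x, length, n)
            (keep if dest == d else outgoing[dest]).append(x % length)
        domains[d] = keep
    for d in range(n):  # stands in for inter-node message passing
        domains[d].extend(outgoing[d])
    return domains
```

Isolating this exchange in a "data management" routine is what lets the physics (push/deposit) code stay close to its sequential form.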
Parallelization and automatic data distribution for nuclear reactor simulations
Energy Technology Data Exchange (ETDEWEB)
Liebrock, L.M. [Liebrock-Hicks Research, Calumet, MI (United States)
1997-07-01
Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed.
Design strategies for irregularly adapting parallel applications
International Nuclear Information System (INIS)
Oliker, Leonid; Biswas, Rupak; Shan, Hongzhang; Singh, Jaswinder Pal
2000-01-01
Achieving scalable performance for dynamic irregular applications is eminently challenging. Traditional message-passing approaches have been making steady progress towards this goal; however, they suffer from complex implementation requirements. The use of a global address space greatly simplifies the programming task, but can degrade the performance of dynamically adapting computations. In this work, we examine two major classes of adaptive applications, under five competing programming methodologies and four leading parallel architectures. Results indicate that it is possible to achieve message-passing performance using shared-memory programming techniques by carefully following the same high level strategies. Adaptive applications have computational work loads and communication patterns which change unpredictably at runtime, requiring dynamic load balancing to achieve scalable performance on parallel machines. Efficient parallel implementations of such adaptive applications are therefore a challenging task. This work examines the implementation of two typical adaptive applications, Dynamic Remeshing and N-Body, across various programming paradigms and architectural platforms. We compare several critical factors of the parallel code development, including performance, programmability, scalability, algorithmic development, and portability
About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems
Directory of Open Access Journals (Sweden)
Loredana MOCEAN
2009-01-01
Full Text Available In recent years, efforts have been made to delineate a stable and unitary framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far are not commensurate with the effort made. This paper aims to be a small contribution to these efforts. We propose an overview of parallel programming, parallel execution and collaborative systems.
Closed forms for conformally flat Green's functions
International Nuclear Information System (INIS)
Brown, M.R.; Grove, P.G.; Ottewill, A.C.
1981-01-01
A closed form is obtained for the massless scalar Green's function on Rindler space. This is related by conformal transformation to the Green's function for a massless, conformally coupled scalar field on the open Einstein universe. A closed form is also obtained for the corresponding Green's function on the Einstein static universe. (author)
Parallel Monte Carlo Search for Hough Transform
Lopes, Raul H. C.; Franqueira, Virginia N. L.; Reid, Ivan D.; Hobson, Peter R.
2017-10-01
We investigate the problem of line detection in digital image processing, and in particular how state-of-the-art algorithms behave in the presence of noise and whether CPU efficiency can be improved by the combination of a Monte Carlo Tree Search, hierarchical space decomposition, and parallel computing. The starting point of the investigation is the method introduced in 1962 by Paul Hough for detecting lines in binary images. Extended in the 1970s to the detection of space forms, what came to be known as the Hough Transform (HT) has been proposed, for example, in the context of track fitting in the LHC ATLAS and CMS projects. The Hough Transform transfers the problem of line detection into one of optimizing the peak in a vote-counting process over cells which contain the possible points of candidate lines. The detection algorithm can be computationally expensive both in the demands made upon the processor and on memory. Additionally, it can have a reduced effectiveness in detection in the presence of noise. Our first contribution consists of an evaluation of the use of a variation of the Radon Transform as a form of improving the effectiveness of line detection in the presence of noise. Then, parallel algorithms for variations of the Hough Transform and the Radon Transform for line detection are introduced. An algorithm for Parallel Monte Carlo Search applied to line detection is also introduced. Their algorithmic complexities are discussed. Finally, implementations on multi-GPU and multicore architectures are discussed.
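The vote-counting process at the heart of the Hough Transform can be sketched as follows (a minimal sequential sketch, not the authors' parallel implementation; parameter names are illustrative). Each point votes for every (θ, ρ) cell it could lie on under the normal-form parameterization ρ = x·cos θ + y·sin θ, and the peak cell identifies the dominant line.

```python
import math

def hough_lines(points, n_theta=180, rho_res=1.0, diag=100.0):
    """Accumulate votes over the line parameterization rho = x*cos(t) + y*sin(t)
    and return the best cell as (theta, rho, votes)."""
    acc = {}
    for x, y in points:
        for k in range(n_theta):
            theta = k * math.pi / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            cell = (k, int(round((rho + diag) / rho_res)))  # quantize rho
            acc[cell] = acc.get(cell, 0) + 1
    (k, r), votes = max(acc.items(), key=lambda kv: kv[1])
    return k * math.pi / n_theta, r * rho_res - diag, votes
```

Ten collinear points on the vertical line x = 5 all vote for the cell near θ = 0, ρ = 5, which is why the peak search recovers the line even with some off-line noise points added.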
A Parallel Saturation Algorithm on Shared Memory Architectures
Ezekiel, Jonathan; Siminiceanu
2007-01-01
Symbolic state-space generators are notoriously hard to parallelize. However, the Saturation algorithm implemented in the SMART verification tool differs from other sequential symbolic state-space generators in that it exploits the locality of firing events in asynchronous system models. This paper explores whether event locality can be utilized to efficiently parallelize Saturation on shared-memory architectures. Conceptually, we propose to parallelize the firing of events within a decision diagram node, which is technically realized via a thread pool. We discuss the challenges involved in our parallel design and conduct experimental studies on its prototypical implementation. On a dual-processor dual-core PC, our studies show speed-ups for several example models, e.g., of up to 50% for a Kanban model, when compared to running our algorithm only on a single core.
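The thread-pool idea can be illustrated with a simplified sketch (not the SMART implementation; here "events" are plain functions over a set of local states, a loose stand-in for firing events within one decision-diagram node): independent events are fired concurrently and their results merged once all complete.

```python
from concurrent.futures import ThreadPoolExecutor

def fire_events_parallel(node_state, events, workers=4):
    """Fire independent local events on a node's state via a thread pool.
    Each event maps the current state set to newly reachable local states;
    the union of all results is merged back after the pool drains."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(lambda ev: ev(node_state), events))
    merged = set(node_state)
    for r in results:
        merged |= r
    return merged
```

The merge-after-join step is where a real implementation would instead update the shared decision-diagram node, which is exactly the synchronization challenge the paper discusses.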
Asynchronous Task-Based Parallelization of Algebraic Multigrid
AlOnazi, Amani A.
2017-06-23
As processor clock rates become more dynamic and workloads become more adaptive, the vulnerability to global synchronization that already complicates programming for performance in today's petascale environment will be exacerbated. Algebraic multigrid (AMG), the solver of choice in many large-scale PDE-based simulations, scales well in the weak sense, with fixed problem size per node, on tightly coupled systems when loads are well balanced and core performance is reliable. However, its strong scaling to many cores within a node is challenging. Reducing synchronization and increasing concurrency are vital adaptations of AMG to hybrid architectures. Recent communication-reducing improvements to classical additive AMG by Vassilevski and Yang improve concurrency and increase communication-computation overlap, while retaining convergence properties close to those of standard multiplicative AMG, but remain bulk synchronous. We extend the Vassilevski and Yang additive AMG to asynchronous task-based parallelism using a hybrid MPI+OmpSs (from the Barcelona Supercomputing Center) approach within a node, along with MPI for internode communications. We implement a tiling approach to decompose the grid hierarchy into parallel units within task containers. We compare against the MPI-only BoomerAMG and the Auxiliary-space Maxwell Solver (AMS) in the hypre library for the 3D Laplacian operator and electromagnetic diffusion, respectively. In time to solution for a full solve, an MPI-OmpSs hybrid improves over an all-MPI approach in strong scaling at full core count (32 threads per single Haswell node of the Cray XC40) and maintains this per-node advantage as both weak-scale to thousands of cores, with MPI between nodes.
Parallel Framework for Cooperative Processes
Directory of Open Access Journals (Sweden)
Mitică Craus
2005-01-01
Full Text Available This paper describes the work of an object-oriented framework designed to be used in the parallelization of a set of related algorithms. The idea behind the system we are describing is to have a re-usable framework for running several sequential algorithms in a parallel environment. The algorithms that the framework can be used with have several things in common: they have to run in cycles and the work should be possible to split between several "processing units". The parallel framework uses the message-passing communication paradigm and is organized as a master-slave system. Two applications are presented: an Ant Colony Optimization (ACO) parallel algorithm for the Travelling Salesman Problem (TSP) and an Image Processing (IP) parallel algorithm for the Symmetrical Neighborhood Filter (SNF). The implementations of these applications by means of the parallel framework prove to have good performance: approximately linear speedup and low communication cost.
Pattern recognition with parallel associative memory
Toth, Charles K.; Schenk, Toni
1990-01-01
An examination is conducted of the feasibility of searching for targets in aerial photographs by means of a parallel associative memory (PAM) that is based on the nearest-neighbor algorithm; the Hamming distance is used as a measure of closeness in order to discriminate patterns. Attention has been given to targets typically used for ground-control points. The method developed sorts out approximate target positions where precise localizations are needed, in the course of the data-acquisition process. The majority of control points in different images were correctly identified.
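The nearest-neighbor match rule with Hamming distance that underlies the PAM can be sketched as follows. This is a plain sequential illustration (the PAM evaluates all template distances in parallel in hardware), and the function name and data are invented:

```python
import numpy as np

def hamming_nn(templates, pattern):
    """Return the index of the stored binary template closest to the
    query pattern under Hamming distance (count of differing bits)."""
    t = np.asarray(templates, dtype=np.uint8)
    p = np.asarray(pattern, dtype=np.uint8)
    distances = (t != p).sum(axis=1)  # Hamming distance to each template
    return int(distances.argmin())

templates = [[0, 0, 1, 1],   # target template A
             [1, 1, 0, 0],   # target template B
             [1, 0, 1, 0]]   # target template C
print(hamming_nn(templates, [1, 1, 0, 1]))  # 1 (closest to template B)
```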
P3T+: A Performance Estimator for Distributed and Parallel Programs
Directory of Open Access Journals (Sweden)
T. Fahringer
2000-01-01
Full Text Available Developing distributed and parallel programs on today's multiprocessor architectures is still a challenging task. Particularly distressing is the lack of effective performance tools that support the programmer in evaluating changes in code, problem and machine sizes, and target architectures. In this paper we introduce P3T+, a performance estimator for mostly regular HPF (High Performance Fortran) programs that also partially covers message-passing programs (MPI). P3T+ is unique in modeling programs, compiler code transformations, and parallel and distributed architectures. It computes at compile time a variety of performance parameters including work distribution, number of transfers, amount of data transferred, transfer times, computation times, and number of cache misses. Several novel technologies are employed to compute these parameters: loop iteration spaces, array access patterns, and data distributions are modeled by employing highly effective symbolic analysis. Communication is estimated by simulating the behavior of the communication library used by the underlying compiler. Computation times are predicted through pre-measured kernels on every target architecture of interest. We carefully model the most critical architecture-specific factors such as cache line sizes, number of cache lines available, startup times, message transfer time per byte, etc. P3T+ has been implemented and is closely integrated with the Vienna High Performance Compiler (VFC) to support programmers in developing parallel and distributed applications. Experimental results for realistic kernel codes taken from real-world applications are presented to demonstrate both the accuracy and the usefulness of P3T+.
Grasp planning for a reconfigurable parallel robot with an underactuated arm structure
Directory of Open Access Journals (Sweden)
M. Riedel
2010-12-01
Full Text Available In this paper, a novel approach to grasp planning is applied to find appropriate grasp points for a reconfigurable parallel robot called PARAGRIP (Parallel Gripping). This new handling system is able to manipulate objects in six-dimensional Cartesian space with several robotic arms using only six actuated joints. After grasping, the contact elements at the ends of the underactuated arm mechanisms are connected to the object, which forms a closed-loop mechanism similar to the architecture of parallel manipulators. As the mounting and grasp points of the arms can easily be changed, the manipulator can be reconfigured to match the user's preferences and needs. This paper raises the question of how and where these grasp points should be placed on the object to perform well for a certain manipulation task.
This paper was presented at the IFToMM/ASME International Workshop on Underactuated Grasping (UG2010, 19 August 2010, Montréal, Canada.
A Parallel Compact Multi-Dimensional Numerical Algorithm with Aeroacoustics Applications
Povitsky, Alex; Morris, Philip J.
1999-01-01
In this study we propose a novel method to parallelize high-order compact numerical algorithms for the solution of three-dimensional PDEs (Partial Differential Equations) in a space-time domain. For this numerical integration most of the computer time is spent in the computation of spatial derivatives at each stage of the Runge-Kutta temporal update. The most efficient direct method to compute spatial derivatives on a serial computer is a version of Gaussian elimination for narrow linear banded systems known as the Thomas algorithm. In a straightforward pipelined implementation of the Thomas algorithm, processors are idle due to the forward and backward recurrences of the Thomas algorithm. To utilize processors during this time, we propose to use them for either non-local, data-independent computations, solving lines in the next spatial direction, or local, data-dependent computations by the Runge-Kutta method. To achieve this goal, control of processor communication and computations by a static schedule is adopted. Thus, our parallel code is driven by a communication and computation schedule instead of the usual "creative programming" approach. The obtained parallelization speed-up of the novel algorithm is about twice that of the standard pipelined algorithm and close to that of the explicit DRP algorithm.
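The Thomas algorithm at the heart of the pipelined scheme is the standard tridiagonal solver; a serial sketch makes explicit the two recurrences that idle the processors in a naive pipeline (variable names here are illustrative):

```python
import numpy as np

def thomas(a, b, c, d):
    """Thomas algorithm for a tridiagonal system: a is the sub-diagonal
    (a[0] unused), b the diagonal, c the super-diagonal (c[-1] unused),
    d the right-hand side. The forward sweep and the back substitution
    are the two sequential recurrences discussed in the abstract."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Each line of the grid requires one such solve, which is why the paper fills the recurrence latency with work from other lines or from the Runge-Kutta update.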
Parallel Monte Carlo reactor neutronics
International Nuclear Information System (INIS)
Blomquist, R.N.; Brown, F.B.
1994-01-01
The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD distributed memory, and a workstation network with an uneven interactive load. Speedups linear in the number of nodes were achieved.
Circuit and bond polytopes on series–parallel graphs
Borne , Sylvie; Fouilhoux , Pierre; Grappe , Roland; Lacroix , Mathieu; Pesneau , Pierre
2015-01-01
International audience; In this paper, we describe the circuit polytope on series–parallel graphs. We first show the existence of a compact extended formulation. Though not being explicit, its construction process helps us to inductively provide the description in the original space. As a consequence, using the link between bonds and circuits in planar graphs, we also describe the bond polytope on series–parallel graphs.
Large amplitude parallel propagating electromagnetic oscillitons
International Nuclear Information System (INIS)
Cattaert, Tom; Verheest, Frank
2005-01-01
Earlier systematic nonlinear treatments of parallel propagating electromagnetic waves have been given within a fluid dynamic approach, in a frame where the nonlinear structures are stationary and various constraining first integrals can be obtained. This has led to the concept of oscillitons, which has found application in various space plasmas. The present paper differs in three main aspects from the previous studies: first, the invariants are derived in the plasma frame, as customary in the Sagdeev method, thus retaining in Maxwell's equations all possible effects. Second, a single differential equation is obtained for the parallel fluid velocity, in a form reminiscent of the Sagdeev integrals, hence allowing a fully nonlinear discussion of the oscilliton properties, at such amplitudes as the underlying Mach number restrictions allow. Third, the transition to weakly nonlinear whistler oscillitons is done in an analytical rather than a numerical fashion.
Computation and parallel implementation for early vision
Gualtieri, J. Anthony
1990-01-01
The problem of early vision is to transform one or more retinal illuminance images (pixel arrays) into image representations built out of primitive visual features such as edges, regions, disparities, and clusters. These transformed representations form the input to later vision stages that perform higher-level vision tasks, including matching and recognition. Researchers developed algorithms for: (1) edge finding in the scale-space formulation; (2) correlation methods for computing matches between pairs of images; and (3) clustering of data by neural networks. These algorithms are formulated for parallel implementation on SIMD machines, such as the Massively Parallel Processor, a 128 x 128 array processor with 1024 bits of local memory per processor. For some cases, researchers can show speedups of three orders of magnitude over serial implementations.
A Study of Parallels Between Antarctica South Pole Traverse Equipment and Lunar/Mars Surface Systems
Mueller, Robert P.; Hoffman, Stephen, J.; Thur, Paul
2010-01-01
The parallels between an actual Antarctic South Pole re-supply traverse conducted by the National Science Foundation (NSF) Office of Polar Programs in 2009 and the latest mission architecture concepts being generated by the United States National Aeronautics and Space Administration (NASA) for lunar and Mars surface-system scenarios have been studied. The challenges faced by both endeavors are similar, since both must deliver equipment and supplies to support operations in an extreme environment with little margin for error in order to be successful. By carefully and closely monitoring the manifest and operational support equipment lists that enable this South Pole traverse, functional areas have been identified. The equipment required to support these functions is listed with relevant properties such as mass, volume, spare parts, and maintenance schedules. This equipment is compared to space systems currently in use and projected to be required to support equivalent and parallel functions in lunar and Mars missions, in order to provide a level of realistic benchmarking. Space operations have historically required significant amounts of support equipment and tools to operate and maintain the space systems that are the primary focus of the mission. By gaining insight and expertise in Antarctic South Pole traverses, space missions can use the experience gained over the last half century of Antarctic operations to design for operations, maintenance, dual use, robustness, and safety, resulting in a more cost-effective, user-friendly, and lower-risk surface system on the Moon and Mars. It is anticipated that the U.S. Antarctic Program (USAP) will also realize benefits from this interaction with NASA in at least two areas: an understanding of how NASA plans and carries out its missions, and possible improved efficiency through factors such as weight savings, alternative technologies, or modifications in training and
DEFF Research Database (Denmark)
Kosbar, Tamer R.; Sofan, Mamdouh A.; Waly, Mohamed A.
2015-01-01
The phosphoramidites of DNA monomers of 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine (Y) and 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine LNA (Z) are synthesized, and the thermal stability at pH 7.2 and 8.2 of anti-parallel triplexes modified with these two monomers is determined. When the anti… about 6.1 °C when the TFO strand was modified with Z and the Watson-Crick strand with adenine-LNA (AL). The molecular modeling results showed that, in the case of nucleobases Y and Z, a hydrogen bond (1.69 and 1.72 Å, respectively) was formed between the protonated 3-aminopropyn-1-yl chain and one of the phosphate groups in the Watson-Crick strand. Also, it was shown that the nucleobase Y made good stacking and binding with the other nucleobases in the TFO and Watson-Crick duplex, respectively. In contrast, the nucleobase Z with the LNA moiety was forced to twist out of the plane of the Watson-Crick base pair, which…
Parallel consensual neural networks.
Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H
1997-01-01
A new type of a neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.
Classification of domains of closed operators
International Nuclear Information System (INIS)
Lassner, G.; Timmermann, W.
1975-01-01
The structure of domains of closed operators in Hilbert space is investigated by means of sequence spaces. The final classification provides three classes of these domains. Necessary and sufficient conditions for the equivalence of these domains are obtained in the form of the equivalence of corresponding sequences of natural numbers. A connection with perturbation theory is mentioned.
Parallel transport in ideal magnetohydrodynamics and applications to resistive wall modes
International Nuclear Information System (INIS)
Finn, J.M.; Gerwin, R.A.
1996-01-01
It is shown that in magnetohydrodynamics (MHD) with an ideal Ohm's law, in the presence of parallel heat flux, density gradient, temperature gradient, and parallel compression, but in the absence of perpendicular compressibility, there is an exact cancellation of the parallel transport terms. This cancellation is due to the fact that magnetic flux is advected in the presence of an ideal Ohm's law, and therefore parallel transport of temperature and density gives the same result as perpendicular advection of the same quantities. Discussions are also presented regarding parallel viscosity and parallel velocity shear, and the generalization to toroidal geometry. These results suggest that a correct generalization of the Hammett-Perkins fluid operator [G. W. Hammett and F. W. Perkins, Phys. Rev. Lett. 64, 3019 (1990)] to simulate Landau damping for electromagnetic modes must give an operator that acts on the dynamics parallel to the perturbed magnetic field lines. copyright 1996 American Institute of Physics
A Parallel Particle Swarm Optimizer
National Research Council Canada - National Science Library
Schutte, J. F; Fregly, B .J; Haftka, R. T; George, A. D
2003-01-01
.... Motivated by a computationally demanding biomechanical system identification problem, we introduce a parallel implementation of a stochastic population based global optimizer, the Particle Swarm...
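The abstract is truncated, but a minimal serial particle swarm optimizer conveys what is being parallelized: in the parallel version, the fitness evaluations for the particles would be farmed out concurrently. Parameter values here are conventional textbook choices, not those of the paper:

```python
import numpy as np

def pso(f, dim, n_particles=20, iters=100, seed=0):
    """Minimal serial particle swarm optimizer minimizing f over R^dim.
    Each particle tracks its personal best; the swarm tracks a global
    best; velocities blend inertia with pulls toward both bests."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))    # positions
    v = np.zeros_like(x)                          # velocities
    pbest = x.copy()
    pval = np.array([f(p) for p in x])            # the costly, parallelizable step
    g = pbest[pval.argmin()].copy()               # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

best, val = pso(lambda p: (p ** 2).sum(), dim=3)  # sphere test function
```

The per-iteration fitness loop is embarrassingly parallel across particles, which is precisely what motivates a parallel implementation for expensive biomechanical objective functions.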
Patterns for Parallel Software Design
Ortega-Arjona, Jorge Luis
2010-01-01
Essential reading to understand patterns for parallel programming Software patterns have revolutionized the way we think about how software is designed, built, and documented, and the design of parallel software requires you to consider other particular design aspects and special skills. From clusters to supercomputers, success heavily depends on the design skills of software developers. Patterns for Parallel Software Design presents a pattern-oriented software architecture approach to parallel software design. This approach is not a design method in the classic sense, but a new way of managin
DEFF Research Database (Denmark)
Christensen, Mark Schram; Ehrsson, H Henrik; Nielsen, Jens Bo
2013-01-01
… adduction-abduction movements symmetrically or in parallel with real-time congruent or incongruent visual feedback of the movements. One network, consisting of bilateral superior and middle frontal gyrus and supplementary motor area (SMA), was more active when subjects performed parallel movements, whereas a different network, involving bilateral dorsal premotor cortex (PMd), primary motor cortex, and SMA, was more active when subjects viewed parallel movements while performing either symmetrical or parallel movements. Correlations between behavioral instability and brain activity were present in right lateral…
Parallel processing of two-dimensional Sn transport calculations
International Nuclear Information System (INIS)
Uematsu, M.
1997-01-01
A parallel processing method for the two-dimensional Sn transport code DOT3.5 has been developed to achieve a drastic reduction in computation time. In the proposed method, parallelization is achieved with angular domain decomposition and/or space domain decomposition. The calculational speed of parallel processing by angular domain decomposition is largely influenced by frequent communications between processing elements. To assess parallelization efficiency, sample problems with up to 32 x 32 spatial meshes were solved with a Sun workstation using the PVM message-passing library. As a result, parallel calculation using 16 processing elements, for example, was found to be nine times as fast as that with one processing element. As for parallel processing by geometry segmentation, the influence of processing-element communications on computation time is small; however, discontinuity at the segment boundary degrades convergence speed. To accelerate the convergence, an alternate sweep of angular flux in conjunction with space domain decomposition and a two-step rescaling method, consisting of segmentwise rescaling and ordinary pointwise rescaling, have been developed. By applying the developed method, the number of iterations needed to obtain a converged flux solution was reduced by a factor of 2. As a result, parallel calculation using 16 processing elements was found to be 5.98 times as fast as the original DOT3.5 calculation.
Kinematics analysis and simulation of a new underactuated parallel robot
Directory of Open Access Journals (Sweden)
Wenxu YAN
2017-04-01
Full Text Available In the traditional robot, the number of degrees of freedom is equal to the number of driving motors, which causes defects such as low efficiency. To overcome that problem, a new underactuated parallel robot based on the traditional parallel robot is presented. The structural characteristics and working principles of the underactuated parallel robot are analyzed. The forward and inverse solutions are derived by way of space analytic geometry and vector algebra. The kinematics model is established, and MATLAB is employed to verify the accuracy of the forward and inverse solutions and to identify the optimal workspace. The simulation results show that the robot can switch between three and four degrees of freedom with only three driving motors, improving the efficiency of robot grasping, and that it has the characteristics of a large workspace, high-speed operation, high positioning accuracy, and low manufacturing cost, giving it a wide range of potential industrial applications.
School Closings in Philadelphia
Jack, James; Sludden, John
2013-01-01
In 2012, the School District of Philadelphia closed six schools. In 2013, it closed 24. The closure of 30 schools has occurred amid a financial crisis, headlined by the district's $1.35 billion deficit. School closures are one piece of the district's plan to cut expenditures and close its budget gap. The closures are also intended to make…
Parallel 3-D method of characteristics in MPACT
International Nuclear Information System (INIS)
Kochunas, B.; Downar, T. J.; Liu, Z.
2013-01-01
A new parallel 3-D MOC kernel has been developed and implemented in MPACT which makes use of the modular ray tracing technique to reduce computational requirements and to facilitate parallel decomposition. The parallel model makes use of both distributed and shared memory parallelism, which are implemented with the MPI and OpenMP standards, respectively. The kernel is capable of parallel decomposition of problems in space, angle, and by characteristic rays up to O(10^4) processors. Initial verification of the parallel 3-D MOC kernel was performed using the Takeda 3-D transport benchmark problems. The eigenvalues computed by MPACT are within the statistical uncertainty of the benchmark reference and agree well with the averages of other participants. The MPACT k-eff differs from the benchmark results for the rodded and unrodded cases by 11 and -40 pcm, respectively. The calculations were performed for various numbers of processors and parallel decompositions up to 15625 processors, all producing the same result at convergence. The parallel efficiency of the worst case was 60%, while very good efficiency (>95%) was observed for cases using 500 processors. The overall run time was 231 seconds for the 500-processor case and 19 seconds for the case with 15625 processors. Ongoing work is focused on developing theoretical performance models and on the implementation of acceleration techniques to minimize the number of iterations to converge. (authors)
Fast image processing on parallel hardware
International Nuclear Information System (INIS)
Bittner, U.
1988-01-01
Current digital imaging modalities in the medical field incorporate parallel hardware which is heavily used in the stage of image formation, as in CT/MR image reconstruction or in DSA real-time subtraction. In order to make image post-processing as efficient as image acquisition, new software approaches have to be found which take full advantage of the parallel hardware architecture. This paper describes the implementation of a two-dimensional median filter which can serve as an example for the development of such an algorithm. The algorithm is analyzed by viewing it as a complete parallel sort of the k pixel values in the chosen window, which leads to a generalization to rank-order operators and other closely related filters reported in the literature. A section about the theoretical basis of the algorithm gives hints on how to characterize operations suitable for implementation on pipeline processors and on the way to find the appropriate algorithms. Finally, some results on the computation time and the usefulness of median filtering in radiographic imaging are given.
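The rank-order view of the median filter described above can be sketched directly: each output pixel is the middle rank of a sorted window. This serial version only illustrates the operation, not the paper's pipelined parallel implementation; the reflection-based edge handling is an assumption for the sketch.

```python
import numpy as np

def median_filter_2d(img, k=3):
    """2-D median filter with a k x k window: each output pixel is the
    middle-rank element of its sorted neighborhood. Edges are handled
    by reflecting the image at its borders."""
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + k, j:j + k]
            out[i, j] = np.median(window)   # full sort -> middle rank
    return out

noisy = np.array([[1, 1, 1],
                  [1, 99, 1],    # impulse noise
                  [1, 1, 1]])
print(median_filter_2d(noisy))   # the 99 spike is removed
```

Replacing `np.median` with another rank (e.g. the minimum or maximum) yields the generalization to rank-order operators mentioned in the abstract.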
PARALLEL IMPORT: REALITY FOR RUSSIA
Directory of Open Access Journals (Sweden)
Т. А. Сухопарова
2014-01-01
Full Text Available The problem of parallel import is an urgent question today. The legalization of parallel import in Russia is expedient; this statement is based on an analysis of opposing expert opinions. At the same time, it is necessary to consider the negative consequences of this decision and to apply remedies to minimize them.
Rega, Joseph Mark
2003-01-01
Master's dissertation - Universidade Federal de Santa Catarina, Centro de Comunicação e Expressão, Programa de Pós-Graduação em Inglês e Literatura Correspondente. The recent surge in cyberspace science fiction follows previous trends within the genre, i.e. those connected with future city-space and outer space, and is an inevitable result of economic forces. There has always been a close relationship between capitalism and spatial expansion, compelled by technological innovations that ha...
The Galley Parallel File System
Nieuwejaar, Nils; Kotz, David
1996-01-01
Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.
Parallelization of the FLAPW method
International Nuclear Information System (INIS)
Canning, A.; Mannstadt, W.; Freeman, A.J.
1999-01-01
The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about one hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel computer.
Parallelization of the FLAPW method
Canning, A.; Mannstadt, W.; Freeman, A. J.
2000-08-01
The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining structural, electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work, we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel supercomputer.
Compressing Data Cube in Parallel OLAP Systems
Directory of Open Access Journals (Sweden)
Frank Dehne
2007-03-01
Full Text Available This paper proposes an efficient algorithm to compress the cubes in the progress of the parallel data cube generation. This low overhead compression mechanism provides block-by-block and record-by-record compression by using tuple difference coding techniques, thereby maximizing the compression ratio and minimizing the decompression penalty at run-time. The experimental results demonstrate that the typical compression ratio is about 30:1 without sacrificing running time. This paper also demonstrates that the compression method is suitable for Hilbert Space Filling Curve, a mechanism widely used in multi-dimensional indexing.
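Tuple difference coding can be illustrated with a prefix-sharing sketch: since sorted cube records share long prefixes, each record after the first needs only the length of the shared prefix plus the differing suffix. This is a reconstruction of the general idea, not the paper's exact block-by-block format:

```python
def diff_encode(records):
    """Store the first tuple in full; for each following tuple store
    (length of prefix shared with the previous tuple, remaining suffix)."""
    out = [records[0]]
    for prev, cur in zip(records, records[1:]):
        i = 0
        while i < len(cur) and prev[i] == cur[i]:
            i += 1
        out.append((i, cur[i:]))  # common-prefix length + suffix
    return out

def diff_decode(encoded):
    """Invert diff_encode by rebuilding each tuple from its predecessor."""
    records = [encoded[0]]
    for i, suffix in encoded[1:]:
        records.append(records[-1][:i] + suffix)
    return records

rows = [(2001, "US", "NY", 5), (2001, "US", "SF", 7), (2002, "DE", "B", 1)]
assert diff_decode(diff_encode(rows)) == rows
```

Decompression is a single pass over the encoded records, which is what keeps the run-time decompression penalty low.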
New parallel SOR method by domain partitioning
Energy Technology Data Exchange (ETDEWEB)
Xie, Dexuan [Courant Inst. of Mathematical Sciences New York Univ., NY (United States)
1996-12-31
In this paper, we propose and analyze a new parallel SOR method, the PSOR method, formulated by using domain partitioning together with an interprocessor data-communication technique. For the 5-point approximation to the Poisson equation on a square, we show that the ordering of the PSOR based on the strip partition leads to a consistently ordered matrix, and hence the PSOR and the SOR using the row-wise ordering have the same convergence rate. However, in general, the ordering used in PSOR may not be "consistently ordered". So, there is a need to analyze the convergence of PSOR directly. In this paper, we present a PSOR theory, and show that the PSOR method can have the same asymptotic rate of convergence as the corresponding sequential SOR method for a wide class of linear systems in which the matrix is "consistently ordered". Finally, we demonstrate the parallel performance of the PSOR method on four different message passing multiprocessors (a KSR1, the Intel Delta, an Intel Paragon and an IBM SP2), along with a comparison with the point Red-Black and four-color SOR methods.
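For reference, the sequential row-wise SOR baseline for the 5-point Poisson problem, whose convergence rate the strip-partitioned PSOR is shown to match, can be sketched as follows (the interprocessor data exchange of PSOR is omitted; parameter values are illustrative):

```python
import numpy as np

def sor_poisson(f, h, omega=1.8, sweeps=2000):
    """Row-wise SOR for the 5-point discretization of -Laplace(u) = f
    on the unit square with zero boundary values. Each update blends
    the Gauss-Seidel value with the current value using the
    over-relaxation factor omega."""
    n = f.shape[0]
    u = np.zeros((n + 2, n + 2))           # includes the boundary ring
    for _ in range(sweeps):
        for i in range(1, n + 1):          # natural row-wise ordering
            for j in range(1, n + 1):
                gs = 0.25 * (u[i - 1, j] + u[i + 1, j] + u[i, j - 1]
                             + u[i, j + 1] + h * h * f[i - 1, j - 1])
                u[i, j] += omega * (gs - u[i, j])  # over-relaxation
    return u[1:-1, 1:-1]
```

PSOR would assign strips of rows to different processors and exchange the boundary rows between sweeps; the point of the paper is that this reordering does not degrade the asymptotic convergence rate.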
International Nuclear Information System (INIS)
Bao Bo-Cheng; Feng Fei; Dong Wei; Pan Sai-Hu
2013-01-01
A flux-controlled memristor characterized by a smooth cubic nonlinearity is taken as an example, upon which the voltage-current relationships (VCRs) of two parallel memristive circuits, a parallel memristor and capacitor circuit (the parallel MC circuit) and a parallel memristor and inductor circuit (the parallel ML circuit), are investigated. The results indicate that the VCR of these two parallel memristive circuits is closely related to the circuit parameters and to the frequency and amplitude of the sinusoidal voltage stimulus. An equivalent circuit model of the memristor is built, upon which circuit simulations and experimental measurements of both the parallel MC circuit and the parallel ML circuit are performed, and the results verify the theoretical analysis.
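The flux-controlled cubic model can be simulated in a few lines, assuming the common charge-flux law q(phi) = a*phi + b*phi**3, so that the memductance is W(phi) = a + 3*b*phi**2 and i = W(phi)*v. The parameter values below are arbitrary illustrations, not those of the paper:

```python
import numpy as np

def memristor_vcr(a=0.1, b=0.3, amp=1.0, freq=1.0, steps=20000):
    """Response of a flux-controlled cubic memristor to a sinusoidal
    voltage: integrate dphi/dt = v, then apply i = (a + 3*b*phi**2)*v.
    Returns the (v, i) trace, whose v-i plot is a pinched hysteresis
    loop passing through the origin."""
    t = np.linspace(0.0, 2.0 / freq, steps)
    v = amp * np.sin(2 * np.pi * freq * t)   # sinusoidal stimulus
    phi = np.cumsum(v) * (t[1] - t[0])       # flux: integral of v dt
    w = a + 3 * b * phi ** 2                 # memductance dq/dphi > 0
    i = w * v                                # the memristor VCR
    return v, i
```

Because i is v scaled by a positive state-dependent memductance, the current vanishes exactly when the voltage does, giving the loop pinched at the origin that distinguishes a memristor from an ordinary resistor.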
Is Monte Carlo embarrassingly parallel?
Energy Technology Data Exchange (ETDEWEB)
Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)
2012-07-01
Monte Carlo is often said to be embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
Is Monte Carlo embarrassingly parallel?
International Nuclear Information System (INIS)
Hoogenboom, J. E.
2012-01-01
Monte Carlo is often said to be embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
Parallel integer sorting with medium and fine-scale parallelism
Dagum, Leonardo
1993-01-01
Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. Performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128-processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
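The barrel-sort idea can be sketched serially, with plain lists standing in for processors (the routing step is where a real implementation pays its message-passing overhead; the bucket-assignment rule here is a generic contiguous-range split, not necessarily the paper's exact scheme):

```python
def barrel_sort(keys, n_procs, key_min, key_max):
    """Toy barrel-sort: route each integer key to the 'processor' owning its
    contiguous key sub-range, sort each barrel locally, then concatenate
    the barrels in order."""
    span = key_max - key_min + 1
    barrels = [[] for _ in range(n_procs)]
    for k in keys:
        owner = (k - key_min) * n_procs // span  # contiguous range per proc
        barrels[owner].append(k)                 # 'send' to that processor
    out = []
    for b in barrels:                            # local sort, ordered gather
        out.extend(sorted(b))
    return out

data = [42, 7, 19, 3, 88, 55, 7, 100, 0, 64]
print(barrel_sort(data, 4, 0, 100))  # → [0, 3, 7, 7, 19, 42, 55, 64, 88, 100]
```

Because each barrel covers a disjoint, ordered key range, no merge step is needed after the local sorts — one bulk exchange suffices, which suits machines with high per-message overhead.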
Template based parallel checkpointing in a massively parallel computer system
Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN
2009-01-13
A method and apparatus for a template-based parallel checkpoint save for a massively parallel supercomputer system using a parallel variation of the rsync protocol and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored, for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high-speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
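The template comparison can be sketched as follows (a minimal sketch: the block size and the MD5 checksum are illustrative choices, not specified by the patent):

```python
import hashlib

BLOCK = 16  # toy block size in bytes (illustrative)

def checksums(data):
    """Per-block checksums, as would be stored with the template checkpoint."""
    return [hashlib.md5(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def delta_blocks(data, template_sums):
    """Return (index, block) pairs for blocks whose checksum differs from
    the template -- the only data a node would need to transmit."""
    out = []
    for i in range(0, len(data), BLOCK):
        j = i // BLOCK
        block = data[i:i + BLOCK]
        if j >= len(template_sums) or hashlib.md5(block).hexdigest() != template_sums[j]:
            out.append((j, block))
    return out

template = b"A" * 64
node_state = b"A" * 16 + b"B" * 16 + b"A" * 32  # one block changed
changed = delta_blocks(node_state, checksums(template))
print(len(changed))  # → 1  (only the differing block is shipped)
```

Broadcasting the template once to all nodes, then collecting only deltas, is what lets the checkpoint cost scale with the amount of changed state rather than with total memory.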
A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations
Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw
2005-01-01
A parallel Particle Swarm Optimization (PSO) algorithm is presented. Particle swarm optimization is a fairly recent addition to the family of non-gradient based, probabilistic search algorithms that is based on a simplified social model and is closely tied to swarming theory. Although PSO algorithms present several attractive properties to the designer, they are plagued by high computational cost as measured by elapsed time. One approach to reduce the elapsed time is to make use of coarse-grained parallelization to evaluate the design points. Previous parallel PSO algorithms were mostly implemented in a synchronous manner, where all design points within a design iteration are evaluated before the next iteration is started. This approach leads to poor parallel speedup in cases where a heterogeneous parallel environment is used and/or where the analysis time depends on the design point being analyzed. This paper introduces an asynchronous parallel PSO algorithm that greatly improves the parallel efficiency. The asynchronous algorithm is benchmarked on a cluster assembled of Apple Macintosh G5 desktop computers, using the multi-disciplinary optimization of a typical transport aircraft wing as an example.
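A compact synchronous PSO loop makes the synchronization cost visible: nothing advances until every particle of the iteration has been evaluated, which is exactly what the asynchronous variant relaxes. This is a sketch with common textbook coefficient values, not the settings used in the paper:

```python
import random

def sphere(x):
    """Toy objective with its minimum (0) at the origin."""
    return sum(xi * xi for xi in x)

def pso(f, dim=2, n_particles=8, iters=100, seed=0):
    """Minimal synchronous PSO: the swarm's global best only advances after
    all particles of an iteration are evaluated (the per-iteration barrier)."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5      # inertia and attraction coefficients
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    gbest = min(pbest, key=f)
    for _ in range(iters):
        for i in range(n_particles):
            x, v, p = xs[i], vs[i], pbest[i]
            for d in range(dim):
                v[d] = (w * v[d]
                        + c1 * rng.random() * (p[d] - x[d])
                        + c2 * rng.random() * (gbest[d] - x[d]))
                x[d] += v[d]
            if f(x) < f(p):
                pbest[i] = x[:]
        gbest = min(pbest, key=f)  # barrier: all evaluations are in
    return gbest

best = pso(sphere)
print(round(sphere(best), 6))
```

If one `f(x)` evaluation is much slower than the others (a heterogeneous cluster, or a design point that takes longer to analyze), every other processor idles at that barrier — the source of the poor speedup the paper addresses.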
Ecological Challenges for Closed Systems
Nelson, Mark; Dempster, William; Allen, John P.
2012-07-01
Closed ecological systems are desirable for a number of purposes. In space life support systems, material closure allows precious life-supporting resources to be kept inside and recycled. Closure in small biospheric systems facilitates detailed measurement of global ecological processes and biogeochemical cycles. Closed testbeds facilitate research topics which require isolation from the outside (e.g. genetically modified organisms; radioisotopes) so their ecological interactions and fluxes can be studied separate from interactions with the outside environment. But to achieve and maintain closure entails solving complex ecological challenges. These challenges include being able to handle faster cycling rates and accentuated daily and seasonal fluxes of critical life elements such as carbon dioxide, oxygen, water, macro- and micro-nutrients. The problems of achieving sustainability in closed systems for life support include how to handle atmospheric dynamics including trace gases, producing a complete human diet, recycling nutrients and maintaining soil fertility, sustaining healthy air and water, and preventing the loss of crucial elements from active circulation. In biospheric facilities the challenge is also to produce analogues to natural biomes and ecosystems, studying processes of self-organization and adaptation in systems that allow specification or determination of state variables and cycles which may be followed through all interactions from atmosphere to soils. Other challenges include the dynamics and genetics of small populations, the psychological challenges for small isolated human groups and measures and options which may be necessary to ensure long-term operation of closed ecological systems.
Parallel education: what is it?
Amos, Michelle Peta
2017-01-01
In the history of education it has long been discussed that single-sex and coeducation are the two models of education present in schools. With the introduction of parallel schools over the last 15 years, there has been very little research into this 'new model'. Many people do not understand what it means for a school to be parallel or they confuse a parallel model with co-education, due to the presence of both boys and girls within the one institution. Therefore, the main obj...
Balanced, parallel operation of flashlamps
International Nuclear Information System (INIS)
Carder, B.M.; Merritt, B.T.
1979-01-01
A new energy store, the Compensated Pulsed Alternator (CPA), promises to be a cost effective substitute for capacitors to drive flashlamps that pump large Nd:glass lasers. Because the CPA is large and discrete, it will be necessary that it drive many parallel flashlamp circuits, presenting a problem in equal current distribution. Current division to +- 20% between parallel flashlamps has been achieved, but this is marginal for laser pumping. A method is presented here that provides equal current sharing to about 1%, and it includes fused protection against short circuit faults. The method was tested with eight parallel circuits, including both open-circuit and short-circuit fault tests
Parallel computing in plasma physics: Nonlinear instabilities
International Nuclear Information System (INIS)
Pohn, E.; Kamelander, G.; Shoucri, M.
2000-01-01
A Vlasov-Poisson system is used for studying the time evolution of the charge separation at a spatially one- as well as two-dimensional plasma edge. Ions are advanced in time using the Vlasov equation. The whole three-dimensional velocity space is considered, leading to very time-consuming four- resp. five-dimensional fully kinetic simulations. In the 1D simulations electrons are assumed to behave adiabatically, i.e. they are Boltzmann-distributed, leading to a nonlinear Poisson equation. In the 2D simulations a gyro-kinetic approximation is used for the electrons. The plasma is assumed to be initially neutral. The simulations are performed on an equidistant grid. A constant time-step is used for advancing the density distribution function in time. The time evolution of the distribution function is performed using a splitting scheme. Each dimension (x, y, υ_x, υ_y, υ_z) of the phase space is advanced in time separately. The value of the distribution function at the next time is calculated from the value of an - in general - interstitial point at the present time (fractional shift). One-dimensional cubic-spline interpolation is used for calculating the interstitial function values. After the fractional shifts are performed for each dimension of the phase space, a whole time-step for advancing the distribution function is finished. Afterwards the charge density is calculated, the Poisson equation is solved and the electric field is calculated before the next time-step is performed. The fractional shift method sketched above was parallelized for p processors as follows. Considering first the shifts in y-direction, a proper parallelization strategy is to split the grid into p disjoint υ_z-slices, which are sub-grids, each containing a different 1/p-th part of the υ_z range but the whole range of all other dimensions. Each processor is responsible for performing the y-shifts on a different slice, which can be done in parallel without any communication between
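The fractional-shift step can be sketched in one dimension (a minimal sketch: linear interpolation on a periodic grid stands in for the cubic splines used in the paper, and the shift is a constant rather than velocity-dependent):

```python
def fractional_shift(f, shift, dx):
    """Semi-Lagrangian fractional shift: the new value at grid point i is
    the old value at the (generally interstitial) departure point
    i*dx - shift, here obtained by linear interpolation on a periodic grid."""
    n = len(f)
    out = [0.0] * n
    for i in range(n):
        pos = (i - shift / dx) % n          # departure point in grid units
        j = int(pos)
        frac = pos - j                      # interstitial fraction
        out[i] = (1 - frac) * f[j] + frac * f[(j + 1) % n]
    return out

# a square hump advected by 2.5 cells on a periodic 16-point grid
f = [1.0 if 4 <= i <= 7 else 0.0 for i in range(16)]
f = fractional_shift(f, 2.5, 1.0)
peak = max(range(16), key=lambda i: f[i])
print(peak)  # → 7  (the hump, originally spanning cells 4-7, now spans ~6.5-9.5)
```

Because each such shift touches only one phase-space dimension, the grid can be sliced along any *other* dimension and the shifts done processor-locally, which is the parallelization strategy the abstract describes.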
Oriented open-closed string theory revisited
International Nuclear Information System (INIS)
Zwiebach, B.
1998-01-01
String theory on D-brane backgrounds is open-closed string theory. Given the relevance of this fact, we give details and elaborate upon our earlier construction of oriented open-closed string field theory. In order to incorporate explicitly closed strings, the classical sector of this theory is open strings with a homotopy associative A∞ algebraic structure. We build a suitable Batalin-Vilkovisky algebra on moduli spaces of bordered Riemann surfaces, the construction of which involves a few subtleties arising from the open string punctures and cyclicity conditions. All vertices coupling open and closed strings through disks are described explicitly. Subalgebras of the algebra of surfaces with boundaries are used to discuss symmetries of classical open string theory induced by the closed string sector, and to write classical open string field theory on general closed string backgrounds. We give a preliminary analysis of the ghost-dilaton theorem. copyright 1998 Academic Press, Inc
Temporal Precedence Checking for Switched Models and its Application to a Parallel Landing Protocol
Duggirala, Parasara Sridhar; Wang, Le; Mitra, Sayan; Viswanathan, Mahesh; Munoz, Cesar A.
2014-01-01
This paper presents an algorithm for checking temporal precedence properties of nonlinear switched systems. This class of properties subsumes bounded safety and captures requirements about visiting a sequence of predicates within given time intervals. The algorithm handles nonlinear predicates that arise from dynamics-based predictions used in alerting protocols for state-of-the-art transportation systems. It is sound and complete for nonlinear switched systems that robustly satisfy the given property. The algorithm is implemented in the Compare Execute Check Engine (C2E2) using validated simulations. As a case study, a simplified model of an alerting system for closely spaced parallel runways is considered. The proposed approach is applied to this model to check safety properties of the alerting logic for different operating conditions such as initial velocities, bank angles, aircraft longitudinal separation, and runway separation.
AA, closed orbit observation pickup
1980-01-01
Electrostatic pickups around the circumference of the AA served for the measurement of the closed orbits across the wide momentum range of +- 3% to either side of central orbit. The pickups were of the "shoebox" type, with diagonal cuts, a horizontal and a vertical one mechanically coupled together. They were located where they would not require extra space. The small ones, like the one we see here, were inserted into the vacuum chamber of the BLG (long and narrow) bending magnets. See also 8001372, 8010042, 8010045
AA, closed orbit observation pickup
CERN PhotoLab
1980-01-01
Electrostatic pickups around the circumference of the AA served for the measurement of the closed orbits across the wide momentum range of +- 3% to either side of central orbit. The pickups were of the "shoebox" type, with diagonal cuts, a horizontal and a vertical one mechanically coupled together. They were located where they would not require extra space. The wide ones (very wide indeed: 70 cm), like the one we see here, were placed inside the vacuum chamber of the wide quadrupoles QFW, at maximum dispersion. See also 8001372, 8001383, 8010045
AA, closed orbit observation pickup
CERN PhotoLab
1980-01-01
Electrostatic pickups around the circumference of the AA served for the measurement of the closed orbits across the wide momentum range of +- 3% to either side of central orbit. The pickups were of the "shoebox" type, with diagonal cuts, a horizontal and a vertical one mechanically coupled together. They were located where they would not require extra space. The wide ones (very wide indeed: 70 cm), like the one we see here, were placed inside the vacuum chamber of the wide quadrupoles, QFW, at maximum dispersion. See also 8001372,8001383, 8010042
AA, closed orbit observation pickup
CERN PhotoLab
1980-01-01
Electrostatic pickups around the circumference of the AA served for the measurement of the closed orbits across the wide momentum range of +- 3% to either side of central orbit. The pickups were of the "shoebox" type, with diagonal cuts, a horizontal and a vertical one mechanically coupled together. They were located where they would not require extra space. The small ones, like the one we see here, were inserted into the vacuum chamber of the BLG (long and narrow) bending magnets. Werner Sax contemplates his achievement. See also 8001383, 8010042, 8010045.
Space Plastic Recycling System, Phase I
National Aeronautics and Space Administration — Techshot's proposed Space Plastic Recycler (SPR) is an automated closed loop plastic recycling system that allows the automated conversion of disposable ISS...
Workspace Analysis for Parallel Robot
Directory of Open Access Journals (Sweden)
Ying Sun
2013-05-01
As a completely new type of robot, the parallel robot possesses many advantages that the serial robot does not, such as high rigidity, great load-carrying capacity, small error, high precision, small self-weight/load ratio, good dynamic behavior and easy control; hence its range of application continues to grow. In order to find the workspace of a parallel mechanism, a numerical boundary-searching algorithm based on the inverse kinematic solution and the limits on link lengths is introduced. This paper analyses the position workspace and orientation workspace of a six-degree-of-freedom parallel robot. The results show that changing the branch lengths of the parallel mechanism is the main means of enlarging or reducing its workspace, and that the radius of the moving platform has no effect on the size of the workspace but changes its position.
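The boundary-searching idea — sample candidate platform positions and keep those for which the inverse kinematics yields feasible link lengths — can be sketched for a hypothetical planar mechanism with three linear legs (the anchor geometry and stroke limits below are invented for illustration, not from the paper's six-DOF robot):

```python
import math

# hypothetical planar parallel mechanism: three linear actuators run from
# fixed base anchors to a common platform point, each with stroke limits
ANCHORS = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.5)]
L_MIN, L_MAX = 1.0, 3.0

def reachable(x, y):
    """Inverse-kinematics feasibility: every leg length within its limits."""
    return all(L_MIN <= math.hypot(x - ax, y - ay) <= L_MAX
               for ax, ay in ANCHORS)

# numerical workspace scan on a grid; the workspace boundary consists of
# reachable cells that have at least one unreachable neighbour
pts = [(x / 10, y / 10) for x in range(0, 41) for y in range(0, 36)]
inside = sum(reachable(x, y) for x, y in pts)
print(inside > 0)
```

Changing `L_MIN`/`L_MAX` (the branch lengths) grows or shrinks the reachable set, while translating the platform reference point only shifts it — mirroring the paper's two conclusions.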
"Feeling" Series and Parallel Resistances.
Morse, Robert A.
1993-01-01
Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)
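The combination rules behind the straw analogy fit in two functions: resistances add in series (like straw lengths), while conductances add in parallel (like straw cross-sections):

```python
def series(*rs):
    """Resistances in series simply add."""
    return sum(rs)

def parallel(*rs):
    """Parallel resistances combine by summing conductances (1/R)."""
    return 1.0 / sum(1.0 / r for r in rs)

print(series(10, 20, 30))             # → 60
print(parallel(10, 10))               # → 5.0
print(round(parallel(10, 20, 30), 3))  # → 5.455  (= 60/11)
```

Note that the parallel combination is always smaller than the smallest branch — the "wider straw" intuition the activity is meant to build.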
Parallel encoders for pixel detectors
International Nuclear Information System (INIS)
Nikityuk, N.M.
1991-01-01
A new method of fast encoding and determining the multiplicity and coordinates of fired pixels is described. A specific example construction of parallel encoders and MCC for n=49 and t=2 is given. 16 refs.; 6 figs.; 2 tabs
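What such an encoder must deliver can be shown with a serial stand-in for the n = 49, t = 2 example (a sketch only: the paper's scheme computes the same quantities with parallel combinational logic rather than a scan):

```python
def encode(frame, n_cols):
    """Report the multiplicity and (row, col) coordinates of fired pixels in
    a flat binary frame -- the outputs a pixel-detector encoder produces."""
    coords = [(i // n_cols, i % n_cols) for i, hit in enumerate(frame) if hit]
    return len(coords), coords

frame = [0] * 49           # 7 x 7 detector, n = 49
frame[10] = frame[38] = 1  # two fired pixels, t = 2
print(encode(frame, 7))    # → (2, [(1, 3), (5, 3)])
```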
Massively Parallel Finite Element Programming
Heister, Timo
2010-01-01
Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.
Event monitoring of parallel computations
Directory of Open Access Journals (Sweden)
Gruzlikov Alexander M.
2015-06-01
The paper considers the monitoring of parallel computations for detection of abnormal events. It is assumed that computations are organized according to an event model, and monitoring is based on specific test sequences
Massively Parallel Finite Element Programming
Heister, Timo; Kronbichler, Martin; Bangerth, Wolfgang
2010-01-01
Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.
The STAPL Parallel Graph Library
Harshvardhan,
2013-01-01
This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable distributed graph container and a collection of commonly used parallel graph algorithms. The library introduces pGraph pViews that separate algorithm design from the container implementation. It supports three graph processing algorithmic paradigms, level-synchronous, asynchronous and coarse-grained, and provides common graph algorithms based on them. Experimental results demonstrate improved scalability in performance and data size over existing graph libraries on more than 16,000 cores and on internet-scale graphs containing over 16 billion vertices and 250 billion edges. © Springer-Verlag Berlin Heidelberg 2013.
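The level-synchronous paradigm mentioned in the abstract can be sketched with a serial BFS whose per-level barrier marks where a parallel implementation would synchronize (a toy graph and plain dictionaries, not stapl's pGraph API):

```python
from collections import defaultdict

def bfs_levels(edges, source):
    """Level-synchronous BFS: all vertices of the current frontier are
    expanded before the next level starts, so the 'barrier' between levels
    is a natural bulk-synchronous parallelization point."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        nxt = []
        for u in frontier:        # frontier vertices expandable in parallel
            for w in adj[u]:
                if w not in level:
                    level[w] = depth
                    nxt.append(w)
        frontier = nxt            # implicit barrier between levels
    return level

g = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
print(bfs_levels(g, 0))  # → {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```

The asynchronous and coarse-grained paradigms the library also supports trade this strict per-level barrier for more communication overlap.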
Writing parallel programs that work
CERN. Geneva
2012-01-01
Serial algorithms typically run inefficiently on parallel machines. This may sound like an obvious statement, but it is the root cause of why parallel programming is considered to be difficult. The current state of the computer industry is still that almost all programs in existence are serial. This talk will describe the techniques used in the Intel Parallel Studio to provide a developer with the tools necessary to understand the behaviors and limitations of the existing serial programs. Once the limitations are known the developer can refactor the algorithms and reanalyze the resulting programs with the tools in the Intel Parallel Studio to create parallel programs that work. About the speaker Paul Petersen is a Sr. Principal Engineer in the Software and Solutions Group (SSG) at Intel. He received a Ph.D. degree in Computer Science from the University of Illinois in 1993. After UIUC, he was employed at Kuck and Associates, Inc. (KAI) working on auto-parallelizing compiler (KAP), and was involved in th...
Exploiting Symmetry on Parallel Architectures.
Stiller, Lewis Benjamin
1995-01-01
This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs, and it discovered a number of results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.
Parallel algorithms for continuum dynamics
International Nuclear Information System (INIS)
Hicks, D.L.; Liebrock, L.M.
1987-01-01
Simply porting existing parallel programs to a new parallel processor may not achieve the full speedup possible; to achieve the maximum efficiency may require redesigning the parallel algorithms for the specific architecture. The authors discuss here parallel algorithms that were developed first for the HEP processor and then ported to the CRAY X-MP/4, the ELXSI/10, and the Intel iPSC/32. Focus is mainly on the most recent parallel processing results produced, i.e., those on the Intel Hypercube. The applications are simulations of continuum dynamics in which the momentum and stress gradients are important. Examples of these are inertial confinement fusion experiments, severe breaks in the coolant system of a reactor, weapons physics, shock-wave physics. Speedup efficiencies on the Intel iPSC Hypercube are very sensitive to the ratio of communication to computation. Great care must be taken in designing algorithms for this machine to avoid global communication. This is much more critical on the iPSC than it was on the three previous parallel processors
Mappings with closed range and compactness
International Nuclear Information System (INIS)
Iyahen, S.O.; Umweni, I.
1985-12-01
The motivation for this note is the result of E.O. Thorp that a normed linear space E is finite dimensional if and only if every continuous linear map from E into any normed linear space has a closed range. Here, a class of Hausdorff topological groups is introduced; called r-compactifiable topological groups, they include compact groups, locally compact Abelian groups and locally convex linear topological spaces. It is proved that a group in this class which is separable, complete metrizable or locally compact is necessarily compact if its image by a continuous group homomorphism is necessarily closed. It is deduced then that a Hausdorff locally convex space is zero if its image by a continuous additive map is necessarily closed. (author)
Restaurants closed over Christmas
2011-01-01
The restaurants will be closed during the Christmas holiday period: please note that all three CERN Restaurants will be closed from 5 p.m. on Wednesday, 21 December until Wednesday, 4 January inclusive. The Restaurants will reopen on Thursday, 5 January 2012.
Straight nearness spaces | Bentley | Quaestiones Mathematicae
African Journals Online (AJOL)
Straight spaces are spaces for which a continuous map defined on the space which is uniformly continuous on each set of a finite closed cover is then uniformly continuous on the whole space. Previously, straight spaces have been studied in the setting of metric spaces. In this paper, we present a study of straight spaces in ...
Comparative eye-tracking evaluation of scatterplots and parallel coordinates
Directory of Open Access Journals (Sweden)
Rudolf Netzel
2017-06-01
We investigate task performance and reading characteristics for scatterplots (Cartesian coordinates) and parallel coordinates. In a controlled eye-tracking study, we asked 24 participants to assess the relative distance of points in multidimensional space, depending on the diagram type (parallel coordinates or a horizontal collection of scatterplots), the number of data dimensions (2, 4, 6, or 8), and the relative distance between points (15%, 20%, or 25%). For a given reference point and two target points, we instructed participants to choose the target point that was closer to the reference point in multidimensional space. We present a visual scanning model that describes different strategies to solve this retrieval task for both diagram types, and propose corresponding hypotheses that we test using task completion time, accuracy, and gaze positions as dependent variables. Our results show that scatterplots outperform parallel coordinates significantly in 2 dimensions; however, the task was solved more quickly and more accurately with parallel coordinates in 8 dimensions. The eye-tracking data further show significant differences between Cartesian and parallel coordinates, as well as between different numbers of dimensions. For parallel coordinates, there is a clear trend toward shorter fixations and longer saccades with increasing number of dimensions. Using an area-of-interest (AOI) based approach, we identify different reading strategies for each diagram type: for parallel coordinates, the participants' gaze frequently jumped back and forth between pairs of axes, while axes were rarely focused on when viewing Cartesian coordinates. We further found that participants' attention is biased: toward the center of the whole plot for parallel coordinates, and skewed to the center/left side for Cartesian coordinates. We anticipate that these results may support the design of more effective visualizations for multidimensional data.
Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.
2014-08-12
Endpoint-based parallel data processing in a parallel active messaging interface (PAMI) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.
Parallel Implicit Algorithms for CFD
Keyes, David E.
1998-01-01
The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly in the Message Passing Interface (MPI) with parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSc library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSc during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSc framework.
Second derivative parallel block backward differentiation type ...
African Journals Online (AJOL)
Second derivative parallel block backward differentiation type formulas for Stiff ODEs. ... and the methods are inherently parallel and can be distributed over parallel processors. They are ...
A Parallel Approach to Fractal Image Compression
Lubomir Dedera
2004-01-01
The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms from the point of view of both achieved coding and decoding time and the effectiveness of parallelization.
Parallel Computing:. Some Activities in High Energy Physics
Willers, Ian
This paper examines some activities in High Energy Physics that utilise parallel computing. The topic includes all computing from the proposed SIMD front end detectors, the farming applications, high-powered RISC processors and the large machines in the computer centers. We start by looking at the motivation behind using parallelism for general purpose computing. The developments around farming are then described from its simplest form to the more complex system in Fermilab. Finally, there is a list of some developments that are happening close to the experiments.
International Nuclear Information System (INIS)
Strominger, A.
1987-01-01
A gauge invariant cubic action describing bosonic closed string field theory is constructed. The gauge symmetries include local spacetime diffeomorphisms. The conventional closed string spectrum and trilinear couplings are reproduced after spontaneous symmetry breaking. The action S is constructed from the usual ''open string'' field of ghost number minus one half. It is given by the associator of the string field product which is non-vanishing because of associativity anomalies. S does not describe open string propagation because open string states associate and can thereby be shifted away. A field theory of closed and open strings can be obtained by adding to S the cubic open string action. (orig.)
International Nuclear Information System (INIS)
Klahn, F.C.; Nolan, J.H.; Wills, C.
1979-01-01
The closing device closes the upper end of a support tube for monitoring samples. It meshes with the upper connecting piece of the monitoring sample capsule, and loads the capsule within the bore of the support tube, so that it is fixed but can be released. The closing device consists of an interlocking component with a chamber and several ratchets which hang down. The interlocking component surrounds the actuating component for positioning the ratchets. The interlocking and actuating components are movable axially relative to each other. (DG) [de
Parallelization of MCNP4 code by using simple FORTRAN algorithms
International Nuclear Information System (INIS)
Yazid, P.I.; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka.
1993-12-01
Simple FORTRAN algorithms that rely only on open, close, read and write statements, together with disk files and some UNIX commands, have been applied to the parallelization of MCNP4. The code, named MCNPNFS, maintains almost all capabilities of MCNP4 in solving shielding problems. It is able to perform parallel computing on a set of any UNIX workstations connected by a network, regardless of heterogeneity in the hardware system, provided that all processors produce a binary file in the same format. Further, it is confirmed that MCNPNFS can also be executed on the Monte-4 vector-parallel computer. MCNPNFS has been tested intensively by executing 5 photon-neutron benchmark problems, a spent-fuel cask problem and 17 sample problems included in the original code package of MCNP4. Three different workstations, connected by a network, have been used to execute MCNPNFS in parallel. By measuring CPU time, the parallel efficiency is determined to be 58% to 99%, and 86% on average. On Monte-4, MCNPNFS has been executed using 4 processors concurrently and has achieved a parallel efficiency of 79% on average. (author)
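The open/close/read/write coordination the abstract describes can be sketched in a few lines. This is an illustrative stand-in, not the actual MCNPNFS protocol: the file names, task format, and the dummy tally are assumptions made for the sketch.

```python
import os

# Sketch of file-based master/worker coordination: a master splits the
# particle histories into batches, writes one task file per worker, and
# merges the partial tallies the workers write back to disk. File names
# and the tally format are illustrative, not the MCNPNFS conventions.

def write_batches(total_histories, n_workers, workdir):
    """Split the workload and write one task file per worker."""
    per_worker = total_histories // n_workers
    paths = []
    for rank in range(n_workers):
        path = os.path.join(workdir, "task_%d.in" % rank)
        with open(path, "w") as f:
            f.write(str(per_worker))
        paths.append(path)
    return paths

def run_worker(task_path, out_path):
    """A worker reads its batch size and writes a partial tally."""
    with open(task_path) as f:
        n = int(f.read())
    partial_tally = n * 0.5   # stand-in for transporting n histories
    with open(out_path, "w") as f:
        f.write(repr(partial_tally))

def merge(out_paths):
    """The master merges the format-compatible partial tallies."""
    total = 0.0
    for p in out_paths:
        with open(p) as f:
            total += float(f.read())
    return total
```

Because workers only exchange plain files, heterogeneous machines can cooperate as long as they agree on the result-file format, which is the property the abstract emphasizes.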
Parallelization Issues and Particle-In-Cell Codes.
Elster, Anne Cathrine
1994-01-01
"Everything should be made as simple as possible, but not simpler." Albert Einstein. The field of parallel scientific computing has concentrated on parallelization of individual modules such as matrix solvers and factorizers. However, many applications involve several interacting modules. Our analyses of a particle-in-cell code modeling charged particles in an electric field show that these accompanying dependencies affect data partitioning and lead to new parallelization strategies concerning processor, memory and cache utilization. Our test-bed, a KSR1, is a distributed memory machine with a globally shared addressing space. However, most of the new methods presented hold generally for hierarchical and/or distributed memory systems. We introduce a novel approach that uses dual pointers on the local particle arrays to keep the particle locations automatically partially sorted. Complexity and performance analyses with accompanying KSR benchmarks have been included for both this scheme and for the traditional replicated grids approach. The latter approach maintains load-balance with respect to particles. However, our results demonstrate that it fails to scale properly for problems with large grids (say, greater than 128-by-128) running on as few as 15 KSR nodes, since the extra storage and computation time associated with adding the grid copies become significant. Our grid partitioning scheme, although harder to implement, does not need to replicate the whole grid. Consequently, it scales well for large problems on highly parallel systems. It may, however, require load balancing schemes for non-uniform particle distributions. Our dual pointer approach may facilitate this through dynamically partitioned grids. We also introduce hierarchical data structures that store neighboring grid-points within the same cache-line by reordering the grid indexing. This alignment produces a 25% savings in cache-hits for a 4-by-4 cache. A consideration of the input data's effect on
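The dual-pointer idea, keeping the local particle array partially sorted in place rather than fully sorting it, can be illustrated with a minimal sketch. The predicate and the flat array layout are hypothetical simplifications, not the thesis's actual KSR data structures:

```python
def partition_particles(particles, in_local):
    """Dual-pointer in-place partition: particles belonging to the local
    subdomain stay at the front, leavers are swapped to the back. The
    array thus stays partially sorted by region after each push, with no
    extra storage and no full sort."""
    lo, hi = 0, len(particles) - 1
    while lo <= hi:
        if in_local(particles[lo]):
            lo += 1                      # keep: advance the front pointer
        else:
            # swap the leaver to the back and shrink the back pointer
            particles[lo], particles[hi] = particles[hi], particles[lo]
            hi -= 1
    return lo  # count of particles still in the local subdomain
```

After the call, `particles[:count]` holds the local particles and `particles[count:]` the candidates for transfer to neighboring processors, which is the property a dynamically partitioned grid can exploit for load balancing.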
On a type of generalized closed sets
Directory of Open Access Journals (Sweden)
Dhananjoy Mandal
2012-01-01
The purpose of this paper is to introduce and study a new class of generalized closed sets in a topological space X, defined in terms of a grill G on X. Explicit characterizations of such sets, along with certain other properties of them, are obtained. As applications, some characterizations of regular and normal spaces are achieved by use of the introduced class of sets.
Parallel fabrication of macroporous scaffolds.
Dobos, Andrew; Grandhi, Taraka Sai Pavan; Godeshala, Sudhakar; Meldrum, Deirdre R; Rege, Kaushal
2018-07-01
Scaffolds generated from naturally occurring and synthetic polymers have been investigated in several applications because of their biocompatibility and tunable chemo-mechanical properties. Existing methods for generation of 3D polymeric scaffolds typically cannot be parallelized, suffer from low throughputs, and do not allow for quick and easy removal of the fragile structures that are formed. Current molds used in hydrogel and scaffold fabrication using solvent casting and porogen leaching are often single-use and do not facilitate 3D scaffold formation in parallel. Here, we describe a simple device and related approaches for the parallel fabrication of macroporous scaffolds. This approach was employed for the generation of macroporous and non-macroporous materials in parallel, in higher throughput and allowed for easy retrieval of these 3D scaffolds once formed. In addition, macroporous scaffolds with interconnected as well as non-interconnected pores were generated, and the versatility of this approach was employed for the generation of 3D scaffolds from diverse materials including an aminoglycoside-derived cationic hydrogel ("Amikagel"), poly(lactic-co-glycolic acid) or PLGA, and collagen. Macroporous scaffolds generated using the device were investigated for plasmid DNA binding and cell loading, indicating the use of this approach for developing materials for different applications in biotechnology. Our results demonstrate that the device-based approach is a simple technology for generating scaffolds in parallel, which can enhance the toolbox of current fabrication techniques. © 2018 Wiley Periodicals, Inc.
Parallel plasma fluid turbulence calculations
International Nuclear Information System (INIS)
Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.
1994-01-01
The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated
Evaluating parallel optimization on transputers
Directory of Open Access Journals (Sweden)
A.G. Chalmers
2003-12-01
The faster processing power of modern computers and the development of efficient algorithms have made it possible for operations researchers to tackle a much wider range of problems than ever before. Further improvements in processing speed can be achieved by utilising relatively inexpensive transputers to process components of an algorithm in parallel. The Davidon-Fletcher-Powell method is one of the most successful and widely used optimisation algorithms for unconstrained problems. This paper examines the algorithm and identifies the components that can be processed in parallel. The results of some experiments with these components are presented, which indicate under what conditions parallel processing with an inexpensive configuration is likely to be faster than the traditional sequential implementations. The performance of the whole algorithm with its parallel components is then compared with the original sequential algorithm. The implementation serves to illustrate the practicalities of speeding up typical OR algorithms in terms of difficulty, effort and cost. The results give an indication of the savings in time a given parallel implementation can be expected to yield.
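For reference, the Davidon-Fletcher-Powell update the paper decomposes can be written in a few NumPy lines; the matrix-vector and outer products are the kind of component a transputer configuration could evaluate in parallel. This is a textbook sketch of the standard DFP formula, not the paper's transputer implementation:

```python
import numpy as np

def dfp_update(H, s, y):
    """One Davidon-Fletcher-Powell update of the inverse-Hessian
    approximation H, given step s = x_new - x_old and gradient change
    y = grad_new - grad_old:
        H' = H + (s s^T)/(s^T y) - (H y y^T H)/(y^T H y).
    The H @ y product and the two rank-one outer products are the
    independently computable pieces."""
    Hy = H @ y
    return H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)
```

A quick correctness check is the secant condition: by construction `dfp_update(H, s, y) @ y` equals `s`, and the update preserves symmetry of H.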
Pattern-Driven Automatic Parallelization
Directory of Open Access Journals (Sweden)
Christoph W. Kessler
1996-01-01
This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.
Free topological vector spaces
Gabriyelyan, Saak S.; Morris, Sidney A.
2016-01-01
We define and study the free topological vector space $\\mathbb{V}(X)$ over a Tychonoff space $X$. We prove that $\\mathbb{V}(X)$ is a $k_\\omega$-space if and only if $X$ is a $k_\\omega$-space. If $X$ is infinite, then $\\mathbb{V}(X)$ contains a closed vector subspace which is topologically isomorphic to $\\mathbb{V}(\\mathbb{N})$. It is proved that if $X$ is a $k$-space, then $\\mathbb{V}(X)$ is locally convex if and only if $X$ is discrete and countable. If $X$ is a metrizable space it is shown ...
Parallel artificial liquid membrane extraction
DEFF Research Database (Denmark)
Gjelstad, Astrid; Rasmussen, Knut Einar; Parmer, Marthe Petrine
2013-01-01
This paper reports development of a new approach towards analytical liquid-liquid-liquid membrane extraction, termed parallel artificial liquid membrane extraction. A donor plate and acceptor plate create a sandwich, in which each sample (human plasma) and acceptor solution is separated by an artificial liquid membrane. Parallel artificial liquid membrane extraction is a modification of hollow-fiber liquid-phase microextraction, where the hollow fibers are replaced by flat membranes in a 96-well plate format.
Cellular automata a parallel model
Mazoyer, J
1999-01-01
Cellular automata can be viewed both as computational models and modelling systems of real processes. This volume emphasises the first aspect. In articles written by leading researchers, sophisticated massive parallel algorithms (firing squad, life, Fischer's primes recognition) are treated. Their computational power and the specific complexity classes they determine are surveyed, while some recent results in relation to chaos from a new dynamic systems point of view are also presented. Audience: This book will be of interest to specialists of theoretical computer science and the parallelism challenge.
Parallel keyed hash function construction based on chaotic maps
International Nuclear Information System (INIS)
Xiao Di; Liao Xiaofeng; Deng Shaojiang
2008-01-01
Recently, a variety of chaos-based hash functions have been proposed. Nevertheless, none of them works efficiently in a parallel computing environment. In this Letter, an algorithm for parallel keyed hash function construction is proposed, whose structure can ensure the uniform sensitivity of the hash value to the message. By means of the mechanism of both changeable-parameter and self-synchronization, the keystream establishes a close relation with the algorithm key and with the content and order of each message block. The entire message is modulated into the chaotic iteration orbit, and the coarse-grained trajectory is extracted as the hash value. Theoretical analysis and computer simulation indicate that the proposed algorithm can satisfy the performance requirements of a hash function. It is simple, efficient, practicable, and reliable. These properties make it a good choice for hashing on parallel computing platforms.
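The structural idea, hashing message blocks independently over chaotic orbits and then combining the per-block values, can be caricatured as follows. This toy is not the Letter's algorithm: the logistic map, the mixing of key and block index into the initial condition, and the final combiner are all illustrative choices, and the result has none of the security properties of a real hash.

```python
from concurrent.futures import ThreadPoolExecutor

def logistic_iter(x, r=3.99, n=64):
    """Iterate the chaotic logistic map x -> r*x*(1-x) n times."""
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

def block_digest(args):
    """Hash one message block. The key and the block index perturb the
    initial condition, so each block's orbit depends on its position and
    on the key, yet no block depends on any other block's result --
    which is what makes the per-block work embarrassingly parallel."""
    index, block, key = args
    x = (key + sum(block) / (256.0 * len(block)) + index / 1024.0) % 1.0
    x = min(max(x, 1e-9), 1.0 - 1e-9)   # keep x inside (0, 1)
    return logistic_iter(x)

def parallel_keyed_hash(message: bytes, key: float, block_size=8) -> int:
    """Toy parallel keyed hash: map blocks to orbit values in parallel,
    then fold them into a 128-bit integer digest."""
    blocks = [message[i:i + block_size]
              for i in range(0, len(message), block_size)]
    with ThreadPoolExecutor() as pool:
        parts = list(pool.map(block_digest,
                              [(i, b, key) for i, b in enumerate(blocks)]))
    acc = 0
    for p in parts:
        acc = (acc * 1000003 + int(p * 2**32)) % (1 << 128)
    return acc
```

Because `Executor.map` preserves input order, the combiner still sees the blocks in message order, so parallel and sequential runs produce the same digest.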
Minding the close relationship.
Harvey, J H; Omarzu, J
1997-01-01
In this theoretical analysis, we argue that a process referred to as minding is essential for a couple to feel mutually close and satisfied in a close relationship over a long period. Minding represents a package of mutual self-disclosure, other forms of goal-oriented behavior aimed at facilitating the relationship, and attributions about self's and other's motivations, intentions, and effort in the relationship. Self-disclosure and attribution activities in minding are aimed at getting to know the other, trying to understand the other's motivations and deeper dispositions as they pertain to the relationship, and showing respect and acceptance for knowledge gained about the other. We link the concept of minding to other major ideas and literatures about how couples achieve closeness: self-disclosure and social penetration, intimacy, empathy and empathic accuracy, and love and self-expansion. We argue that the minding process articulated here has not previously been delineated and that it is a useful composite notion about essential steps in bonding among humans. We also argue that the minding concept stretches our understanding of the interface of attribution and close relationships. We present research possibilities and implications and consider possible alternative positions and counterarguments about the merits of the minding idea for close relationship satisfaction.
Options for Parallelizing a Planning and Scheduling Algorithm
Clement, Bradley J.; Estlin, Tara A.; Bornstein, Benjamin D.
2011-01-01
Space missions have a growing interest in putting multi-core processors onboard spacecraft. For many missions processing power significantly slows operations. We investigate how continual planning and scheduling algorithms can exploit multi-core processing and outline different potential design decisions for a parallelized planning architecture. This organization of choices and challenges helps us with an initial design for parallelizing the CASPER planning system for a mesh multi-core processor. This work extends that presented at another workshop with some preliminary results.
Badlands: A parallel basin and landscape dynamics model
Directory of Open Access Journals (Sweden)
T. Salles
2016-01-01
Over more than three decades, a number of numerical landscape evolution models (LEMs have been developed to study the combined effects of climate, sea-level, tectonics and sediments on Earth surface dynamics. Most of them are written in efficient programming languages, but often cannot be used on parallel architectures. Here, I present a LEM which ports a common core of accepted physical principles governing landscape evolution into a distributed memory parallel environment. Badlands (acronym for BAsin anD LANdscape DynamicS is an open-source, flexible, TIN-based landscape evolution model, built to simulate topography development at various space and time scales.
Numerical simulation of Vlasov equation with parallel tools
International Nuclear Information System (INIS)
Peyroux, J.
2005-11-01
This project aims to make the resolution of Vlasov codes even more powerful through various parallelization tools (MPI, OpenMP...). A simplified test case served as a base for constructing the parallel codes, yielding a data-processing skeleton which, thereafter, could be re-used for increasingly complex models (more than four variables of phase space). This will make it possible to treat more realistic situations linked, for example, to the injection of ultra-short and ultra-intense pulses in inertial fusion plasmas, or to the study of the trapped-ion instability, now taken as being responsible for the generation of turbulence in tokamak plasmas. (author)
Many-Body Mean-Field Equations: Parallel implementation
International Nuclear Information System (INIS)
Vallieres, M.; Umar, S.; Chinn, C.; Strayer, M.
1993-01-01
We describe the implementation of Hartree-Fock Many-Body Mean-Field Equations on a Parallel Intel iPSC/860 hypercube. We first discuss the Nuclear Mean-Field approach in physical terms. Then we describe our parallel implementation of this approach on the Intel iPSC/860 hypercube. We discuss and compare the advantages and disadvantages of the domain partition versus the Hilbert space partition for this problem. We conclude by discussing some timing experiments on various computing platforms
Directory of Open Access Journals (Sweden)
Julian Dontchev
1999-01-01
θ-generalized Λ-sets and R0-, T1/2- and T1-spaces are characterized. The relations with other notions directly or indirectly connected with generalized closed sets are investigated. The notion of TGO-connectedness is introduced.
Parallel Sparse Matrix - Vector Product
DEFF Research Database (Denmark)
Alexandersen, Joe; Lazarov, Boyan Stefanov; Dammann, Bernd
This technical report contains a case study of a sparse matrix-vector product routine, implemented for parallel execution on a compute cluster with both pure MPI and hybrid MPI-OpenMP solutions. C++ classes for sparse data types were developed, and the report shows how these classes can be used...
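The row-decomposed sparse matrix-vector product at the heart of such a case study can be sketched compactly. This Python sketch only mirrors the idea of a CSR data type with a block-parallel product; it is not the report's C++ interface, and the thread pool stands in for the MPI/OpenMP machinery:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

class CSRMatrix:
    """Minimal compressed-sparse-row (CSR) matrix built from a dense
    array: data holds the nonzeros, indices their column numbers, and
    indptr[i]:indptr[i+1] delimits row i."""
    def __init__(self, dense):
        dense = np.asarray(dense, dtype=float)
        self.n_rows = dense.shape[0]
        self.indptr, self.indices, self.data = [0], [], []
        for row in dense:
            for j, v in enumerate(row):
                if v != 0.0:
                    self.indices.append(j)
                    self.data.append(v)
            self.indptr.append(len(self.data))

    def matvec(self, x, n_threads=2):
        """y = A @ x with rows split into contiguous blocks, one block
        per worker -- the same row decomposition an MPI or hybrid
        MPI-OpenMP version distributes across ranks and threads."""
        y = np.zeros(self.n_rows)

        def do_rows(lo):
            for i in range(lo, min(lo + step, self.n_rows)):
                s = 0.0
                for k in range(self.indptr[i], self.indptr[i + 1]):
                    s += self.data[k] * x[self.indices[k]]
                y[i] = s

        step = (self.n_rows + n_threads - 1) // n_threads
        with ThreadPoolExecutor(n_threads) as pool:
            list(pool.map(do_rows, range(0, self.n_rows, step)))
        return y
```

Each worker writes a disjoint slice of `y`, so no synchronization is needed beyond joining the pool; that independence is what makes the row decomposition attractive on clusters.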
[Falsified medicines in parallel trade].
Muckenfuß, Heide
2017-11-01
The number of falsified medicines on the German market has distinctly increased over the past few years. In particular, stolen pharmaceutical products, a form of falsified medicines, have increasingly been introduced into the legal supply chain via parallel trading. The reasons why parallel trading serves as a gateway for falsified medicines are most likely the complex supply chains and routes of transport. It is hardly possible for national authorities to trace the history of a medicinal product that was bought and sold by several intermediaries in different EU member states. In addition, the heterogeneous outward appearance of imported and relabelled pharmaceutical products facilitates the introduction of illegal products onto the market. Official batch release at the Paul-Ehrlich-Institut offers the possibility of checking some aspects that might provide an indication of a falsified medicine. In some circumstances, this may allow the identification of falsified medicines before they come onto the German market. However, this control is only possible for biomedicinal products that have not received a waiver regarding official batch release. For improved control of parallel trade, better networking among the EU member states would be beneficial. European-wide regulations, e.g., for disclosure of the complete supply chain, would help to minimise the risks of parallel trading and hinder the marketing of falsified medicines.
The parallel adult education system
DEFF Research Database (Denmark)
Wahlgren, Bjarne
2015-01-01
for competence development. The Danish university educational system includes two parallel programs: a traditional academic track (candidatus) and an alternative practice-based track (master). The practice-based program was established in 2001 and organized as part time. The total program takes half the time...
Where are the parallel algorithms?
Voigt, R. G.
1985-01-01
Four paradigms that can be useful in developing parallel algorithms are discussed. These include computational complexity analysis, changing the order of computation, asynchronous computation, and divide and conquer. Each is illustrated with an example from scientific computation, and it is shown that computational complexity must be used with great care or an inefficient algorithm may be selected.
Parallel plate transmission line transformer
Voeten, S.J.; Brussaard, G.J.H.; Pemen, A.J.M.
2011-01-01
A Transmission Line Transformer (TLT) can be used to transform high-voltage nanosecond pulses. These transformers rely on the fact that the length of the pulse is shorter than the transmission lines used. This allows connecting the transmission lines in parallel at the input and in series at the
Matpar: Parallel Extensions for MATLAB
Springer, P. L.
1998-01-01
Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.
Massively parallel quantum computer simulator
De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.
2007-01-01
We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray
Massively parallel Fokker-Planck calculations
International Nuclear Information System (INIS)
Mirin, A.A.
1990-01-01
This paper reports that the Fokker-Planck package FPPAC, which solves the complete nonlinear multispecies Fokker-Planck collision operator for a plasma in two-dimensional velocity space, has been rewritten for the Connection Machine 2. This has involved allocation of variables either to the front end or the CM2, minimization of data flow, and replacement of Cray-optimized algorithms with ones suitable for a massively parallel architecture. Calculations have been carried out on various Connection Machines throughout the country. Results and timings on these machines have been compared to each other and to those on the static memory Cray-2. For large problem size, the Connection Machine 2 is found to be cost-efficient
Parallel computing: numerics, applications, and trends
National Research Council Canada - National Science Library
Trobec, Roman; Vajteršic, Marián; Zinterhof, Peter
2009-01-01
... and/or distributed systems. The contributions to this book are focused on topics most concerned in the trends of today's parallel computing. These range from parallel algorithmics, programming, tools, network computing to future parallel computing. Particular attention is paid to parallel numerics: linear algebra, differential equations, numerica...
Experiments with parallel algorithms for combinatorial problems
G.A.P. Kindervater (Gerard); H.W.J.M. Trienekens
1985-01-01
In the last decade many models for parallel computation have been proposed and many parallel algorithms have been developed. However, few of these models have been realized and most of these algorithms are supposed to run on idealized, unrealistic parallel machines. The parallel machines
International Nuclear Information System (INIS)
Heggarty, J.W.
1999-06-01
For almost thirty years, sequential R-matrix computation has been used by atomic physics research groups, from around the world, to model collision phenomena involving the scattering of electrons or positrons with atomic or molecular targets. As considerable progress has been made in the understanding of fundamental scattering processes, new data, obtained from more complex calculations, is of current interest to experimentalists. Performing such calculations, however, places considerable demands on the computational resources to be provided by the target machine, in terms of both processor speed and memory requirement. Indeed, in some instances the computational requirements are so great that the proposed R-matrix calculations are intractable, even when utilising contemporary classic supercomputers. Historically, increases in the computational requirements of R-matrix computation were accommodated by porting the problem codes to a more powerful classic supercomputer. Although this approach has been successful in the past, it is no longer considered to be a satisfactory solution due to the limitations of current (and future) Von Neumann machines. As a consequence, there has been considerable interest in the high performance multicomputers, that have emerged over the last decade which appear to offer the computational resources required by contemporary R-matrix research. Unfortunately, developing codes for these machines is not as simple a task as it was to develop codes for successive classic supercomputers. The difficulty arises from the considerable differences in the computing models that exist between the two types of machine and results in the programming of multicomputers to be widely acknowledged as a difficult, time consuming and error-prone task. Nevertheless, unless parallel R-matrix computation is realised, important theoretical and experimental atomic physics research will continue to be hindered. This thesis describes work that was undertaken in
International Nuclear Information System (INIS)
Larsson-Leander, G.
1979-01-01
Studies of close binary stars are being pursued more vigorously than ever, with about 3000 research papers and notes pertaining to the field being published during the triennium 1976-1978. Many major advances and spectacular discoveries were made, mostly due to increased observational efficiency and precision, especially in the X-ray, radio, and ultraviolet domains. Progress reports are presented in the following areas: observational techniques, methods of analyzing light curves, observational data, physical data, structure and models of close binaries, statistical investigations, and origin and evolution of close binaries. Reports from the Coordinates Programs Committee, the Committee for Extra-Terrestrial Observations and the Working Group on RS CVn binaries are included. (Auth./C.F.)
Nam, Sung Sik; Alouini, Mohamed-Slim; Ko, Young-Chai
2018-01-01
In this paper, we statistically analyze the performance of a threshold-based parallel multiple beam selection scheme for a free-space optical (FSO) based system with wavelength division multiplexing (WDM) in cases where a pointing error has occurred
Building Blocks for the Rapid Development of Parallel Simulations, Phase I
National Aeronautics and Space Administration — Scientists need to be able to quickly develop and run parallel simulations without paying the high price of writing low-level message passing codes using compiled...
Parallel family trees for transfer matrices in the Potts model
Navarro, Cristobal A.; Canfora, Fabrizio; Hitschfeld, Nancy; Navarro, Gonzalo
2015-02-01
The computational cost of transfer matrix methods for the Potts model is related to the question: in how many ways can two layers of a lattice be connected? Answering the question leads to the generation of a combinatorial set of lattice configurations. This set defines the configuration space of the problem, and the smaller it is, the faster the transfer matrix can be computed. The configuration space of generic (q, v) transfer matrix methods for strips is on the order of the Catalan numbers, which grow asymptotically as O(4^m), where m is the width of the strip. Other transfer matrix methods with a smaller configuration space do exist, but they make assumptions on the temperature or the number of spin states, or restrict the structure of the lattice. In this paper we propose a parallel algorithm that uses a sub-Catalan configuration space of O(3^m) to build the generic (q, v) transfer matrix in a compressed form. The improvement is achieved by grouping the original set of Catalan configurations into a forest of family trees, in such a way that the solution to the problem is now computed by solving the root node of each family. As a result, the algorithm becomes exponentially faster than the Catalan approach while remaining highly parallel. The resulting matrix is stored in a compressed form using O(3^m × 4^m) space, making numerical evaluation and decompression faster than evaluating the matrix in its O(4^m × 4^m) uncompressed form. Experimental results for different sizes of strip lattices show that the parallel family trees (PFT) strategy indeed runs exponentially faster than the Catalan Parallel Method (CPM), especially when dealing with dense transfer matrices. In terms of parallel performance, we report strong-scaling speedups of up to 5.7× when running on an 8-core shared memory machine and 28× for a 32-core cluster. The best balance of speedup and efficiency for the multi-core machine was achieved when using p = 4 processors, while for the cluster
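The O(4^m) growth of the Catalan configuration space can be checked numerically: the ratio of consecutive Catalan numbers, C(m+1)/C(m) = 2(2m+1)/(m+2), approaches 4 as the strip width m grows. A quick sketch (the closed-form `comb` expression is the standard Catalan formula, not code from the paper):

```python
from math import comb

def catalan(m):
    """m-th Catalan number C(m) = C(2m, m) / (m + 1), the size of the
    generic (q, v) transfer-matrix configuration space at strip width m."""
    return comb(2 * m, m) // (m + 1)

# Consecutive ratios approach 4, confirming the O(4^m) growth that the
# family-tree grouping reduces to a sub-Catalan O(3^m) space.
ratios = [catalan(m + 1) / catalan(m) for m in (8, 16, 64)]
```

At m = 64 the ratio is already about 3.9, so even modest strip widths make the 4^m-versus-3^m gap between the CPM and PFT configuration spaces enormous.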
Parallel trajectory similarity joins in spatial networks
Shang, Shuo
2018-04-04
The matching of similar pairs of objects, called similarity join, is fundamental functionality in data management. We consider two cases of trajectory similarity joins (TS-Joins), including a threshold-based join (Tb-TS-Join) and a top-k TS-Join (k-TS-Join), where the objects are trajectories of vehicles moving in road networks. Given two sets of trajectories and a threshold θ, the Tb-TS-Join returns all pairs of trajectories from the two sets with similarity above θ. In contrast, the k-TS-Join does not take a threshold as a parameter, and it returns the top-k most similar trajectory pairs from the two sets. The TS-Joins target diverse applications such as trajectory near-duplicate detection, data cleaning, ridesharing recommendation, and traffic congestion prediction. With these applications in mind, we provide purposeful definitions of similarity. To enable efficient processing of the TS-Joins on large sets of trajectories, we develop search space pruning techniques and enable use of the parallel processing capabilities of modern processors. Specifically, we present a two-phase divide-and-conquer search framework that lays the foundation for the algorithms for the Tb-TS-Join and the k-TS-Join that rely on different pruning techniques to achieve efficiency. For each trajectory, the algorithms first find similar trajectories. Then they merge the results to obtain the final result. The algorithms for the two joins exploit different upper and lower bounds on the spatiotemporal trajectory similarity and different heuristic scheduling strategies for search space pruning. Their per-trajectory searches are independent of each other and can be performed in parallel, and the mergings have constant cost. An empirical study with real data offers insight in the performance of the algorithms and demonstrates that they are capable of outperforming well-designed baseline algorithms by an order of magnitude.
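The two-phase framework can be caricatured in a few lines. The Jaccard overlap of visited vertices and the thread pool below are deliberate simplifications standing in for the paper's spatiotemporal similarity measure and its pruning machinery:

```python
from concurrent.futures import ThreadPoolExecutor

def similarity(t1, t2):
    """Toy trajectory similarity: Jaccard overlap of the road-network
    vertices the two trajectories visit (a stand-in for the paper's
    spatiotemporal measure)."""
    a, b = set(t1), set(t2)
    return len(a & b) / len(a | b)

def per_trajectory_search(args):
    """Phase 1: each trajectory independently collects its similar
    partners. The searches share no state, so they run in parallel."""
    t, others, theta = args
    return [(t, o) for o in others if similarity(t, o) >= theta]

def tb_ts_join(P, Q, theta):
    """Threshold-based TS-Join sketch: all cross pairs with similarity
    at least theta. Phase 2 merges the per-trajectory result lists,
    each merge step having constant cost."""
    with ThreadPoolExecutor() as pool:
        partial = pool.map(per_trajectory_search,
                           [(p, Q, theta) for p in P])
    return [pair for chunk in partial for pair in chunk]
```

The top-k variant would replace the threshold filter in phase 1 with a per-trajectory candidate heap and a global merge in phase 2; the independence of the per-trajectory searches is what both joins exploit for parallelism.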
Parallel trajectory similarity joins in spatial networks
Shang, Shuo; Chen, Lisi; Wei, Zhewei; Jensen, Christian S.; Zheng, Kai; Kalnis, Panos
2018-01-01
Hydraulic Profiling of a Parallel Channel Type Reactor Core
International Nuclear Information System (INIS)
Seo, Kyong-Won; Hwang, Dae-Hyun; Lee, Chung-Chan
2006-01-01
An advanced reactor core consisting of closed multiple parallel channels was optimized to maximize the thermal margin of the core. Closed multiple-parallel-channel configurations have different characteristics from the open channels of conventional PWRs. The channels, usually assemblies, are isolated hydraulically from each other and there is no cross flow between channels. The distribution of inlet flow rate between channels is a very important design parameter in such a core, because the inlet flow distribution directly determines the margin for a given thermal-hydraulic parameter. That parameter may be the boiling margin, maximum fuel temperature, or critical heat flux. The inlet flow distribution of the core was optimized for the boiling margins by grouping the inlet orifices into several hydraulic regions. This procedure is called hydraulic profiling
The numerical parallel computing of photon transport
International Nuclear Information System (INIS)
Huang Qingnan; Liang Xiaoguang; Zhang Lifa
1998-12-01
The parallel computing of photon transport is investigated; the parallel algorithm and the parallelization of programs on parallel computers with both shared and distributed memory are discussed. By analyzing the inherent structure of the mathematical and physical model of photon transport in light of the architecture of parallel computers (using a divide-and-conquer strategy, adjusting the algorithmic structure of the program, decoupling data dependencies, identifying parallelizable components, and creating large-grain parallel subtasks), the sequential computation of photon transport is efficiently transformed into parallel and vector computation. The program was run on various parallel computers such as the HY-1 (PVP), the Challenge (SMP) and the YH-3 (MPP), and very good parallel speedup was obtained
International Nuclear Information System (INIS)
Horing, Norman J Morgenstern; Glasser, M Lawrence; Dong Bing
2006-01-01
We carry out a theoretical analysis of quantum well electron dynamics in a parallel magnetic field of arbitrary strength, for a narrow quantum well. An explicit analytical closed-form solution is obtained for the retarded Green's function for Landau-quantized electrons in skipping states of motion between the narrow well walls, effectively involving in-plane translational motion, and hybridized with the zero-field lowest subband energy eigenstate. The dispersion relation for electron eigenstates is examined, and we find a plethora of such discrete Landau-quantized modes coupled to the subband state. In the weak field limit, we determine low magnetic field corrections to the lowest subband state energy associated with close-packing (phase averaging) of the Landau levels in the skipping states. At higher fields the discrete energy levels of the well lie between adjacent Landau levels, but they are not equally spaced, albeit undamped. Furthermore, we also examine the associated thermodynamic Green's function for Landau-quantized electrons in a thin quantum well in a parallel magnetic field and construct the (grand) thermodynamic potential (logarithm of the grand partition function) determining the statistical thermodynamics of the system
Automatic Parallelization Tool: Classification of Program Code for Parallel Computing
Directory of Open Access Journals (Sweden)
Mustafa Basthikodi
2016-04-01
Performance growth of single-core processors came to a halt in the past decade, and further gains have been re-enabled by the introduction of parallelism in processors. Multicore frameworks, along with graphical processing units, have broadly enhanced parallelism. Compilers are being updated to address the challenges of synchronization and threading. Appropriate program and algorithm classifications will greatly benefit software engineers seeking opportunities for effective parallelization. In the present work we investigate current species for the classification of algorithms; related work on classification is discussed, along with a comparison of the issues that challenge classification. A set of algorithms is chosen whose structures match different issues and perform the given tasks. We tested these algorithms using existing automatic species-extraction tools along with the Bones compiler. We added functionality to the existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user-defined types, constants, and mathematical functions. With this, we can retain significant data that is not captured by the original species of algorithms. We implemented the new theories into the tool, enabling automatic characterization of program code.
International Nuclear Information System (INIS)
Coffman, F.
1994-01-01
This section contains the edited transcript of the NRC closing remarks made by Mr. Franklin Coffman (Chief, Human Factors Branch, Office of Nuclear Regulatory Research) and Dr. Cecil Thomas (Deputy Director, Division of Reactor Controls and Human Factors, Office of Nuclear Reactor Regulation). Editing was minimal, limited to correcting grammar and removing extraneous references to microphone volume, etc.
International Nuclear Information System (INIS)
Sedgwick, S.G.
1976-01-01
It has been previously reported that an inducible form of post-replication repair appeared to be required for UV-induced mutagenesis in an uvrA strain of Escherichia coli. It is shown here that the numbers of daughter-strand gaps requiring inducible repair were similar to the numbers calculated to be overlapping one another in opposite daughter chromosomes. An estimation of survival with no repair of these gaps resembled the survival predicted with mutagenesis. It is thus proposed that inducible post-replication repair causes mutagenesis by the repair of overlapping daughter-strand gaps. A general model for induced mutagenesis is presented. It is proposed that (a) some DNA lesions introduced by any DNA-damaging agent may be close enough to interfere with constitutive repair replication of each other, (b) these lesions induce a repair system (SOS repair) which involves the recA+, lexA+ and polC+ genes, and (c) repair, and concomitant mutagenesis, occurs during repair replication by the insertion of mismatched bases opposite the noncoding DNA lesions
Dynamics of parallel robots from rigid bodies to flexible elements
Briot, Sébastien
2015-01-01
This book starts with a short recapitulation of basic concepts, common to all types of robots (serial, tree-structure, parallel, etc.), that are also necessary for computing the dynamic models of parallel robots. Then, as dynamics requires the use of geometry and kinematics, the general equations of the geometric and kinematic models of parallel robots are given. Next, it is explained that parallel robot dynamic models can be obtained by decomposing the real robot into two virtual systems: a tree-structure robot (equivalent to the robot legs, for which all joints would be actuated) plus a free body corresponding to the platform. Thus, the dynamics of rigid tree-structure robots is analyzed and algorithms to obtain their dynamic models in the most compact form are given. The dynamic model of the real rigid parallel robot is obtained by closing the loops through the use of Lagrange multipliers. The problem of dynamic model degeneracy near singularities is treated and optimal trajectory planning for cro...
Fundamental Parallel Algorithms for Private-Cache Chip Multiprocessors
DEFF Research Database (Denmark)
Arge, Lars Allan; Goodrich, Michael T.; Nelson, Michael
2008-01-01
No assumptions are made about the way cores are interconnected, for we assume that all inter-processor communication occurs through the memory hierarchy. We study several fundamental problems, including prefix sums, selection, and sorting, which often form the building blocks of other parallel algorithms. Indeed, we present two sorting algorithms, a distribution sort and a mergesort. Our algorithms are asymptotically optimal in terms of parallel cache accesses and space complexity under reasonable assumptions about the relationships between the number of processors, the size of memory, and the size of cache blocks. In addition, we study sorting lower bounds in a computational model, which we call the parallel external-memory (PEM) model, that formalizes the essential properties of our algorithms for private-cache CMPs.
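A minimal sketch of the blocked prefix-sums pattern underlying such private-cache algorithms. Threads stand in for cores here, and the blocking mirrors the per-processor partitioning rather than a real cache model:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import accumulate

def block_prefix_sums(data, workers=4):
    """Three-phase blocked inclusive prefix sum:
    (1) scan each block independently (one task per "core"),
    (2) scan the block totals to get per-block offsets,
    (3) add each block's offset to its scanned elements."""
    n = len(data)
    size = -(-n // workers)                       # ceiling division
    blocks = [data[i:i + size] for i in range(0, n, size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scanned = list(pool.map(lambda b: list(accumulate(b)), blocks))
    offsets = [0] + list(accumulate(b[-1] for b in scanned))[:-1]
    return [v + off for b, off in zip(scanned, offsets) for v in b]
```

Phase (2) is the only sequential step; its input has one value per processor, which is what makes the overall scan work-efficient.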
Parallel grid generation algorithm for distributed memory computers
Moitra, Stuti; Moitra, Anutosh
1994-01-01
A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and the implementation of multiple levels of parallelism on multiple-instruction multiple-data machines is indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communications. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution-time speed-up ratios are given.
Self-balanced modulation and magnetic rebalancing method for parallel multilevel inverters
Li, Hui; Shi, Yanjun
2017-11-28
A self-balanced modulation method and a closed-loop magnetic flux rebalancing control method for parallel multilevel inverters. The combination of the two methods provides for balancing of the magnetic flux of the inter-cell transformers (ICTs) of the parallel multilevel inverters without deteriorating the quality of the output voltage. In various embodiments a parallel multilevel inverter modulator is provided, including a multi-channel comparator to generate a multiplexed digitized ideal waveform for a parallel multilevel inverter, and a finite state machine (FSM) module coupled to the multi-channel comparator, the FSM module to receive the multiplexed digitized ideal waveform and to generate a pulse-width-modulated gate-drive signal for each switching device of the parallel multilevel inverter. The system and method provide for optimization of the output voltage spectrum without influencing the magnetic balancing.
Efficient Out of Core Sorting Algorithms for the Parallel Disks Model.
Kundeti, Vamsi; Rajasekaran, Sanguthevar
2011-11-01
In this paper we present efficient algorithms for sorting on the Parallel Disks Model (PDM). Numerous asymptotically optimal algorithms have been proposed in the literature. However, many of these merge-based algorithms have large underlying constants in the time bounds, because they suffer from a lack of read parallelism on the PDM. The irregular consumption of the runs during the merge affects the read parallelism and contributes to the increased sorting time. In this paper we first introduce a novel idea called dirty sequence accumulation that improves the read parallelism. Secondly, we show analytically that this idea can reduce the number of parallel I/Os required to sort the input close to the lower bound of [Formula: see text]. We experimentally verify our dirty-sequence idea with the standard R-way merge and show that our idea can significantly reduce the number of parallel I/Os needed to sort on the PDM.
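The irregular run consumption discussed above can be seen in a plain R-way merge; the `consumed` tally below is instrumentation added purely for illustration (a serial sketch, not the paper's PDM algorithm):

```python
import heapq

def r_way_merge(runs):
    """Merge R sorted runs with a min-heap, counting how many elements
    are drawn from each run. On the PDM, uneven draws like these are
    what break read parallelism across the disks holding the runs."""
    heap = [(run[0], r, 0) for r, run in enumerate(runs) if run]
    heapq.heapify(heap)
    out, consumed = [], [0] * len(runs)
    while heap:
        val, r, i = heapq.heappop(heap)
        out.append(val)
        consumed[r] += 1
        if i + 1 < len(runs[r]):
            heapq.heappush(heap, (runs[r][i + 1], r, i + 1))
    return out, consumed
```

The dirty-sequence idea in the abstract amounts to buffering reads so that blocks can be fetched from all disks in parallel even when the merge consumes runs unevenly.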
International Space Station exhibit
2000-01-01
The International Space Station (ISS) exhibit in StenniSphere at John C. Stennis Space Center in Hancock County, Miss., gives visitors an up-close look at the largest international peacetime project in history. Step inside a module of the ISS and glimpse how astronauts will live and work in space. Currently, 16 countries contribute resources and hardware to the ISS. When complete, the orbiting research facility will be larger than a football field.
A novel two-level dynamic parallel data scheme for large 3-D SN calculations
International Nuclear Information System (INIS)
Sjoden, G.E.; Shedlock, D.; Haghighat, A.; Yi, C.
2005-01-01
We introduce a new dynamic parallel memory optimization scheme for executing large scale 3-D discrete ordinates (Sn) simulations on distributed memory parallel computers. In order for parallel transport codes to be truly scalable, they must use parallel data storage, where only the variables that are locally computed are locally stored. Even with parallel data storage for the angular variables, cumulative storage requirements for large discrete ordinates calculations can be prohibitive. To address this problem, Memory Tuning has been implemented into the PENTRAN 3-D parallel discrete ordinates code as an optimized, two-level ('large' array, 'small' array) parallel data storage scheme. Memory Tuning can be described as the process of parallel data memory optimization. Memory Tuning dynamically minimizes the amount of required parallel data in allocated memory on each processor using a statistical sampling algorithm. This algorithm is based on the integral average and standard deviation of the number of fine meshes contained in each coarse mesh in the global problem. Because PENTRAN only stores the locally computed problem phase space, optimal two-level memory assignments can be unique on each node, depending upon the parallel decomposition used (hybrid combinations of angular, energy, or spatial). As demonstrated in the two large discrete ordinates models presented (a storage cask and an OECD MOX Benchmark), Memory Tuning can save a substantial amount of memory per parallel processor, allowing one to accomplish very large scale Sn computations. (authors)
Structural synthesis of parallel robots
Gogu, Grigore
This book represents the fifth part of a larger work dedicated to the structural synthesis of parallel robots. The originality of this work resides in the fact that it combines new formulae for mobility, connectivity, redundancy and overconstraints with evolutionary morphology in a unified structural synthesis approach that yields interesting and innovative solutions for parallel robotic manipulators. This is the first book on robotics that presents solutions for coupled, decoupled, uncoupled, fully-isotropic and maximally regular robotic manipulators with Schönflies motions systematically generated by using the structural synthesis approach proposed in Part 1. Overconstrained non-redundant/overactuated/redundantly actuated solutions with simple/complex limbs are proposed. Many solutions are presented here for the first time in the literature. The author had to make a difficult and challenging choice between protecting these solutions through patents and releasing them directly into the public domain. T...
GPU Parallel Bundle Block Adjustment
Directory of Open Access Journals (Sweden)
ZHENG Maoteng
2017-09-01
To deal with massive data in photogrammetry, we introduce GPU parallel computing technology. The preconditioned conjugate gradient and inexact Newton methods are also applied to decrease the number of iterations needed to solve the normal equation. A brand-new bundle adjustment workflow is developed to utilize GPU parallel computing technology. Our method avoids the storage and inversion of the big normal matrix, and computes the normal matrix in real time. The proposed method not only greatly decreases the memory requirement of the normal matrix, but also greatly improves the efficiency of bundle adjustment. It also achieves the same accuracy as the conventional method. Preliminary experimental results show that the bundle adjustment of a dataset with about 4500 images and 9 million image points can be done in only 1.5 minutes while achieving sub-pixel accuracy.
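The preconditioned conjugate gradient step mentioned above can be sketched on a small dense system. Real bundle adjustment works on large sparse normal equations on the GPU; the Jacobi (diagonal) preconditioner here is an assumption for illustration:

```python
def pcg(A, b, tol=1e-10, max_iter=200):
    """Jacobi-preconditioned conjugate gradient for A x = b,
    A symmetric positive definite. This is the kind of iterative
    normal-equation solve that avoids forming/inverting the full
    normal matrix."""
    n = len(b)
    mv = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = [0.0] * n
    r = b[:]                                    # residual b - A x with x = 0
    z = [r[i] / A[i][i] for i in range(n)]      # Jacobi preconditioner solve
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = mv(A, p)
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) < tol:
            break
        z = [r[i] / A[i][i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x
```

The matrix-vector product `mv` is the part that maps naturally onto GPU parallelism; everything else is cheap vector arithmetic.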
A tandem parallel plate analyzer
International Nuclear Information System (INIS)
Hamada, Y.; Fujisawa, A.; Iguchi, H.; Nishizawa, A.; Kawasumi, Y.
1996-11-01
Through a new modification of the parallel plate analyzer, second-order focus is obtained at an arbitrary injection angle. This kind of analyzer with a small injection angle will have the advantage of a small operating voltage, compared to the Proca and Green analyzer, where the injection angle is 30 degrees. Thus, the newly proposed analyzer will be very useful for the precise energy measurement of high-energy particles in the MeV range. (author)
International Nuclear Information System (INIS)
Gus'kov, B.N.; Kalinnikov, V.A.; Krastev, V.R.; Maksimov, A.N.; Nikityuk, N.M.
1985-01-01
This paper describes a high-speed parallel counter that contains 31 inputs and 15 outputs and is implemented by integrated circuits of series 500. The counter is designed for fast sampling of events according to the number of particles that pass simultaneously through the hodoscopic plane of the detector. The minimum delay of the output signals relative to the input is 43 nsec. The duration of the output signals can be varied from 75 to 120 nsec
An anthropologist in parallel structure
Directory of Open Access Journals (Sweden)
Noelle Molé Liston
2016-08-01
The essay examines the parallels between Molé Liston's studies on labor and precarity in Italy and the United States' anthropology job market. Probing the way economic shifts reshaped the field of the anthropology of Europe in the late 2000s, the piece explores how the neoliberalization of the American academy increased the value of studying the hardships and daily lives of non-western populations in Europe.
Wakefield calculations on parallel computers
International Nuclear Information System (INIS)
Schoessow, P.
1990-01-01
The use of parallelism in the solution of wakefield problems is illustrated for two different computer architectures (SIMD and MIMD). Results are given for finite difference codes which have been implemented on a Connection Machine and an Alliant FX/8 and which are used to compute wakefields in dielectric loaded structures. Benchmarks on code performance are presented for both cases. 4 refs., 3 figs., 2 tabs
Closed cycle electric discharge laser design investigation
Baily, P. K.; Smith, R. C.
1978-01-01
Closed cycle CO2 and CO electric discharge lasers were studied. An analytical investigation assessed scale-up parameters and design features for CO2, closed cycle, continuous wave, unstable resonator, electric discharge lasing systems operating in space and airborne environments. A space based CO system was also examined. The program objectives were the conceptual designs of six CO2 systems and one CO system. Three airborne CO2 designs, with one, five, and ten megawatt outputs, were produced. These designs were based upon five minute run times. Three space based CO2 designs, with the same output levels, were also produced, but based upon one year run times. In addition, a conceptual design for a one megawatt space based CO laser system was also produced. These designs include the flow loop, compressor, and heat exchanger, as well as the laser cavity itself. The designs resulted in a laser loop weight for the space based five megawatt system that is within the space shuttle capacity. For the one megawatt systems, the estimated weight of the entire system including laser loop, solar power generator, and heat radiator is less than the shuttle capacity.
Aspects of computation on asynchronous parallel processors
International Nuclear Information System (INIS)
Wright, M.
1989-01-01
The increasing availability of asynchronous parallel processors has provided opportunities for original and useful work in scientific computing. However, the field of parallel computing is still in a highly volatile state, and researchers display a wide range of opinion about many fundamental questions such as models of parallelism, approaches for detecting and analyzing parallelism of algorithms, and tools that allow software developers and users to make effective use of diverse forms of complex hardware. This volume collects the work of researchers specializing in different aspects of parallel computing, who met to discuss the framework and the mechanics of numerical computing. The far-reaching impact of high-performance asynchronous systems is reflected in the wide variety of topics, which include scientific applications (e.g. linear algebra, lattice gauge simulation, ordinary and partial differential equations), models of parallelism, parallel language features, task scheduling, automatic parallelization techniques, tools for algorithm development in parallel environments, and system design issues
Parallel processing of genomics data
Agapito, Giuseppe; Guzzi, Pietro Hiram; Cannataro, Mario
2016-10-01
The availability of high-throughput experimental platforms for the analysis of biological samples, such as mass spectrometry, microarrays and Next Generation Sequencing, has made it possible to analyze a whole genome in a single experiment. Such platforms produce an enormous volume of data per experiment, and the analysis of this enormous flow of data poses several challenges in terms of data storage, preprocessing, and analysis. To face these issues, efficient, possibly parallel, bioinformatics software needs to be used to preprocess and analyze the data, for instance to highlight genetic variation associated with complex diseases. In this paper we present a parallel algorithm for the preprocessing and statistical analysis of genomics data, able to cope with high-dimensional data and achieve good response times. The proposed system is able to find statistically significant biological markers that discriminate between classes of patients who respond to drugs in different ways. Experiments performed on real and synthetic genomic datasets show good speed-up and scalability.
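A minimal illustration of the per-marker parallel pattern such pipelines rely on. The `marker_score` statistic below is a simplified effect-size-style stand-in for the paper's analysis, and threads stand in for the parallel workers:

```python
from statistics import mean, stdev
from concurrent.futures import ThreadPoolExecutor

def marker_score(values_a, values_b):
    """Standardized difference of class means for one marker
    (a simplified stand-in for a real discrimination statistic)."""
    pooled = (stdev(values_a) + stdev(values_b)) / 2 or 1e-12
    return abs(mean(values_a) - mean(values_b)) / pooled

def score_markers(matrix_a, matrix_b, workers=4):
    """Score each marker (row) independently in parallel. The
    per-marker independence is what makes this preprocessing step
    embarrassingly parallel."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda ab: marker_score(*ab),
                             zip(matrix_a, matrix_b)))
```

Markers with high scores (large, consistent between-class differences) would be the candidates reported as statistically significant after proper testing and multiple-comparison correction.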
Dassau, E; Atlas, E; Phillip, M
2010-02-01
The dream of closing the loop is in fact the dream of creating an artificial pancreas and freeing patients from being involved with the care of their own diabetes. Insulin-dependent diabetes (type 1) is a chronic, incurable disease which requires constant therapy without the possibility of any 'holidays' or insulin-free days. It means that patients have to inject insulin every day of their life, several times per day, and in order to do it safely they also have to measure their blood glucose levels several times per day. Patients need to plan their meals, their physical activities and their insulin regimen; there is very little room for spontaneous activities. This is why the desire for an artificial pancreas is so strong, despite the fact that it will not cure diabetic patients. Attempts to develop a closed-loop system started in the 1960s but never reached a practical clinical stage of development. In recent years the availability of continuous glucose sensors revived those efforts and encouraged clinicians and researchers to believe that closing the loop might now be possible. Many papers have been published over the years describing several different ideas on how to close the loop. Most of the suggested systems have a sensing arm that measures the blood glucose repeatedly or continuously, an insulin delivery arm that injects insulin upon command, and a computer that decides when and how much insulin to deliver. The differences between the various published systems in the literature are mainly in their control algorithms. However, there are also differences related to the method and site of glucose measurement and insulin delivery. Subcutaneous (SC) glucose measurement and insulin delivery are the most studied option, but other combinations of glucose measurement and insulin delivery, including intravascular and intraperitoneal (IP), are explored. We tried to select recent publications that we believe had influenced and inspired people interested
Sun, Degui; Wang, Na-Xin; He, Li-Ming; Weng, Zhao-Heng; Wang, Daheng; Chen, Ray T.
1996-06-01
A space-position-logic-encoding scheme is proposed and demonstrated. This encoding scheme not only makes the best use of the convenience of binary logic operation, but is also suitable for the trinary property of modified signed- digit (MSD) numbers. Based on the space-position-logic-encoding scheme, a fully parallel modified signed-digit adder and subtractor is built using optoelectronic switch technologies in conjunction with fiber-multistage 3D optoelectronic interconnects. Thus an effective combination of a parallel algorithm and a parallel architecture is implemented. In addition, the performance of the optoelectronic switches used in this system is experimentally studied and verified. Both the 3-bit experimental model and the experimental results of a parallel addition and a parallel subtraction are provided and discussed. Finally, the speed ratio between the MSD adder and binary adders is discussed and the advantage of the MSD in operating speed is demonstrated.
Parallel heat transport in integrable and chaotic magnetic fields
Energy Technology Data Exchange (ETDEWEB)
Castillo-Negrete, D. del; Chacon, L. [Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-8071 (United States)
2012-05-15
The study of transport in magnetized plasmas is a problem of fundamental interest in controlled fusion, space plasmas, and astrophysics research. Three issues make this problem particularly challenging: (i) the extreme anisotropy between the parallel (i.e., along the magnetic field), χ∥, and the perpendicular, χ⊥, conductivities (χ∥/χ⊥ may exceed 10^10 in fusion plasmas); (ii) nonlocal parallel transport in the limit of small collisionality; and (iii) magnetic field line chaos, which in general complicates (and may preclude) the construction of magnetic field line coordinates. Motivated by these issues, we present a Lagrangian Green's function method to solve the local and non-local parallel transport equation applicable to integrable and chaotic magnetic fields in arbitrary geometry. The method avoids by construction the numerical pollution issues of grid-based algorithms. The potential of the approach is demonstrated with nontrivial applications to integrable (magnetic island), weakly chaotic (Devil's staircase), and fully chaotic magnetic field configurations. For the latter, numerical solutions of the parallel heat transport equation show that the effective radial transport, with local and non-local parallel closures, is non-diffusive, thus casting doubts on the applicability of quasilinear diffusion descriptions. General conditions for the existence of non-diffusive, multivalued flux-gradient relations in the temperature evolution are derived.
Parallel Breadth-First Search on Distributed Memory Systems
Energy Technology Data Exchange (ETDEWEB)
Computational Research Division; Buluc, Aydin; Madduri, Kamesh
2011-04-15
Data-intensive, graph-based computations are pervasive in several scientific applications, and are known to be quite challenging to implement on distributed memory systems. In this work, we explore the design space of parallel algorithms for Breadth-First Search (BFS), a key subroutine in several graph algorithms. We present two highly-tuned parallel approaches for BFS on large parallel systems: a level-synchronous strategy that relies on a simple vertex-based partitioning of the graph, and a two-dimensional sparse matrix-partitioning-based approach that mitigates parallel communication overhead. For both approaches, we also present hybrid versions with intra-node multithreading. Our novel hybrid two-dimensional algorithm reduces communication times by up to a factor of 3.5, relative to a common vertex-based approach. Our experimental study identifies execution regimes in which these approaches will be competitive, and we demonstrate extremely high performance on leading distributed-memory parallel systems. For instance, for a 40,000-core parallel execution on Hopper, an AMD Magny-Cours based system, we achieve a BFS performance rate of 17.8 billion edge visits per second on an undirected graph of 4.3 billion vertices and 68.7 billion edges with skewed degree distribution.
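The level-synchronous strategy can be sketched serially as follows; in the distributed version the frontier is partitioned across processes, with an implicit barrier between levels:

```python
def level_synchronous_bfs(adj, source):
    """Level-synchronous BFS: process the frontier one level at a time,
    as in the vertex-partitioned distributed algorithm (here serially).
    Returns a dict mapping each reachable vertex to its BFS distance."""
    dist = {source: 0}
    frontier = [source]
    level = 0
    while frontier:
        level += 1
        next_frontier = []
        for u in frontier:                 # in the parallel version, the
            for v in adj[u]:               # frontier is split over ranks
                if v not in dist:
                    dist[v] = level
                    next_frontier.append(v)
        frontier = next_frontier           # barrier: next level starts here
    return dist
```

The frontier exchange at the end of each level is the communication step that the paper's two-dimensional partitioning is designed to shrink.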
Fast robot kinematics modeling by using a parallel simulator (PSIM)
International Nuclear Information System (INIS)
El-Gazzar, H.M.; Ayad, N.M.A.
2002-01-01
High-speed computers are strongly needed not only for solving scientific and engineering problems, but also for numerous industrial applications. Such applications include computer-aided design, oil exploration, weather prediction, space applications and the safety of nuclear reactors. The rapid development in VLSI technology makes it possible to implement time-consuming algorithms in real-time situations. Parallel processing approaches can now be used to reduce the processing time for models of very high mathematical complexity, such as the kinematics modeling of robot manipulators. This system is used to construct and evaluate the performance and cost effectiveness of several proposed methods to solve the Jacobian algorithm. Parallelism is introduced into the algorithms by using different task allocations and dividing the whole job into subtasks. Detailed analysis is performed and results are obtained for the case of six-DOF (degree of freedom) robot arms (Stanford Arm). Execution-time comparisons between Von Neumann (uniprocessor) and parallel processor architectures using the parallel simulator package (PSIM) are presented. The results obtained are much in favour of the parallel techniques, showing improvements of at least fifty percent. Of course, further studies are needed to determine the convenient and optimum number of processors
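One way to picture the task-allocation idea, dividing a Jacobian evaluation into independent per-column subtasks, is sketched below on a planar toy arm. This is not the Stanford Arm model used in the study; the finite-difference columns and the thread pool merely illustrate the sub-task decomposition:

```python
from math import cos, sin
from concurrent.futures import ThreadPoolExecutor

def fk_planar(thetas, lengths=(1.0, 1.0, 1.0)):
    """End-effector (x, y) of a planar serial arm; a toy stand-in
    for the 6-DOF kinematics in the abstract."""
    x = y = acc = 0.0
    for th, l in zip(thetas, lengths):
        acc += th
        x += l * cos(acc)
        y += l * sin(acc)
    return (x, y)

def jacobian_columns_parallel(thetas, h=1e-6):
    """Numeric Jacobian, one finite-difference column per task; each
    column is an independent sub-task, with threads standing in for
    processors."""
    base = fk_planar(thetas)
    def column(j):
        bumped = list(thetas)
        bumped[j] += h
        fx, fy = fk_planar(bumped)
        return [(fx - base[0]) / h, (fy - base[1]) / h]
    with ThreadPoolExecutor() as pool:
        cols = list(pool.map(column, range(len(thetas))))
    # transpose the column list into a 2 x n Jacobian
    return [[c[i] for c in cols] for i in range(2)]
```

Because the columns share no state, speed-up depends only on scheduling overhead, which is exactly the trade-off a simulator like PSIM is used to measure.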
Modelling and parallel calculation of a kinetic boundary layer
International Nuclear Information System (INIS)
Perlat, Jean Philippe
1998-01-01
This research thesis aims at addressing reliability and cost issues in the calculation by numerical simulation of flows in the transition regime. The first step has been to reduce the calculation cost and memory footprint of the Monte Carlo method, which is known to provide performance and reliability for rarefied regimes. Vector and parallel computers allow this objective to be reached. Here, a MIMD (multiple instructions, multiple data) machine has been used, which implements parallel calculation at different levels of parallelization. Parallelization procedures have been adapted, and results showed that parallelization by calculation-domain decomposition was far more efficient. Due to the reliability issue related to the statistical nature of Monte Carlo methods, a new deterministic model was necessary to simulate gas molecules in the transition regime. New models and hyperbolic systems have therefore been studied. One is chosen which allows the thermodynamic values (density, average velocity, temperature, deformation tensor, heat flow) present in the Navier-Stokes equations to be determined, and the equations of evolution of the thermodynamic values are described for the mono-atomic case; their numerical resolution is reported. A kinetic scheme is developed which complies with the structure of all the systems, and which naturally expresses boundary conditions. The validation of the obtained 14-moment model is performed on shock problems and on Couette flows [fr]
Development of parallel Fokker-Planck code ALLAp
International Nuclear Information System (INIS)
Batishcheva, A.A.; Sigmar, D.J.; Koniges, A.E.
1996-01-01
We report on our ongoing development of the 3D Fokker-Planck code ALLA for a highly collisional scrape-off-layer (SOL) plasma. A SOL with strong gradients of density and temperature in the spatial dimension is modeled. Our method is based on a 3-D adaptive grid (in space, magnitude of the velocity, and cosine of the pitch angle) and a second order conservative scheme. Note that the grid size is typically 100 x 257 x 65 nodes. It was shown in our previous work that only these capabilities make it possible to benchmark a 3D code against a spatially-dependent self-similar solution of a kinetic equation with the Landau collision term. In the present work we show results of a more precise benchmarking against the exact solutions of the kinetic equation using a new parallel code ALLAp with an improved method of parallelization and a modified boundary condition at the plasma edge. We also report first results from the code parallelization using Message Passing Interface for a Massively Parallel CRI T3D platform. We evaluate the ALLAp code performance versus the number of T3D processors used and compare its efficiency against a Work/Data Sharing parallelization scheme and a workstation version
International Nuclear Information System (INIS)
Kang, Myeong Gie
2012-01-01
It is important to find a way of enhancing heat transfer coefficients if the space for heat exchanger installation is limited, as it is in advanced light water reactors. One effective method of increasing the heat transfer coefficient (h_b) of pool boiling is to use a confined space. It is well known from the literature that confined boiling is an effective technique to enhance heat transfer. Once the flow inlet at the tube bottom is closed, a very rapid increase in the heat transfer coefficient is observed at low heat fluxes (q'). A similar tendency is observed regardless of the geometric shape. Yao and Chang and Kang investigated a vertical annulus, while Rops et al. investigated a confined plate. Fujita et al., in contrast, used parallel plates in which side and bottom inflow was restricted. Around the upper region of an annulus with a closed bottom, the downward liquid interrupts the upward movement of the bubble slugs. Thereafter, bubbles coalesce into much bigger bubbles while fluctuating up and down in the annular space. As the heat flux increases, (1) the isolated-bubble region, (2) the coalesced big-bubble region, and (3) the dryout region are observed in series. The major causes of the heat transfer enhancement are related to liquid film evaporation and active liquid agitation. A literature review of previous studies on crevice effects in pool boiling shows that heat transfer is highly dependent on the geometric parameters. Therefore, it is necessary to quantify the effect of each geometric parameter to estimate heat transfer coefficients accurately. Although some correlations have been developed to predict pool boiling heat transfer in confined spaces with open bottoms, applying them to a confined space with closed bottoms could result in large errors. To overcome the limits of the published correlations, Kang developed a correlation to predict pool boiling heat transfer in annuli with closed bottoms. However, the
A parallel implementation of 3D Zernike moment analysis
Berjón Díez, Daniel; Arnaldo Duart, Sergio; Morán Burgos, Francisco
2011-01-01
Zernike polynomials are a well-known set of functions that find many applications in image or pattern characterization because they allow the construction of shape descriptors that are invariant against translations, rotations or scale changes. The concepts behind them can be extended to higher-dimensional spaces, making them also fit to describe volumetric data. They have been less used than their properties might suggest due to their high computational cost. We present a parallel implementation of 3...
Directory of Open Access Journals (Sweden)
Piotr Bała
2001-01-01
Full Text Available After at least a decade of parallel tool development, parallelization of scientific applications remains a significant undertaking. Typically parallelization is a specialized activity supported only partially by the programming tool set, with the programmer involved with parallel issues in addition to sequential ones. The details of concern range from algorithm design down to low-level data movement details. The aim of parallel programming tools is to automate the latter without sacrificing performance and portability, allowing the programmer to focus on algorithm specification and development. We present our use of two similar parallelization tools, Pfortran and Cray's Co-Array Fortran, in the parallelization of the GROMOS96 molecular dynamics module. Our parallelization started from the GROMOS96 distribution's shared-memory implementation of the replicated algorithm, but used little of that existing parallel structure. Consequently, our parallelization was close to starting with the sequential version. We found the intuitive extensions to Pfortran and Co-Array Fortran helpful in the rapid parallelization of the project. We present performance figures for both the Pfortran and Co-Array Fortran parallelizations showing linear speedup within the range expected by these parallelization methods.
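The SPMD style that Pfortran and Co-Array Fortran express can be sketched with a simple block decomposition of the iteration space, of the kind a replicated-algorithm parallelization uses. The snippet below is a minimal illustration in Python, not GROMOS96 code; `block_range` is a hypothetical helper and the final sum stands in for a global reduction.

```python
def block_range(n_items: int, n_images: int, image: int) -> range:
    """Half-open index range owned by process `image` (0-based),
    distributing any remainder one item at a time to the lowest ranks."""
    base, extra = divmod(n_items, n_images)
    start = image * base + min(image, extra)
    stop = start + base + (1 if image < extra else 0)
    return range(start, stop)

# Each "image" sums only its own block of the data; a real SPMD run
# would then combine the partial sums with a global reduction.
data = list(range(100))
partial_sums = [sum(data[i] for i in block_range(100, 4, img)) for img in range(4)]
total = sum(partial_sums)  # stands in for the cross-image reduction
```

The same owner-computes pattern underlies both tools: each process loops only over its block, and communication is confined to the reduction step.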
Optimization analysis of swing check valve closing induced water hammer
International Nuclear Information System (INIS)
Han Wenwei; Han Weishi; Guo Qing; Wang Xin; Liu Chunyu
2014-01-01
A mathematical-physical model of a double-pump parallel feed system was constructed. The water hammer formed during the closing process of a swing check valve was precisely calculated, and a systematic analysis was carried out to determine the influence of the torques from both the valve plate and the damping torsion spring on the valve-closing-induced water hammer. The results show that the swing check valve distinctly produces water hammer during the closing procedure in the double-pump parallel feed water system. The torque of the valve plate can partly reduce the water hammer effect, and selecting appropriate valve-plate materials and an appropriate spring can effectively relieve the harm of water hammer. (authors)
Asten, Michael W.; Boore, David M.
2005-01-01
Shear-wave velocities within several hundred meters of Earth's surface are important in specifying earthquake ground motions for engineering design. Not only are the shear-wave velocities used in classifying sites for use of modern building codes, but they are also used in site-specific studies of particularly significant structures. Many are the methods for estimating sub-surface shear-wave velocities, but few are the blind comparisons of a number of the methods at a single site. The word "blind" is important here and means that the measurements and interpretations are done completely independently of one another. Stephen Hartzell of the USGS office in Golden, Colorado realized that such an experiment would be very useful for assessing the strengths and weaknesses of the various methods, and he and Jack Boatwright of the USGS office in Menlo Park, California, in cooperation with Carl Wentworth of the Menlo Park USGS office, found a convenient site in the city of San Jose, California. The site had good access and space for conducting experiments, and a borehole drilled to several hundred meters by the Santa Clara Valley Water District was made available for downhole logging. Jack Boatwright asked David Boore to coordinate the experiment. In turn, David Boore persuaded several teams to make measurements, helped with the local logistics, collected the results, and organized and conducted an International Workshop in May 2004. At this meeting the participants in the experiment gathered in Menlo Park to describe their measurements and interpretations, and to see the results of the comparisons of the various methods for the first time. This Open-File Report describes the results of that workshop. One of the participants, Michael Asten, offered to help the coordinator prepare this report. Because of his lead role in pulling the report together, Dr. Asten is the lead author of the paper to follow and is also the lead Compiler for the Open-File Report. It is important to
Behaviour of parallel girders stabilised with U-frames
DEFF Research Database (Denmark)
Virdi, Kuldeep; Azzi, Walid
2010-01-01
Lateral torsional buckling is a key factor in the design of steel girders. Stability can be enhanced by cross-bracing, reducing the effective length and thus increasing the ultimate capacity. U-frames are an option often used to brace the girders when designing through-type bridges and where...... overhead bracing is not practical. This paper investigates the effect of the U-frame spacing on the stability of the parallel girders. Eigenvalue buckling analysis was undertaken with four different spacings of the U-frames. Results were extracted from finite element analysis, interpreted and conclusions...
Marginal Assessment of Crowns by the Aid of Parallel Radiography
Directory of Open Access Journals (Sweden)
Farnaz Fattahi
2015-03-01
Full Text Available Introduction: Marginal adaptation is the most critical item in the long-term prognosis of single crowns. This study aimed to assess the marginal quality, as well as the discrepancies in marginal integrity, of some PFM single crowns of posterior teeth by employing parallel radiography in Shiraz Dental School, Shiraz, Iran. Methods: In this descriptive study, parallel radiographs were taken of 200 fabricated PFM single crowns of posterior teeth after cementation and before discharging the patient. To calculate the magnification of the images, a 4 mm metallic sphere was placed in the direction of the crown margin on the occlusal surface. Thereafter, the horizontal and vertical space between the crown margins and the margin of the preparations, and also the vertical space between the crown margin and the bone crest, were measured using digital radiological software. Results: Analysis of the data by descriptive statistics revealed that 75.5% and 60% of the cases had more than the acceptable space (50 µm) in the vertical (130±20 µm) and horizontal (90±15 µm) dimensions, respectively. Moreover, 85% of patients were found to have either a horizontal or a vertical gap. In 77% of cases, the margins of the crowns invaded the biologic width on the mesial surface, and in 70% on the distal surface. Conclusion: Parallel radiography can be expedient at the stage of framework try-in to yield important information that cannot be obtained by routine clinical evaluations and may improve the treatment prognosis
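The magnification correction that the 4 mm reference sphere enables can be sketched as follows. This is an illustrative calculation, not the study's software; `true_size_um` is a hypothetical helper, and the example numbers are assumptions chosen only to match the reported 130 µm vertical gap.

```python
def true_size_um(measured_um: float, sphere_image_mm: float,
                 sphere_true_mm: float = 4.0) -> float:
    """Convert an on-image measurement (µm) to true size, using the known
    4 mm reference sphere to estimate the radiographic magnification."""
    magnification = sphere_image_mm / sphere_true_mm
    return measured_um / magnification

# If the 4 mm sphere images at 5 mm (1.25x magnification), an on-image gap
# of 162.5 µm corresponds to a true gap of 130 µm -- above the 50 µm
# acceptability threshold cited in the abstract.
gap = true_size_um(162.5, 5.0)
```

Dividing every on-image distance by the same magnification factor is what lets measurements taken from a parallel radiograph be compared against an absolute clinical threshold.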
Overview of the Force Scientific Parallel Language
Directory of Open Access Journals (Sweden)
Gita Alaghband
1994-01-01
Full Text Available The Force parallel programming language, designed for large-scale shared-memory multiprocessors, is presented. The language provides a number of parallel constructs as extensions to ordinary Fortran and is implemented as a two-level macro preprocessor to support portability across shared-memory multiprocessors. The global parallelism model on which the Force is based provides a powerful parallel language. The parallel constructs, generic synchronization, and freedom from process management supported by the Force have resulted in structured parallel programs that are ported to the many multiprocessors on which the Force is implemented. Two new parallel constructs for looping and functional decomposition are discussed. Several programming examples illustrating parallel programming approaches using the Force are also presented.
Automatic Loop Parallelization via Compiler Guided Refactoring
DEFF Research Database (Denmark)
Larsen, Per; Ladelsky, Razya; Lidman, Jacob
For many parallel applications, performance relies not on instruction-level parallelism, but on loop-level parallelism. Unfortunately, many modern applications are written in ways that obstruct automatic loop parallelization. Since we cannot identify sufficient parallelization opportunities...... for these codes in a static, off-line compiler, we developed an interactive compilation feedback system that guides the programmer in iteratively modifying application source, thereby improving the compiler’s ability to generate loop-parallel code. We use this compilation system to modify two sequential...... benchmarks, finding that the code parallelized in this way runs up to 8.3 times faster on an octo-core Intel Xeon 5570 system and up to 12.5 times faster on a quad-core IBM POWER6 system. Benchmark performance varies significantly between the systems. This suggests that semi-automatic parallelization should...
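The quoted 8.3x and 12.5x figures are simple ratios of sequential to parallel run time. The snippet below shows the calculation; the timings are invented so that the ratios reproduce the quoted speedups, and `speedup` is a hypothetical helper, not part of the described compilation system.

```python
def speedup(t_serial: float, t_parallel: float) -> float:
    """Speedup as the ratio of sequential to parallel wall-clock time."""
    return t_serial / t_parallel

# Invented timings (seconds) chosen to reproduce the quoted ratios.
runs = {"octo-core Xeon 5570": (83.0, 10.0),
        "quad-core POWER6": (125.0, 10.0)}
for system, (ts, tp) in runs.items():
    print(f"{system}: {speedup(ts, tp):.1f}x faster")
```

That the two systems yield such different ratios for the same transformed source is exactly the benchmark-variation point the abstract makes.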
Parallel kinematics type, kinematics, and optimal design
Liu, Xin-Jun
2014-01-01
Parallel Kinematics: Type, Kinematics, and Optimal Design presents the results of 15 years' research on parallel mechanisms and parallel kinematics machines. This book covers the systematic classification of parallel mechanisms (PMs) and provides a large number of mechanical architectures of PMs available for use in practical applications. It focuses on the kinematic design of parallel robots. One successful application of parallel mechanisms in the field of machine tools, also called parallel kinematics machines, has been the emerging trend in advanced machine tools. The book describes not only the main aspects and important topics in parallel kinematics, but also novel concepts and approaches, i.e. type synthesis based on evolution, performance evaluation and optimization based on screw theory, a singularity model taking into account motion and force transmissibility, and others. This book is intended for researchers, scientists, engineers and postgraduates or above with interes...
Applied Parallel Computing Industrial Computation and Optimization
DEFF Research Database (Denmark)
Madsen, Kaj; Olesen, Dorte
Proceedings of the Third International Workshop on Applied Parallel Computing in Industrial Problems and Optimization (PARA96)
International Nuclear Information System (INIS)
Hutcheson, R.C.
1992-01-01
In this paper, a representative of the Oil Companies' European Organization for Environmental and Health Protection (CONCAWE) argues the advantages of closing the gasoline system. Because this decouples the product from the environment, health risks and environmental damage are reduced. It is also more effective than changing the composition of gasoline because it offers better cost effectiveness, energy efficiency and the minimization of carbon dioxide release into the environment. However, it will take time and political will before all European vehicles are fitted with three-way catalysts and carbon canisters, and control systems to monitor such equipment will also need to be set up. Nevertheless, CONCAWE recommends its adoption. (UK)
International Nuclear Information System (INIS)
Wolfe, B.; Judson, B.F.
1984-01-01
The possibilities for closing the fuel cycle in today's nuclear climate in the US are compared with those envisioned in 1977. Reprocessing, the fast breeder reactor program, and the uranium supply are discussed. The conclusion drawn is that the nuclear world is less healthy and less stable than the one previously envisioned, and that the major task before the international nuclear community is to develop technologies, institutions, and accepted procedures that will make it possible to economically provide the huge store of energy from reprocessing and the breeder that, it appears, the world will desperately need
GPGPU Parallel SPIN Model Checker
National Aeronautics and Space Administration — Model Checking is a powerful technique used to verify that a system does not violate its intended behavior. While this is very useful in proving the robustness of a...
Parallel algorithms and cluster computing
Hoffmann, Karl Heinz
2007-01-01
This book presents major advances in high performance computing as well as major advances due to high performance computing. It contains a collection of papers in which results achieved in the collaboration of scientists from computer science, mathematics, physics, and mechanical engineering are presented. From the science problems to the mathematical algorithms and on to the effective implementation of these algorithms on massively parallel and cluster computers we present state-of-the-art methods and technology as well as exemplary results in these fields. This book shows that problems which seem superficially distinct become intimately connected on a computational level.
Parallel computation of rotating flows
DEFF Research Database (Denmark)
Lundin, Lars Kristian; Barker, Vincent A.; Sørensen, Jens Nørkær
1999-01-01
This paper deals with the simulation of 3‐D rotating flows based on the velocity‐vorticity formulation of the Navier‐Stokes equations in cylindrical coordinates. The governing equations are discretized by a finite difference method. The solution is advanced to a new time level by a two‐step process...... is that of solving a singular, large, sparse, over‐determined linear system of equations, and the iterative method CGLS is applied for this purpose. We discuss some of the mathematical and numerical aspects of this procedure and report on the performance of our software on a wide range of parallel computers...
Ionic Liquids Enabling Revolutionary Closed-Loop Life Support
National Aeronautics and Space Administration — The innovation is to utilize ionic liquids with the Bosch process to achieve closed-loop life support. Specific tasks are to: 1) Advance the technology readiness of...
Energy Technology Data Exchange (ETDEWEB)
Lawrence, Albion
2001-07-25
We study the physics of open strings in bosonic and type II string theories in the presence of unstable D-branes. When the potential energy of the open string tachyon is at its minimum, Sen has argued that only closed strings remain in the perturbative spectrum. We explore the scenario of Yi and of Bergman, Hori and Yi, who argue that the open string degrees of freedom are strongly coupled and disappear through confinement. We discuss arguments using open string field theory and worldsheet boundary RG flows, which seem to indicate otherwise. We then describe a solitonic excitation of the open string tachyon and gauge field with the charge and tension of a fundamental closed string. This requires a double scaling limit where the tachyon is taken to its minimal value and the electric field is taken to its maximum value. The resulting flux tube has an unconstrained spatial profile; and for large fundamental string charge, it appears to have light, weakly coupled open strings living in the core. We argue that the flux tube acquires a size of order α′ through sigma model and string coupling effects; and we argue that confinement effects make the light degrees of freedom heavy and strongly interacting.