Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas
2013-01-01
Research and diagnosis in medicine and biology often require the assessment of a large amount of microscopy image data. Although digital pathology and new bioimaging technologies are finding their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis are still open. In this study, we address the problem of fitting the parameters in a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as genetic algorithms or coordinate descent, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as they can show several local performance maxima. Hence, optimization strategies that cannot escape local performance maxima, such as the hill-climbing algorithm, often terminate in a sub-optimal local maximum.
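The coordinate-descent strategy named above can be sketched in a few lines. The objective below is a synthetic stand-in for a real segmentation score (e.g. a Dice overlap against a ground-truth labeling), and the two parameter names are hypothetical:

```python
import numpy as np

def segmentation_score(params):
    """Stand-in for a pipeline quality measure (e.g. Dice overlap against
    ground truth); here a synthetic objective peaked at (0.4, 2.0)."""
    threshold, smoothing = params
    return -((threshold - 0.4) ** 2 + (smoothing - 2.0) ** 2)

def coordinate_descent(score, start, grids, sweeps=5):
    """Optimize one parameter at a time over a discrete grid while
    holding the others fixed -- one of the strategies named above."""
    best = list(start)
    for _ in range(sweeps):
        for i, grid in enumerate(grids):
            trials = []
            for v in grid:
                cand = list(best)
                cand[i] = v
                trials.append((score(cand), v))
            best[i] = max(trials)[1]   # keep the best value along axis i
    return best, score(best)

grids = [np.linspace(0.0, 1.0, 51), np.linspace(0.0, 5.0, 51)]
best, best_score = coordinate_descent(segmentation_score, [0.5, 0.5], grids)
```

On this single-peak objective the sweep converges immediately; the abstract's point is precisely that real segmentation objectives are multi-peaked, where such axis-wise search can stall in a local maximum.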
Dynamics of a neuron model in different two-dimensional parameter-spaces
Rech, Paulo C.
2011-03-01
We report some two-dimensional parameter-space diagrams numerically obtained for the multi-parameter Hindmarsh-Rose neuron model. Several different parameter planes are considered, and we show that, regardless of the combination of parameters, a typical scenario is preserved: for every choice of two parameters, the parameter space presents a comb-shaped chaotic region immersed in a large periodic region. We also show that there exist regions close to these chaotic regions, separated by the comb teeth, organized in period-adding bifurcation cascades.
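Computing such a diagram for the Hindmarsh-Rose flow requires long numerical integrations; as an illustrative stand-in (not the model used in the paper), the same kind of parameter-plane diagram can be sketched for the Hénon map, coloring each cell by the largest Lyapunov exponent (positive for chaos, negative for periodicity):

```python
import numpy as np

def largest_lyapunov(a, b, n_iter=5000, n_skip=500):
    """Largest Lyapunov exponent of the Henon map
    (x, y) -> (1 - a x^2 + y, b x), via the Benettin tangent-vector method."""
    x, y = 0.1, 0.1
    v = np.array([1.0, 0.0])
    acc = 0.0
    for n in range(n_iter):
        J = np.array([[-2.0 * a * x, 1.0], [b, 0.0]])  # Jacobian at (x, y)
        x, y = 1.0 - a * x * x + y, b * x
        if abs(x) > 1e6:
            return np.inf                              # orbit escaped
        v = J @ v
        norm = np.linalg.norm(v)
        v /= norm
        if n >= n_skip:                                # discard the transient
            acc += np.log(norm)
    return acc / (n_iter - n_skip)

# Colour a small (a, b) grid by the exponent: positive = chaotic cell,
# negative = periodic cell, inf = escaping orbit.
a_vals = np.linspace(0.8, 1.4, 13)
b_vals = np.linspace(0.1, 0.3, 5)
diagram = np.array([[largest_lyapunov(a, b) for a in a_vals] for b in b_vals])
```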
Displacement in the parameter space versus spurious solution of discretization with large time step
International Nuclear Information System (INIS)
Mendes, Eduardo; Letellier, Christophe
2004-01-01
In order to investigate a possible correspondence between differential and difference equations, it is important to study discretizations of ordinary differential equations. It is well known that when differential equations are discretized, the solution thus obtained depends on the time step used. In the majority of cases, such a solution is considered spurious when it does not resemble the expected solution of the differential equation. This often happens when the time step taken into consideration is too large. In this work, we show that, even for quite large time steps, some solutions which do not correspond to the expected ones are still topologically equivalent to solutions of the original continuous system if a displacement in the parameter space is considered. To reduce such a displacement, a judicious choice of the discretization scheme should be made. To this end, a recent discretization scheme based on the Lie expansion of the original differential equations, proposed by Monaco and Normand-Cyrot, is analysed. Such a scheme is shown to be sufficient for providing an adequate discretization for quite large time steps compared to the pseudo-period of the underlying dynamics.
Dynamics in the Parameter Space of a Neuron Model
Rech, Paulo C.
2012-06-01
Some two-dimensional parameter-space diagrams are numerically obtained by considering the largest Lyapunov exponent for a four-dimensional thirteen-parameter Hindmarsh-Rose neuron model. Several different parameter planes are considered, and it is shown that depending on the combination of parameters, a typical scenario can be preserved: for some choice of two parameters, the parameter plane presents a comb-shaped chaotic region embedded in a large periodic region. It is also shown that there exist regions close to these comb-shaped chaotic regions, separated by the comb teeth, organizing themselves in period-adding bifurcation cascades.
Large size space construction for space exploitation
Kondyurin, Alexey
2016-07-01
Space exploitation is impossible without large space structures. We need sufficiently large volumes of pressurized protective frames for crew, passengers, space-processing equipment, etc. We have to be unlimited in space. At present, the size and mass of space constructions are limited by the capabilities of launch vehicles, which limits the human exploitation of space and the development of space industry. Large-size space constructions can be made using the curing technology of fiber-filled composites with a reactive matrix, applied directly in free space. For curing, the fabric impregnated with a liquid matrix (prepreg) is prepared under terrestrial conditions and shipped in a container to orbit. In due time the prepreg is unfolded by inflating. After the polymerization reaction, the durable construction can be fitted out with air, apparatus, and life-support systems. Our experimental studies of the curing processes in a simulated free-space environment showed that the curing of composites in free space is possible, and that large-size space constructions can be developed. Projects for a space station, Moon base, Mars base, mining station, interplanetary spaceship, telecommunication station, space observatory, space factory, antenna dish, radiation shield, and solar sail are proposed and overviewed. The study was supported by the Humboldt Foundation, ESA (contract 17083/03/NL/SFe), the NASA stratospheric balloon program, and RFBR grants (05-08-18277, 12-08-00970 and 14-08-96011).
A Tool for Parameter-space Explorations
Murase, Yohsuke; Uchitane, Takeshi; Ito, Nobuyasu
Software for managing simulation jobs and results, named "OACIS", is presented. It controls a large number of simulation jobs executed on various remote servers, keeps the results in an organized way, and manages the analyses of these results. The software has a web-browser front end, and users can easily submit various jobs to appropriate remote hosts from a web browser. After these jobs are finished, all the result files are automatically downloaded from the computational hosts and stored in a traceable way, together with logs of the date, host, and elapsed time of each job. Some visualization functions are also provided so that users can easily grasp an overview of results distributed over a high-dimensional parameter space. Thus, OACIS is especially beneficial for complex simulation models with many parameters, for which a large number of parameter searches are required. By using the OACIS API, it is easy to write code that automates parameter selection depending on previous simulation results. A few examples of such automated parameter selection are also demonstrated.
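The OACIS API itself is Ruby-based; as a language-neutral illustration of the kind of result-driven automation described above (not the actual OACIS API), the sketch below repeatedly samples a parameter range, keeps the best-scoring point, and zooms in around it. The objective is a hypothetical stand-in for a remote simulation job:

```python
def run_simulation(param):
    """Stand-in for a job submitted to a remote host; returns a score.
    A hypothetical quadratic objective peaked at param = 3.7."""
    return -(param - 3.7) ** 2

def adaptive_search(lo, hi, rounds=8, samples=9):
    """Sample evenly, keep the best point, and shrink the search window
    around it -- the kind of automated, result-driven parameter
    selection an OACIS-like API makes easy to script."""
    best = lo
    for _ in range(rounds):
        step = (hi - lo) / (samples - 1)
        params = [lo + i * step for i in range(samples)]
        results = {p: run_simulation(p) for p in params}
        best = max(results, key=results.get)
        width = (hi - lo) / 4.0
        lo, hi = best - width, best + width   # zoom in around the best point
    return best

best = adaptive_search(0.0, 10.0)
```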
Free flight in parameter space
DEFF Research Database (Denmark)
Dahlstedt, Palle; Nilsson, Per Anders
2008-01-01
The well-known difficulty of controlling many synthesis parameters in performance, for exploration and expression, is addressed. Inspired by interactive evolution, random vectors in parameter space are assigned to an array of pressure-sensitive pads. Vectors are scaled with pressure and added to define the current point in parameter space. Vectors can be scaled globally, allowing exploration of the whole space or minute timbral expression. The vector origin can be shifted at any time, allowing exploration of subspaces. In essence, this amounts to mutation-based interactive evolution with continuous interpolation between population members. With a suitable sound engine, the system forms a surprisingly expressive performance instrument, used by the electronic free impro duo pantoMorf in concerts and recording sessions over the last year.
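A minimal sketch of the pad-to-parameter mapping described above; the pad count, parameter count, and scaling constants are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

n_pads, n_params = 8, 16                            # hypothetical sizes
pad_vectors = rng.normal(size=(n_pads, n_params))   # one random direction per pad
origin = np.full(n_params, 0.5)                     # current centre of exploration

def synth_point(pressures, global_scale=0.05):
    """Map pad pressures (0..1) to a point in parameter space:
    pressure-scaled vectors are summed and added to the origin."""
    offset = global_scale * pressures @ pad_vectors
    return np.clip(origin + offset, 0.0, 1.0)       # keep parameters in range

point = synth_point(np.array([1.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.0, 0.0]))
```

Shifting `origin` to the current `point` corresponds to the origin-shift gesture in the text, and `global_scale` plays the role of the global vector scaling.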
The seesaw space, a vector space to identify and characterize large-scale structures at 1 AU
Lara, A.; Niembro, T.
2017-12-01
We introduce the seesaw space, an orthonormal space formed by the local and the global fluctuations of any of the four basic solar-wind parameters: velocity, density, magnetic field, and temperature, at any heliospheric distance. The fluctuations compare the standard deviation of a three-hour moving average against the running average of the parameter over a month (considered the local fluctuations) and over a year (the global fluctuations). We created this vector space to identify the arrival of transients at any spacecraft without the need of a human observer. We applied our method to the one-minute resolution data of the WIND spacecraft from 1996 to 2016. To study the behavior of the seesaw norms in terms of the solar cycle, we computed annual histograms and fitted piecewise functions formed by two log-normal distributions, and observed that one of the distributions is due to large-scale structures while the other is due to the ambient solar wind. The norm values at which the piecewise functions change vary with the solar cycle. We compared the seesaw norms of each of the basic parameters due to the arrival of coronal mass ejections, co-rotating interaction regions, and sector boundaries reported in the literature. High seesaw norms are due to large-scale structures. We found three critical values of the norms that can be used to determine the arrival of coronal mass ejections. We also present general comparisons of the norms during the two maxima and the minimum of the solar cycle, and the differences in the norms due to large-scale structures in each period.
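The local-versus-global fluctuation idea can be sketched on synthetic data. The window lengths below are sample counts standing in for the three-hour and monthly averages, and the step-like jump is a crude stand-in for an ICME arrival; none of the numeric choices come from the paper:

```python
import numpy as np

def moving_mean(x, w):
    """Trailing moving average with window w (in samples)."""
    c = np.cumsum(np.insert(x, 0, 0.0))
    return (c[w:] - c[:-w]) / w

def seesaw_norm(x, w_short, w_ref):
    """Fluctuation of the short-window mean about a long reference mean,
    expressed as a single relative amplitude (an analog of one seesaw axis)."""
    short = moving_mean(x, w_short)
    ref = moving_mean(x, w_ref)
    n = min(len(short), len(ref))          # align trailing windows
    short, ref = short[-n:], ref[-n:]
    return np.std(short - ref) / np.mean(np.abs(ref))

# synthetic solar-wind speed: quiet background plus a step-like transient
rng = np.random.default_rng(7)
quiet = 400.0 + 5.0 * rng.normal(size=2000)
transient = quiet.copy()
transient[1000:1200] += 250.0              # an ICME-like jump

local_quiet = seesaw_norm(quiet, w_short=18, w_ref=720)
local_event = seesaw_norm(transient, w_short=18, w_ref=720)
```

As in the abstract, the transient drives the norm far above the quiet-wind baseline, which is what makes a threshold-based arrival criterion possible.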
Impact of large-scale tides on cosmological distortions via redshift-space power spectrum
Akitsu, Kazuyuki; Takada, Masahiro
2018-03-01
Although large-scale perturbations beyond a finite-volume survey region are not direct observables, they affect measurements of the clustering statistics of small-scale (subsurvey) perturbations in large-scale structure, compared with the ensemble average, via the mode-coupling effect. In this paper we show that a large-scale tide induced by scalar perturbations causes apparent anisotropic distortions in the redshift-space power spectrum of galaxies, in a way that depends on the alignment between the tide, the wave vector of the small-scale modes, and the line-of-sight direction. Using the perturbation theory of structure formation, we derive a response function of the redshift-space power spectrum to the large-scale tide. We then investigate the impact of the large-scale tide on the estimation of cosmological distances and the redshift-space distortion parameter from the measured redshift-space power spectrum for a hypothetical large-volume survey, based on the Fisher matrix formalism. To do this, we treat the large-scale tide as a signal, rather than an additional source of statistical error, and show that the degradation in the parameter constraints is recovered if we can employ a prior on the rms tide amplitude expected for the standard cold dark matter (CDM) model. We also discuss whether the large-scale tide can be constrained to an accuracy better than the CDM prediction if effects up to larger wave numbers in the nonlinear regime can be included.
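The effect of treating a degenerate amplitude as a signal with a prior can be illustrated with a toy 2x2 Fisher matrix; all numbers below are invented, chosen only to produce a strong degeneracy:

```python
import numpy as np

# Toy Fisher matrix for (distortion parameter, tide amplitude)
F = np.array([[100.0, 90.0],
              [90.0, 100.0]])

def marginal_error(F, i):
    """1-sigma marginalized error on parameter i: sqrt((F^-1)_ii)."""
    return float(np.sqrt(np.linalg.inv(F)[i, i]))

err_no_prior = marginal_error(F, 0)

# A Gaussian prior on the tide amplitude (e.g. its rms predicted by CDM)
# adds 1/sigma_prior^2 to the corresponding diagonal Fisher entry.
sigma_prior = 0.05
F_prior = F.copy()
F_prior[1, 1] += 1.0 / sigma_prior ** 2
err_with_prior = marginal_error(F_prior, 0)
```

The prior breaks the degeneracy and shrinks the marginalized error on the first parameter back toward its unmarginalized value, which is the mechanism of the "restored degradation" described above.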
MFV Reductions of MSSM Parameter Space
AbdusSalam, S.S.; Quevedo, F.
2015-01-01
The 100+ free parameters of the minimal supersymmetric standard model (MSSM) make it computationally difficult to compare systematically with data, motivating the study of specific parameter reductions such as the cMSSM and pMSSM. Here we instead study the reductions of parameter space implied by using minimal flavour violation (MFV) to organise the R-parity conserving MSSM, with a view towards systematically building in constraints on flavour-violating physics. Within this framework the space of parameters is reduced by expanding soft supersymmetry-breaking terms in powers of the Cabibbo angle, leading to a 24-, 30- or 42-parameter framework (which we call MSSM-24, MSSM-30, and MSSM-42 respectively), depending on the order kept in the expansion. We provide a Bayesian global fit to data of the MSSM-30 parameter set to show that this is manageable with current tools. We compare the MFV reductions to the 19-parameter pMSSM choice and show that the pMSSM is not contained as a subset. The MSSM-30 analysis favours...
The Space Station as a Construction Base for Large Space Structures
Gates, R. M.
1985-01-01
The feasibility of using the Space Station as a construction site for large space structures is examined. An overview is presented of the results of a program entitled Definition of Technology Development Missions (TDM's) for Early Space Stations - Large Space Structures. The definition of LSS technology development missions must be responsive to the needs of future space missions which require large space structures. Long range plans for space were assembled by reviewing Space System Technology Models (SSTM) and other published sources. Those missions which will use large space structures were reviewed to determine the objectives which must be demonstrated by technology development missions. The three TDM's defined during this study are: (1) a construction storage/hangar facility; (2) a passive microwave radiometer; and (3) a precision optical system.
Charge distributions in transverse coordinate space and in impact parameter space
Energy Technology Data Exchange (ETDEWEB)
Hwang, Dae Sung [Department of Physics, Sejong University, Seoul 143-747 (Korea, Republic of)], E-mail: dshwang@slac.stanford.edu; Kim, Dong Soo [Department of Physics, Kangnung National University, Kangnung 210-702 (Korea, Republic of); Kim, Jonghyun [Department of Physics, Sejong University, Seoul 143-747 (Korea, Republic of)
2008-11-27
We study the charge distributions of the valence quarks inside the nucleon in the transverse coordinate space, which is conjugate to the transverse momentum space. We compare the results with the charge distributions in the impact parameter space.
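A small numeric sketch of the transform behind such impact-parameter distributions, using an illustrative dipole form factor rather than the quark distributions studied in the paper (units and the scale Λ = 1 are arbitrary):

```python
import numpy as np

def trapz_u(y, x):
    """Trapezoidal rule on a uniform grid, applied along the last axis."""
    dx = x[1] - x[0]
    return dx * (y.sum(axis=-1) - 0.5 * (y[..., 0] + y[..., -1]))

THETA = np.linspace(0.0, np.pi, 600)

def j0(x):
    """Bessel J0 from its integral representation (no SciPy needed)."""
    return trapz_u(np.cos(np.outer(x, np.sin(THETA))), THETA) / np.pi

def form_factor(q, lam=1.0):
    """Illustrative dipole form factor, F(0) = 1."""
    return 1.0 / (1.0 + (q / lam) ** 2) ** 2

def rho_impact(bs, q_max=30.0, n_q=1000):
    """Density in impact-parameter space as the 2D Fourier (Hankel)
    transform of the form factor:
    rho(b) = (1/2pi) * Int_0^inf q J0(q b) F(q) dq."""
    q = np.linspace(0.0, q_max, n_q)
    fq = q * form_factor(q)
    return np.array([trapz_u(fq * j0(q * b), q) for b in bs]) / (2.0 * np.pi)

bs = np.linspace(0.0, 10.0, 101)
rho = rho_impact(bs)
charge = trapz_u(2.0 * np.pi * bs * rho, bs)   # total charge, ~F(0) = 1
```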
SP_Ace: a new code to derive stellar parameters and elemental abundances
Boeche, C.; Grebel, E. K.
2016-03-01
Context. Ongoing and future massive spectroscopic surveys will collect large numbers (10^6-10^7) of stellar spectra that need to be analyzed. Highly automated software is needed to derive stellar parameters and chemical abundances from these spectra. Aims: We developed a new method of estimating the stellar parameters Teff, log g, [M/H], and elemental abundances. This method was implemented in a new code, SP_Ace (Stellar Parameters And Chemical abundances Estimator). This is a highly automated code suitable for analyzing the spectra of large spectroscopic surveys with low or medium spectral resolution (R = 2000-20 000). Methods: After the astrophysical calibration of the oscillator strengths of 4643 absorption lines covering the wavelength ranges 5212-6860 Å and 8400-8924 Å, we constructed a library that contains the equivalent widths (EWs) of these lines for a grid of stellar parameters. The EWs of each line are fit by a polynomial function that describes the EW of the line as a function of the stellar parameters. The coefficients of these polynomial functions are stored in a library called the "GCOG library". SP_Ace, a code written in FORTRAN95, uses the GCOG library to compute the EWs of the lines, constructs models of spectra as a function of the stellar parameters and abundances, and searches for the model that minimizes the χ2 deviation when compared to the observed spectrum. The code has been tested on synthetic and real spectra for a wide range of signal-to-noise ratios and spectral resolutions. Results: SP_Ace derives stellar parameters such as Teff, log g, [M/H], and chemical abundances of up to ten elements for low- to medium-resolution spectra of FGK-type stars with a precision comparable to the one usually obtained with spectra of higher resolution. Systematic errors in stellar parameters and chemical abundances are presented and identified with tests on synthetic and real spectra. Stochastic errors are automatically estimated by the code for all the parameters.
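The GCOG idea, EWs represented as polynomial functions of the stellar parameters and inverted by a χ2 search, can be sketched with an invented three-line "library"; the coefficients below are not a real line list:

```python
import numpy as np

# One row of polynomial coefficients per absorption line:
# EW = c0 + c1*t + c2*t^2 + c3*(logg - 4), with t = (Teff - 5500)/1000.
LINES = np.array([
    [80.0, -25.0, 4.0, -6.0],
    [40.0, 10.0, -2.0, 3.0],
    [60.0, -5.0, 1.0, -9.0],
])

def equivalent_widths(teff, logg):
    """Evaluate the polynomial 'library' for all lines at (Teff, log g)."""
    t = (teff - 5500.0) / 1000.0
    basis = np.array([1.0, t, t * t, logg - 4.0])
    return LINES @ basis

# 'Observed' EWs of a Sun-like star, generated from the same library
obs = equivalent_widths(5777.0, 4.44)

# Invert by chi^2 minimization over a model grid, mirroring the search
# for the model closest to the observed spectrum
best = min(
    (float(np.sum((equivalent_widths(te, lg) - obs) ** 2)), te, lg)
    for te in np.linspace(4000.0, 7000.0, 61)
    for lg in np.linspace(1.0, 5.0, 41)
)
chi2, teff_fit, logg_fit = best
```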
Parameter space of experimental chaotic circuits with high-precision control parameters
Energy Technology Data Exchange (ETDEWEB)
Sousa, Francisco F. G. de; Rubinger, Rero M. [Instituto de Física e Química, Universidade Federal de Itajubá, Itajubá, MG (Brazil); Sartorelli, José C., E-mail: sartorelli@if.usp.br [Universidade de São Paulo, São Paulo, SP (Brazil); Albuquerque, Holokx A. [Departamento de Física, Universidade do Estado de Santa Catarina, Joinville, SC (Brazil); Baptista, Murilo S. [Institute of Complex Systems and Mathematical Biology, SUPA, University of Aberdeen, Aberdeen (United Kingdom)
2016-08-15
We report high-resolution measurements that experimentally confirm a spiral cascade structure and a scaling relationship of shrimps in the Chua's circuit. Circuits constructed using high-precision control components allow for a comprehensive characterization of the circuit behaviors through high-resolution parameter spaces. To illustrate the power of our technological development for the creation and study of chaotic circuits, we constructed a Chua circuit and studied its high-resolution parameter space. The reliability and stability of the designed component allowed us to obtain data over long periods of time (∼21 weeks), a data set from which an accurate estimation of the Lyapunov exponents for the circuit characterization was possible. Moreover, these data, rigorously characterized by the Lyapunov exponents, allow us to confirm experimentally that the shrimps, stable islands embedded in a domain of chaos in the parameter spaces, can be observed in the laboratory. Finally, we confirm that their sizes decay exponentially with the period of the attractor, a result expected to be found in maps of the quadratic family.
Image-based Exploration of Iso-surfaces for Large Multi- Variable Datasets using Parameter Space.
Binyahib, Roba S.
2013-05-13
With an increase in processing power, more complex simulations have resulted in larger data sizes, with higher resolution and more variables. Many techniques have been developed to help the user visualize and analyze data from such simulations. However, dealing with a large amount of multivariate data is challenging, time-consuming, and often requires high-end clusters. Consequently, novel visualization techniques are needed to explore such data. Many users would like to visually explore their data and change certain visual aspects without the need to use special clusters or having to load a large amount of data. This is the idea behind explorable images (EI). Explorable images are a novel approach that provides limited interactive visualization without the need to re-render from the original data [40]. In this work, the concept of EI has been used to create a workflow that deals with explorable iso-surfaces for scalar fields in a multivariate, time-varying dataset. As a pre-processing step, a set of iso-values for each scalar field is inferred and extracted from a user-assisted sampling technique in time-parameter space. These iso-values are then used to generate iso-surfaces that are then pre-rendered (from a fixed viewpoint) along with additional buffers (i.e. normals, depth, values of other fields, etc.) to provide a compressed representation of iso-surfaces in the dataset. We present a tool that at run-time allows the user to interactively browse and calculate a combination of iso-surfaces superimposed on each other. The result is the same as calculating multiple iso-surfaces from the original data but without the memory and processing overhead. Our tool also allows the user to change the (scalar) values superimposed on each of the surfaces, modify their color map, and interactively re-light the surfaces. We demonstrate the effectiveness of our approach over a multi-terabyte combustion dataset. We also illustrate the efficiency and accuracy of our
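The run-time compositing step can be sketched with synthetic buffers: each pre-rendered layer carries depth, normal, and extra-field images, and the tool only combines and re-shades them. All buffer contents below are randomly generated placeholders, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 64, 64

def fake_layer(depth_offset):
    """Stand-in for one pre-rendered iso-surface: depth, normal and
    extra-field buffers produced by the pre-processing step."""
    depth = depth_offset + 0.1 * rng.random((H, W))
    normal = np.dstack([np.zeros((H, W)), np.zeros((H, W)), np.ones((H, W))])
    scalar = rng.random((H, W))        # another field sampled on the surface
    return depth, normal, scalar

layers = [fake_layer(d) for d in (0.3, 0.5, 0.7)]

def composite(layers, light=np.array([0.0, 0.0, 1.0])):
    """Per pixel, keep the closest surface and shade it from its stored
    normal -- re-lighting pre-rendered buffers instead of re-extracting
    iso-surfaces from the raw data."""
    depths = np.stack([d for d, _, _ in layers])
    nearest = np.argmin(depths, axis=0)            # index of the closest layer
    out = np.zeros((H, W))
    for i, (_, normal, scalar) in enumerate(layers):
        shade = np.clip(normal @ light, 0.0, 1.0)  # Lambertian term
        mask = nearest == i
        out[mask] = (shade * scalar)[mask]
    return out

image = composite(layers)
```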
Naden, Levi N; Shirts, Michael R
2016-04-12
We show how thermodynamic properties of molecular models can be computed over a large, multidimensional parameter space by combining multistate reweighting analysis with a linear basis function approach. This approach reduces the computational cost of estimating thermodynamic properties from molecular simulations for over 130,000 tested parameter combinations from over 1000 CPU years to tens of CPU days. This speed increase is achieved primarily by computing the potential energy as a linear combination of basis functions, computed from either modified simulation code or as the difference of energy between two reference states, which can be done without any simulation code modification. The thermodynamic properties are then estimated with the Multistate Bennett Acceptance Ratio (MBAR) as a function of multiple model parameters without the need to define a priori how the states are connected by a pathway. Instead, we adaptively sample a set of points in parameter space to create mutual configuration space overlap. The existence of regions of poor configuration space overlap is detected by analyzing the eigenvalues of the sampled states' overlap matrix. The configuration space overlap to sampled states is monitored alongside the mean and maximum uncertainty to determine convergence, as neither the uncertainty nor the configuration space overlap alone is a sufficient metric of convergence. This adaptive sampling scheme is demonstrated by estimating with high precision the solvation free energies of charged particles of Lennard-Jones plus Coulomb functional form with charges between -2 and +2 and generally physical values of σij and ϵij in TIP3P water. We also compute the entropy, enthalpy, and radial distribution functions of arbitrary unsampled parameter combinations using only the data from these sampled states, and use the estimates of free energies over the entire space to examine the deviation of atomistic simulations from the Born approximation to the solvation free
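The basis-function decomposition that drives the speedup can be sketched as follows. The per-configuration basis energies are random placeholders rather than real simulation output, and the final step shows plain single-state exponential reweighting rather than full MBAR:

```python
import numpy as np

rng = np.random.default_rng(42)

# Energies of N stored configurations, split into basis terms evaluated once.
# For an LJ+Coulomb form, U(x; q, eps) = q*u_coul(x) + eps*u_lj(x) + u_rest(x).
N = 5000
u_coul = rng.normal(0.0, 1.0, N)   # stand-ins for per-configuration basis energies
u_lj = rng.normal(0.0, 1.0, N)
u_rest = rng.normal(0.0, 1.0, N)

def energies(q, eps):
    """Potential energy of every stored configuration at parameters (q, eps),
    as a linear combination of basis energies -- no new simulation needed."""
    return q * u_coul + eps * u_lj + u_rest

# Evaluate a large parameter grid cheaply from the stored basis energies
qs = np.linspace(-2.0, 2.0, 41)
eps_vals = np.linspace(0.1, 1.0, 10)
U = np.stack([[energies(q, e) for e in eps_vals] for q in qs])  # (41, 10, N)

# Single-state exponential reweighting of an observable to unsampled parameters
beta = 1.0
u_ref = energies(0.0, 0.5)         # energies at the sampled reference state

def reweight_mean(obs, q, eps):
    w = np.exp(-beta * (energies(q, eps) - u_ref))
    return float(np.sum(w * obs) / np.sum(w))

mean_ulj = reweight_mean(u_lj, 0.1, 0.5)
```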
Large model-space calculation of the nuclear level density parameter
International Nuclear Information System (INIS)
Agrawal, B.K.; Samaddar, S.K.; De, J.N.; Shlomo, S.
1998-01-01
Recently, several attempts have been made to obtain nuclear level density (ρ) and level density parameter (α) within the microscopic approaches based on path integral representation of the partition function. The results for the inverse level density parameter K and the level density as a function of excitation energy are presented.
An optimal beam alignment method for large-scale distributed space surveillance radar system
Huang, Jian; Wang, Dongya; Xia, Shuangzhi
2018-06-01
Large-scale distributed space surveillance radar is very important ground-based equipment for maintaining a complete catalogue of Low Earth Orbit (LEO) space debris. However, because the sites of the distributed radar system are separated by thousands of kilometers, optimally aligning the narrow Transmitting/Receiving (T/R) beams over a large volume poses a special and considerable technical challenge in the space surveillance area. Based on the common coordinate-transformation model and the radar beam space model, we present a two-dimensional projection algorithm for the T/R beams using their direction angles, which can visually describe and assess the beam alignment performance. Subsequently, optimal mathematical models for the orientation angle of the antenna array, the site location, and the T/R beam coverage are constructed, and the beam alignment parameters are precisely solved. Finally, we conducted optimal beam alignment experiments based on the site parameters of the Air Force Space Surveillance System (AFSSS). The simulation results demonstrate the correctness and effectiveness of our novel method, which can significantly support the construction of LEO space debris surveillance equipment.
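A minimal sketch of the direction-angle computation underlying such beam alignment, under a spherical-Earth simplification and with invented site coordinates (the paper's actual coordinate model and AFSSS site parameters are not reproduced here):

```python
import numpy as np

R_EARTH = 6371.0   # km; a spherical Earth is assumed for simplicity

def site_ecef(lat_deg, lon_deg):
    """Earth-fixed position of a ground site (spherical-Earth model)."""
    lat, lon = np.radians([lat_deg, lon_deg])
    return R_EARTH * np.array([np.cos(lat) * np.cos(lon),
                               np.cos(lat) * np.sin(lon),
                               np.sin(lat)])

def direction_angles(site, lat_deg, lon_deg, target):
    """Azimuth/elevation (deg) from a site to a target: the direction
    angles used to project the T/R beams onto a common point."""
    lat, lon = np.radians([lat_deg, lon_deg])
    up = site / np.linalg.norm(site)                  # local zenith
    east = np.array([-np.sin(lon), np.cos(lon), 0.0])
    north = np.cross(up, east)
    d = target - site
    e, n, u = d @ east, d @ north, d @ up
    az = np.degrees(np.arctan2(e, n)) % 360.0
    el = np.degrees(np.arcsin(u / np.linalg.norm(d)))
    return az, el

# Two widely separated sites both point at one debris position (~800 km up)
tx = site_ecef(30.0, 100.0)
rx = site_ecef(30.0, 110.0)
debris = site_ecef(32.0, 105.0) * (1.0 + 800.0 / R_EARTH)

az_tx, el_tx = direction_angles(tx, 30.0, 100.0, debris)
az_rx, el_rx = direction_angles(rx, 30.0, 110.0, debris)
```

Both sites see the same point at different azimuth/elevation pairs; aligning the T/R beams amounts to solving for site and antenna parameters that keep such pairs consistent over the surveyed volume.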
On the possibility of large axion moduli spaces
Energy Technology Data Exchange (ETDEWEB)
Rudelius, Tom [Jefferson Physical Laboratory, Harvard University,Cambridge, MA 02138 (United States)
2015-04-28
We study the diameters of axion moduli spaces, focusing primarily on type IIB compactifications on Calabi-Yau three-folds. In this case, we derive a stringent bound on the diameter in the large volume region of parameter space for Calabi-Yaus with simplicial Kähler cone. This bound can be violated by Calabi-Yaus with non-simplicial Kähler cones, but additional contributions are introduced to the effective action which can restrict the field range accessible to the axions. We perform a statistical analysis of simulated moduli spaces, finding in all cases that these additional contributions restrict the diameter so that these moduli spaces are no more likely to yield successful inflation than those with simplicial Kähler cone or with far fewer axions. Further heuristic arguments for axions in other corners of the duality web suggest that the difficulty observed in http://dx.doi.org/10.1088/1475-7516/2003/06/001 of finding an axion decay constant parametrically larger than M{sub p} applies not only to individual axions, but to the diagonals of axion moduli space as well. This observation is shown to follow from the weak gravity conjecture of http://dx.doi.org/10.1088/1126-6708/2007/06/060, so it likely applies not only to axions in string theory, but also to axions in any consistent theory of quantum gravity.
Cui, Tiangang; Marzouk, Youssef; Willcox, Karen
2016-06-01
Two major bottlenecks to the solution of large-scale Bayesian inverse problems are the scaling of posterior sampling algorithms to high-dimensional parameter spaces and the computational cost of forward model evaluations. Yet incomplete or noisy data, the state variation and parameter dependence of the forward model, and correlations in the prior collectively provide useful structure that can be exploited for dimension reduction in this setting-both in the parameter space of the inverse problem and in the state space of the forward model. To this end, we show how to jointly construct low-dimensional subspaces of the parameter space and the state space in order to accelerate the Bayesian solution of the inverse problem. As a byproduct of state dimension reduction, we also show how to identify low-dimensional subspaces of the data in problems with high-dimensional observations. These subspaces enable approximation of the posterior as a product of two factors: (i) a projection of the posterior onto a low-dimensional parameter subspace, wherein the original likelihood is replaced by an approximation involving a reduced model; and (ii) the marginal prior distribution on the high-dimensional complement of the parameter subspace. We present and compare several strategies for constructing these subspaces using only a limited number of forward and adjoint model simulations. The resulting posterior approximations can rapidly be characterized using standard sampling techniques, e.g., Markov chain Monte Carlo. Two numerical examples demonstrate the accuracy and efficiency of our approach: inversion of an integral equation in atmospheric remote sensing, where the data dimension is very high; and the inference of a heterogeneous transmissivity field in a groundwater system, which involves a partial differential equation forward model with high dimensional state and parameters.
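The parameter-space reduction can be illustrated in the linear-Gaussian case, where the data-informed subspace is spanned by the dominant eigenvectors of the prior-preconditioned Gauss-Newton Hessian; the dimensions and matrices below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(5)
d_par, d_obs = 50, 8                 # many parameters, few informative data

H = rng.normal(size=(d_obs, d_par))  # linearized forward model (illustrative)
noise_var, prior_var = 0.1, 1.0

# Prior-preconditioned Gauss-Newton Hessian of the log-likelihood.
# Its dominant eigenvectors span the data-informed parameter subspace;
# the prior governs the remaining (complement) directions.
Hess = prior_var * (H.T @ H) / noise_var
eigval, eigvec = np.linalg.eigh(Hess)
order = np.argsort(eigval)[::-1]
r = int(np.sum(eigval > 1.0))        # directions where the data beat the prior
U = eigvec[:, order[:r]]             # basis of the low-dimensional subspace
```

The posterior can then be approximated as a low-dimensional update in `span(U)` times the prior on its complement, which is the product structure described in the abstract.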
Parameter-space metric of semicoherent searches for continuous gravitational waves
International Nuclear Information System (INIS)
Pletsch, Holger J.
2010-01-01
Continuous gravitational-wave (CW) signals such as those emitted by spinning neutron stars are an important target class for current detectors. However, the enormous computational demand prohibits fully coherent broadband all-sky searches for prior unknown CW sources over wide ranges of parameter space and for yearlong observation times. More efficient hierarchical "semicoherent" search strategies divide the data into segments much shorter than one year, which are analyzed coherently; then detection statistics from different segments are combined incoherently. To optimally perform the incoherent combination, understanding of the underlying parameter-space structure is requisite. This problem is addressed here by using new coordinates on the parameter space, which yield the first analytical parameter-space metric for the incoherent combination step. This semicoherent metric applies to broadband all-sky surveys (also embedding directed searches at fixed sky position) for isolated CW sources. Furthermore, the additional metric resolution attained through the combination of segments is studied. Of the search parameters (sky position, frequency, and frequency derivatives), solely the metric resolution in the frequency derivatives is found to increase significantly with the number of segments.
Analyzing Damping Vibration Methods of Large-Size Space Vehicles in the Earth's Magnetic Field
Directory of Open Access Journals (Sweden)
G. A. Shcheglov
2016-01-01
It is known that most of today's space vehicles comprise large antennas, which are bracket-attached to the vehicle body. Dimensions of reflector antennas may be 30...50 m, and the weight of such constructions can reach approximately 200 kg. Since the antenna dimensions are significantly larger than the size of the vehicle body and the points attaching the brackets to the space vehicle have a low stiffness, conventional dampers may be inefficient. The paper proposes to consider damping the antenna through its interaction with the Earth's magnetic field. A simple dynamic model of a space vehicle equipped with a large-size structure is built: the space vehicle is a parallelepiped to which the antenna is attached through a beam. To solve the model problems, a simplified model of the Earth's magnetic field was used: uniform, with intensity lines parallel to each other and perpendicular to the plane of the antenna. The paper considers two layouts of coils with respect to the antenna, namely a vertical one, in which the axis of the magnetic dipole is perpendicular to the antenna plane, and a horizontal one, in which the axis of the magnetic dipole lies in the antenna plane. It also explores two ways of magnetically damping the oscillations: through a controlled current supplied from the power supply system of the space vehicle, and through the self-induction current in the coil. Thus, four tasks were formulated. For each task an oscillation equation was formulated, and the ratio of oscillation amplitudes and their decay time were estimated. It was found that each task requires certain parameters either of the antenna itself (its dimensions and moment of inertia) or of the coil and, respectively, of the current supplied from the space vehicle. For these parameters, ranges that allow efficient damping of the vibrations were found in each task. The conclusion can be drawn, based on the analysis of the tasks, that a specialized control system
Forecasts of non-Gaussian parameter spaces using Box-Cox transformations
Joachimi, B.; Taylor, A. N.
2011-09-01
Forecasts of statistical constraints on model parameters using the Fisher matrix abound in many fields of astrophysics. The Fisher matrix formalism involves the assumption of Gaussianity in parameter space and hence fails to predict complex features of posterior probability distributions. Combining the standard Fisher matrix with Box-Cox transformations, we propose a novel method that accurately predicts arbitrary posterior shapes. The Box-Cox transformations are applied to parameter space to render it approximately multivariate Gaussian, and the Fisher matrix calculation is performed on the transformed parameters. We demonstrate that, after the Box-Cox parameters have been determined from an initial likelihood evaluation, the method correctly predicts changes in the posterior when varying various parameters of the experimental setup and the data analysis, with marginally higher computational cost than a standard Fisher matrix calculation. We apply the Box-Cox-Fisher formalism to forecast cosmological parameter constraints by future weak gravitational lensing surveys. The characteristic non-linear degeneracy between the matter density parameter and the normalization of matter density fluctuations is reproduced for several cases, and the capability of breaking this degeneracy with weak-lensing three-point statistics is investigated. Possible applications of Box-Cox transformations of posterior distributions are discussed, including the prospects for performing statistical data analysis steps in the transformed Gaussianized parameter space.
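The Gaussianization step at the heart of this method can be illustrated with a minimal sketch (not the authors' code; the lognormal toy samples and the use of `scipy.stats.boxcox` are my own assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Toy "posterior" samples for one parameter: strongly skewed (lognormal),
# so a plain Gaussian (Fisher-matrix) description would be poor.
samples = rng.lognormal(mean=0.0, sigma=0.6, size=10_000)

# Box-Cox transform y = (x^lambda - 1)/lambda, with lambda fitted by
# maximum likelihood so the transformed samples are close to Gaussian.
transformed, lam = stats.boxcox(samples)

print(f"fitted lambda  : {lam:.3f}")
print(f"skewness before: {stats.skew(samples):.3f}")
print(f"skewness after : {stats.skew(transformed):.3f}")
# In the transformed, approximately Gaussian space a Fisher-style
# covariance is accurate; mapping its contours back through the inverse
# transform recovers the skewed posterior shape.
```

For a multivariate parameter space, the same idea applies coordinate-wise before computing the Fisher matrix on the transformed parameters.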
Preliminary results on the dynamics of large and flexible space structures in Halo orbits
Colagrossi, Andrea; Lavagna, Michèle
2017-05-01
The global exploration roadmap suggests, among other ambitious future space programmes, a possible manned outpost in lunar vicinity, to support surface operations and further astronaut training for longer and deeper space missions and transfers. In particular, a Lagrangian point orbit location - in the Earth-Moon system - is suggested for a manned cis-lunar infrastructure, a proposal which opens an interesting field of study from the astrodynamics perspective. The literature offers a wide set of scientific research on orbital dynamics under the Three-Body Problem modelling approach, while fewer works include the attitude dynamics modelling as well. However, whenever a large space structure (ISS-like) is considered, not only should the coupled orbit-attitude dynamics be modelled to run more accurate analyses, but the structural flexibility should be included too. The paper, starting from the well-known Circular Restricted Three-Body Problem formulation, presents some preliminary results obtained by adding a coupled orbit-attitude dynamical model and the effects due to the large structure flexibility. In addition, the most relevant perturbing phenomena, such as the Solar Radiation Pressure (SRP) and the fourth-body (Sun) gravity, are included in the model as well. A multi-body approach has been preferred to represent possible configurations of the large cis-lunar infrastructure: interconnected simple structural elements - such as beams, rods or lumped masses linked by springs - build up the space segment. To better investigate the relevance of the flexibility effects, the lumped parameters approach is compared with a distributed parameters semi-analytical technique. A sensitivity analysis of system dynamics, with respect to different configurations and mechanical properties of the extended structure, is also presented, in order to highlight drivers for the lunar outpost design. Furthermore, a case study for a large and flexible space structure in Halo orbits around
Environmental effects and large space systems
Garrett, H. B.
1981-01-01
When planning large-scale operations in space, environmental impact must be considered in addition to radiation, spacecraft charging, contamination, high power and size. Pollution of the atmosphere and space is caused by rocket effluents and by photoelectrons generated by sunlight falling on satellite surfaces; even light pollution may result (the SPS may reflect so much light as to be a nuisance to astronomers). Large (100 km²) structures also will absorb the high-energy particles that impinge on them. Altogether, these effects may drastically alter the Earth's magnetosphere. It is not clear if these alterations will in any way affect the Earth's surface climate. Large structures will also generate large plasma wakes and waves which may cause interference with communications to the vehicle. A high-energy microwave beam from the SPS will cause ionospheric turbulence, affecting UHF and VHF communications. Although none of these effects may ultimately prove critical, they must be considered in the design of large structures.
Determining frequentist confidence limits using a directed parameter space search
International Nuclear Information System (INIS)
Daniel, Scott F.; Connolly, Andrew J.; Schneider, Jeff
2014-01-01
We consider the problem of inferring constraints on a high-dimensional parameter space with a computationally expensive likelihood function. We propose a machine learning algorithm that maps out the frequentist confidence limit on parameter space by intelligently targeting likelihood evaluations so as to quickly and accurately characterize the likelihood surface in both low- and high-likelihood regions. We compare our algorithm to Bayesian credible limits derived by the well-tested Markov Chain Monte Carlo (MCMC) algorithm using both multi-modal toy likelihood functions and the seven-year Wilkinson Microwave Anisotropy Probe cosmic microwave background likelihood function. We find that our algorithm correctly identifies the location, general size, and general shape of high-likelihood regions in parameter space while being more robust against multi-modality than MCMC.
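The frequentist criterion the algorithm maps - thresholding the likelihood at a Δχ² value - can be sketched in brute-force grid form for a one-parameter toy problem (a hypothetical Gaussian-mean example, not the paper's targeted search, which is designed precisely to avoid such exhaustive evaluation):

```python
import numpy as np
from scipy import stats

# Toy data: 50 draws from N(2, 1); the unknown parameter is the mean mu.
rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, scale=1.0, size=50)

def loglike(mu):
    return np.sum(stats.norm.logpdf(data, loc=mu, scale=1.0))

# Frequentist confidence limit: keep all mu satisfying
#   2 * (logL_max - logL(mu)) <= chi2 quantile (1 dof, 95%).
mus = np.linspace(0.0, 4.0, 801)
logls = np.array([loglike(m) for m in mus])
threshold = stats.chi2.ppf(0.95, df=1)
inside = 2.0 * (logls.max() - logls) <= threshold
lo, hi = mus[inside][0], mus[inside][-1]
print(f"95% confidence interval for mu: [{lo:.3f}, {hi:.3f}]")
```

The grid here costs 801 likelihood evaluations in one dimension; in a high-dimensional space this explodes exponentially, which is the motivation for directing evaluations toward the confidence boundary instead.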
Parameter estimation in space systems using recurrent neural networks
Parlos, Alexander G.; Atiya, Amir F.; Sunkel, John W.
1991-01-01
The identification of time-varying parameters encountered in space systems is addressed using artificial neural systems. A hybrid feedforward/feedback neural network, namely a recurrent multilayer perceptron, is used as the model structure in the nonlinear system identification. The feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of temporal variations in the system nonlinearities. The standard back-propagation learning algorithm is modified and used for both the off-line and on-line supervised training of the proposed hybrid network. The performance of recurrent multilayer perceptron networks in identifying parameters of nonlinear dynamic systems is investigated by estimating the mass properties of a representative large spacecraft. The changes in the spacecraft inertia are predicted using a trained neural network during two configurations corresponding to the early and late stages of the spacecraft on-orbit assembly sequence. The proposed on-line mass properties estimation capability offers encouraging results, though further research is warranted for training and testing the predictive capabilities of these networks beyond nominal spacecraft operations.
Replicate periodic windows in the parameter space of driven oscillators
Energy Technology Data Exchange (ETDEWEB)
Medeiros, E.S., E-mail: esm@if.usp.br [Instituto de Fisica, Universidade de Sao Paulo, Sao Paulo (Brazil); Souza, S.L.T. de [Universidade Federal de Sao Joao del-Rei, Campus Alto Paraopeba, Minas Gerais (Brazil); Medrano-T, R.O. [Departamento de Ciencias Exatas e da Terra, Universidade Federal de Sao Paulo, Diadema, Sao Paulo (Brazil); Caldas, I.L. [Instituto de Fisica, Universidade de Sao Paulo, Sao Paulo (Brazil)
2011-11-15
Highlights: (1) We apply a weak harmonic perturbation to control chaos in two driven oscillators. (2) We find replicate periodic windows in the driven oscillator parameter space. (3) We find that the periodic window replication is associated with the chaos control. - Abstract: In the bi-dimensional parameter space of driven oscillators, shrimp-shaped periodic windows are immersed in chaotic regions. For two of these oscillators, namely the Duffing and Josephson junction oscillators, we show that a weak harmonic perturbation replicates these periodic windows, giving rise to parameter regions corresponding to periodic orbits. The new windows are composed of parameters whose periodic orbits have the same periodicity and pattern as the stable and unstable periodic orbits already existing for the unperturbed oscillator. Moreover, these unstable periodic orbits are embedded in chaotic attractors in the phase space regions where the new stable orbits are identified. Thus, the observed periodic window replication is an effective oscillator control process, since chaotic orbits are replaced by regular ones.
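A minimal sketch of the setup the abstract describes - a driven, damped Duffing oscillator with an added weak harmonic perturbation - might look as follows; all parameter values are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Double-well Duffing oscillator with main drive gamma*cos(omega*t) plus a
# weak harmonic perturbation eps*cos(Omega*t), the kind of term used to
# replicate periodic windows in the drive parameter plane.
delta, alpha, beta = 0.3, -1.0, 1.0   # damping, linear, cubic stiffness
gamma, omega = 0.5, 1.2               # main drive amplitude / frequency
eps, Omega = 0.02, 2.4                # weak perturbation (illustrative)

def duffing(t, y):
    x, v = y
    drive = gamma * np.cos(omega * t) + eps * np.cos(Omega * t)
    return [v, -delta * v - alpha * x - beta * x**3 + drive]

sol = solve_ivp(duffing, (0.0, 200.0), [0.1, 0.0], max_step=0.05)
x = sol.y[0]
print(f"trajectory bounded: {bool(np.all(np.isfinite(x)) and np.max(np.abs(x)) < 10)}")
```

Mapping the periodic windows then amounts to repeating such integrations over a grid of the (gamma, omega) plane and classifying the asymptotic periodicity of each trajectory, e.g. via a stroboscopic (Poincaré) section at the drive period.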
Investigation of Secondary Neutron Production in Large Space Vehicles for Deep Space
Rojdev, Kristina; Koontz, Steve; Reddell, Brandon; Atwell, William; Boeder, Paul
2016-01-01
Future NASA missions will focus on deep space and Mars surface operations with large structures necessary for transportation of crew and cargo. In addition to the challenges of manufacturing these large structures, there are added challenges from the space radiation environment and its impacts on the crew, electronics, and vehicle materials. Primary radiation from the sun (solar particle events) and from outside the solar system (galactic cosmic rays) interact with materials of the vehicle and the elements inside the vehicle. These interactions lead to the primary radiation being absorbed or producing secondary radiation (primarily neutrons). With all vehicles, the high-energy primary radiation is of most concern. However, with larger vehicles, there is more opportunity for secondary radiation production, which can be significant enough to cause concern. In a previous paper, we embarked upon our first steps toward studying neutron production from large vehicles by validating our radiation transport codes for neutron environments against flight data. The following paper will extend the previous work to focus on the deep space environment and the resulting neutron flux from large vehicles in this deep space environment.
The dynamics of blood biochemical parameters in cosmonauts during long-term space flights
Markin, Andrei; Strogonova, Lubov; Balashov, Oleg; Polyakov, Valery; Tigner, Timoty
Most of the previously obtained data on cosmonauts' metabolic state concerned certain stages of the postflight period. Consequently, all conclusions about metabolic peculiarities during space flight were to a large extent probabilistic. The purpose of this work was to study the characteristics of cosmonauts' metabolism directly during long-term space flights. In capillary blood samples taken from a finger, a "Reflotron IV" biochemical analyzer (Boehringer Mannheim GmbH, Germany), adapted to the weightlessness environment, was used to determine the activity of GOT, GPT, CK, gamma-GT, and total and pancreatic amylase, as well as the concentrations of hemoglobin, glucose, total bilirubin, uric acid, urea, creatinine, total, HDL- and LDL-cholesterol, and triglycerides. The HDL/LDL-cholesterol ratio was also computed. The crew members of 6 main missions to the "Mir" orbital station, a total of 17 cosmonauts, were examined. Biochemical tests were carried out 30-60 days before launch and at different stages of the flights, between the 25th and the 423rd days. During space flight, the cosmonauts showed a tendency toward increased GOT, GPT and total amylase activity and glucose and total cholesterol concentration compared with the basal level, and a tendency toward decreased CK activity, hemoglobin and HDL-cholesterol concentration, and HDL/LDL-cholesterol ratio. No definite trends were found in the variations of the other biochemical parameters determined. The fact that the same trends in the mentioned biochemical parameters were observed in the majority of the tested cosmonauts suggests a connection between the noted metabolic alterations and the influence of space flight conditions on the cosmonaut's body. Variations of the other studied blood biochemical parameters probably depend on purely individual causes.
Potential large missions enabled by NASA's space launch system
Stahl, H. Philip; Hopkins, Randall C.; Schnell, Andrew; Smith, David A.; Jackman, Angela; Warfield, Keith R.
2016-07-01
Large space telescope missions have always been limited by their launch vehicle's mass and volume capacities. The Hubble Space Telescope (HST) was specifically designed to fit inside the Space Shuttle and the James Webb Space Telescope (JWST) is specifically designed to fit inside an Ariane 5. Astrophysicists desire even larger space telescopes. NASA's "Enduring Quests Daring Visions" report calls for an 8- to 16-m Large UV-Optical-IR (LUVOIR) Surveyor mission to enable ultra-high-contrast spectroscopy and coronagraphy. AURA's "From Cosmic Birth to Living Earth" report calls for a 12-m class High-Definition Space Telescope to pursue transformational scientific discoveries. NASA's "Planning for the 2020 Decadal Survey" calls for a Habitable Exoplanet Imaging (HabEx) and a LUVOIR as well as Far-IR and an X-Ray Surveyor missions. Packaging larger space telescopes into existing launch vehicles is a significant engineering complexity challenge that drives cost and risk. NASA's planned Space Launch System (SLS), with its 8 or 10-m diameter fairings and ability to deliver 35 to 45-mt of payload to Sun-Earth-Lagrange-2, mitigates this challenge by fundamentally changing the design paradigm for large space telescopes. This paper reviews the mass and volume capacities of the planned SLS, discusses potential implications of these capacities for designing large space telescope missions, and gives three specific mission concept implementation examples: a 4-m monolithic off-axis telescope, an 8-m monolithic on-axis telescope and a 12-m segmented on-axis telescope.
Colagrossi, Andrea; Lavagna, Michèle
2018-03-01
A space station in the vicinity of the Moon can be exploited as a gateway for future human and robotic exploration of the solar system. The natural location for a space system of this kind is about one of the Earth-Moon libration points. The study addresses the dynamics during rendezvous and docking operations with a very large space infrastructure in an EML2 Halo orbit. The model takes into account the coupling effects between the orbital and the attitude motion in a circular restricted three-body problem environment. The flexibility of the system is included, and the interaction between the modes of the structure and those related with the orbital motion is investigated. A lumped parameter technique is used to represent the flexible dynamics. The parameters of the space station are maintained as generic as possible, in a way to delineate a global scenario of the mission. However, the developed model can be tuned and updated according to the information that will be available in the future, when the whole system will be defined with a higher level of precision.
Rodriguez, G. (Editor)
1983-01-01
Two general themes in the control of large space structures are addressed: control theory for distributed parameter systems and distributed control for systems requiring spatially-distributed multipoint sensing and actuation. Topics include modeling and control, stabilization, and estimation and identification.
Parameter and State Estimator for State Space Models
Directory of Open Access Journals (Sweden)
Ruifeng Ding
2014-01-01
Full Text Available This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve for the system state from the state equation and substitute it into the output equation, eliminating the state variables so that the resulting equation contains only the system inputs and outputs, and then to derive a least squares parameter identification algorithm from it. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example is provided to show that the proposed algorithm is effective.
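The eliminate-the-state idea can be sketched for the simplest first-order case (a toy example of the general approach, not the paper's canonical multi-state formulation):

```python
import numpy as np

# First-order system  x[k+1] = a*x[k] + b*u[k],  y[k] = x[k].
# Substituting the state equation into the output equation eliminates the
# state and gives the input-output relation  y[k+1] = a*y[k] + b*u[k],
# which is linear in the unknown parameters (a, b): least squares applies.
rng = np.random.default_rng(2)
a_true, b_true = 0.8, 0.5
N = 200
u = rng.standard_normal(N)
y = np.zeros(N)
for k in range(N - 1):
    y[k + 1] = a_true * y[k] + b_true * u[k] + 0.01 * rng.standard_normal()

# Regression matrix: each row [y[k], u[k]] predicts y[k+1].
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta
print(f"a: true {a_true}, estimated {a_hat:.3f}")
print(f"b: true {b_true}, estimated {b_hat:.3f}")
# The state sequence is then recovered from the estimates; here trivially
# x[k] = y[k], in the general canonical case from the estimated matrices.
```
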
1984-01-01
The large space structures technology development missions to be performed on an early manned space station were studied and defined, and the resources needed and the design implications for an early space station to carry out these large space structures technology development missions were determined. Emphasis is placed on more detailed mission designs and space station resource requirements.
Analysis of large optical ground stations for deep-space optical communications
Garcia-Talavera, M. Reyes; Rivera, C.; Murga, G.; Montilla, I.; Alonso, A.
2017-11-01
Inter-satellite and ground-to-satellite optical communications have been successfully demonstrated over more than a decade with several experiments, the most recent being NASA's lunar mission Lunar Atmospheric Dust Environment Explorer (LADEE). The technology is at a mature stage that allows optical communications to be considered as a high-capacity solution for future deep-space communications [1][2], where there is an increasing demand on downlink data rate to improve science return. To serve these deep-space missions, suitable optical ground stations (OGS) have to be developed, providing large collecting areas. The design of such OGSs must face both technical and cost constraints in order to achieve an optimum implementation. To that end, different approaches have already been proposed and analyzed, namely, a large telescope based on a segmented primary mirror, telescope arrays, and even the combination of RF and optical receivers in modified versions of existing Deep-Space Network (DSN) antennas [3][4][5]. Array architectures have been proposed to relax some requirements, acting as one of the key drivers of the present study. The advantages offered by the array approach are attained at the expense of adding subsystems. Critical issues identified for each implementation include their inherent efficiency and losses, as well as their performance under high-background conditions and their acquisition, pointing, tracking, and synchronization capabilities. It is worth noticing that, due to the photon-counting nature of detection, the system performance is not solely given by the signal-to-noise ratio parameter. To start the analysis, the main implications of the deep-space scenarios are first summarized, since they are the driving requirements to establish the technical specifications for the large OGS. Next, both the main characteristics of the OGS and the potential configuration approaches are presented, going deeper into key subsystems with strong impact in the
Environmental Disturbance Modeling for Large Inflatable Space Structures
National Research Council Canada - National Science Library
Davis, Donald
2001-01-01
Tightening space budgets and stagnating spacelift capabilities are driving the Air Force and other space agencies to focus on inflatable technology as a reliable, inexpensive means of deploying large structures in orbit...
Extra-large letter spacing improves reading in dyslexia
Zorzi, Marco; Barbiero, Chiara; Facoetti, Andrea; Lonciari, Isabella; Carrozzi, Marco; Montico, Marcella; Bravar, Laura; George, Florence; Pech-Georgel, Catherine; Ziegler, Johannes C.
2012-01-01
Although the causes of dyslexia are still debated, all researchers agree that the main challenge is to find ways that allow a child with dyslexia to read more words in less time, because reading more is undisputedly the most efficient intervention for dyslexia. Sophisticated training programs exist, but they typically target the component skills of reading, such as phonological awareness. After the component skills have improved, the main challenge remains (that is, reading deficits must be treated by reading more—a vicious circle for a dyslexic child). Here, we show that a simple manipulation of letter spacing substantially improved text reading performance on the fly (without any training) in a large, unselected sample of Italian and French dyslexic children. Extra-large letter spacing helps reading, because dyslexics are abnormally affected by crowding, a perceptual phenomenon with detrimental effects on letter recognition that is modulated by the spacing between letters. Extra-large letter spacing may help to break the vicious circle by rendering the reading material more easily accessible. PMID:22665803
Determination of Geometric Parameters of Space Steel Constructions
Directory of Open Access Journals (Sweden)
Jitka Suchá
2005-06-01
Full Text Available The paper contains conclusions of the PhD thesis "Accuracy of determination of geometric parameters of space steel construction using geodetic methods". In general this is a difficult task with high requirements for the accuracy and reliability of results, i.e. the space coordinates of assessed points on a steel construction. The solution of this task is complicated by atmospheric influences, above all the temperature, which strongly affects steel constructions; it is desirable to eliminate the influence of the temperature when evaluating the geometric parameters. The choice of an efficient geodetic method that fulfils these demanding requirements is often constrained by the limited space in the immediate neighbourhood of the measured construction. Such conditions prevent an efficient configuration of the points of a geodetic micro network, e.g. for the forward intersection. In addition, points of a construction are often hardly accessible and therefore difficult to mark. For these reasons the space polar method appears efficient, and its advantages were increased by the implementation of self-adhesive reflex targets for the distance measurement, which enable permanent marking of the measured points already in the course of placing the construction.
Constraints on pre-big-bang parameter space from CMBR anisotropies
International Nuclear Information System (INIS)
Bozza, V.; Gasperini, M.; Giovannini, M.; Veneziano, G.
2003-01-01
The so-called curvaton mechanism--a way to convert isocurvature perturbations into adiabatic ones--is investigated both analytically and numerically in a pre-big-bang scenario where the role of the curvaton is played by a sufficiently massive Kalb-Ramond axion of superstring theory. When combined with observations of CMBR anisotropies at large and moderate angular scales, the present analysis allows us to constrain quite considerably the parameter space of the model: in particular, the initial displacement of the axion from the minimum of its potential and the rate of evolution of the compactification volume during pre-big-bang inflation. The combination of theoretical and experimental constraints favors a slightly blue spectrum of scalar perturbations, and/or a value of the string scale in the vicinity of the SUSY GUT scale.
Constraints on pre-big bang parameter space from CMBR anisotropies
Bozza, Valerio; Giovannini, Massimo; Veneziano, Gabriele
2003-01-01
The so-called curvaton mechanism --a way to convert isocurvature perturbations into adiabatic ones-- is investigated both analytically and numerically in a pre-big bang scenario where the role of the curvaton is played by a sufficiently massive Kalb--Ramond axion of superstring theory. When combined with observations of CMBR anisotropies at large and moderate angular scales, the present analysis allows us to constrain quite considerably the parameter space of the model: in particular, the initial displacement of the axion from the minimum of its potential and the rate of evolution of the compactification volume during pre-big bang inflation. The combination of theoretical and experimental constraints favours a slightly blue spectrum of scalar perturbations, and/or a value of the string scale in the vicinity of the SUSY-GUT scale.
Potential Large Decadal Missions Enabled by NASA's Space Launch System
Stahl, H. Philip; Hopkins, Randall C.; Schnell, Andrew; Smith, David Alan; Jackman, Angela; Warfield, Keith R.
2016-01-01
Large space telescope missions have always been limited by their launch vehicle's mass and volume capacities. The Hubble Space Telescope (HST) was specifically designed to fit inside the Space Shuttle and the James Webb Space Telescope (JWST) is specifically designed to fit inside an Ariane 5. Astrophysicists desire even larger space telescopes. NASA's "Enduring Quests Daring Visions" report calls for an 8- to 16-m Large UV-Optical-IR (LUVOIR) Surveyor mission to enable ultra-high-contrast spectroscopy and coronagraphy. AURA's "From Cosmic Birth to Living Earth" report calls for a 12-m class High-Definition Space Telescope to pursue transformational scientific discoveries. NASA's "Planning for the 2020 Decadal Survey" calls for a Habitable Exoplanet Imaging (HabEx) and a LUVOIR as well as Far-IR and an X-Ray Surveyor missions. Packaging larger space telescopes into existing launch vehicles is a significant engineering complexity challenge that drives cost and risk. NASA's planned Space Launch System (SLS), with its 8 or 10-m diameter fairings and ability to deliver 35 to 45-mt of payload to Sun-Earth-Lagrange-2, mitigates this challenge by fundamentally changing the design paradigm for large space telescopes. This paper reviews the mass and volume capacities of the planned SLS, discusses potential implications of these capacities for designing large space telescope missions, and gives three specific mission concept implementation examples: a 4-m monolithic off-axis telescope, an 8-m monolithic on-axis telescope and a 12-m segmented on-axis telescope.
Indoor Climate of Large Glazed Spaces
DEFF Research Database (Denmark)
Hendriksen, Ole Juhl; Madsen, Christina E.; Heiselberg, Per
In recent years large glazed spaces have found increased use, both in connection with the renovation of buildings and as part of new buildings. One of the objectives is to add an architectural element which combines indoor and outdoor climate. In order to obtain a satisfying indoor climate, it is crucial at the design stage to be able to predict the performance regarding thermal comfort and energy consumption. This paper focuses on the practical implementation of Computational Fluid Dynamics (CFD) and the relation to other simulation tools regarding indoor climate.
Benchmarking processes for managing large international space programs
Mandell, Humboldt C., Jr.; Duke, Michael B.
1993-01-01
The relationship between management style and program costs is analyzed to determine the feasibility of financing large international space missions. The incorporation of management systems is considered to be essential to realizing low cost spacecraft and planetary surface systems. Several companies ranging from large Lockheed 'Skunk Works' to small companies including Space Industries, Inc., Rocket Research Corp., and Orbital Sciences Corp. were studied. It is concluded that to lower the prices, the ways in which spacecraft and hardware are developed must be changed. Benchmarking of successful low cost space programs has revealed a number of prescriptive rules for low cost managements, including major changes in the relationships between the public and private sectors.
Unique Programme of Indian Centre for Space Physics using large rubber Balloons
Chakrabarti, Sandip Kumar; Sarkar, Ritabrata; Bhowmick, Debashis; Chakraborty, Subhankar
Indian Centre for Space Physics (ICSP) has developed a unique capability to pursue space-based studies at very low cost. Large rubber balloons are sent to near space (~40 km) with payloads weighing less than 4 kg. These payloads can be cosmic ray detectors, X-ray detectors or muon detectors, apart from a communication device, GPS, and nine-degrees-of-freedom measurement capabilities. With two balloons in an orbiter-launcher configuration, ICSP has been able to conduct long-duration flights of up to 12 hours. ICSP has so far sent 56 Dignity missions to near space and obtained cosmic ray and muon variations on a regular basis and dynamical spectra of solar flares and gamma-ray bursts, apart from other usual parameters such as wind velocity components and temperature and pressure variations. Since all the payloads are retrieved by parachutes, the cost per mission remains very low, typically around USD 1000. The preparation time is short, and no special launching area is required. In principle, such experiments can be conducted on a daily basis, if need be. Presently, we are also incorporating studies relating to Earth system science, such as ozone, aerosols and micro-meteorites.
Scanning the parameter space of collapsing rotating thin shells
Rocha, Jorge V.; Santarelli, Raphael
2018-06-01
We present results of a comprehensive study of collapsing and bouncing thin shells with rotation, framing it in the context of the weak cosmic censorship conjecture. The analysis is based on a formalism developed specifically for higher odd dimensions that is able to describe the dynamics of collapsing rotating shells exactly. We analyse and classify a plethora of shell trajectories in asymptotically flat spacetimes. The parameters varied include the shell’s mass and angular momentum, its radial velocity at infinity, the (linear) equation-of-state parameter and the spacetime dimensionality. We find that plunges of rotating shells into black holes never produce naked singularities, as long as the matter shell obeys the weak energy condition, and so respects cosmic censorship. This applies to collapses of dust shells starting from rest or with a finite velocity at infinity. Not even shells with a negative isotropic pressure component (i.e. tension) lead to the formation of naked singularities, as long as the weak energy condition is satisfied. Endowing the shells with a positive isotropic pressure component allows for the existence of bouncing trajectories satisfying the dominant energy condition and fully contained outside rotating black holes. Otherwise any turning point occurs always inside the horizon. These results are based on strong numerical evidence from scans of numerous sections in the large parameter space available to these collapsing shells. The generalisation of the radial equation of motion to a polytropic equation-of-state for the matter shell is also included in an appendix.
Wells, J. R.; Kim, J. B.
2011-12-01
Parameters in dynamic global vegetation models (DGVMs) are thought to be weakly constrained and can be a significant source of errors and uncertainties. DGVMs use between 5 and 26 plant functional types (PFTs) to represent the average plant life form in each simulated plot, and each PFT typically has a dozen or more parameters that define the way it uses resources and responds to the simulated growing environment. Sensitivity analysis explores how varying parameters affects the output, but does not fully explore the parameter solution space. The solution space for DGVM parameter values is thought to be complex and non-linear, and multiple sets of acceptable parameters may exist. In published studies, PFT parameters are estimated from the published literature, often from a single published value, and the parameters are then "tuned" using somewhat arbitrary trial-and-error methods. BIOMAP is a new DGVM created by fusing the MAPSS biogeography model with Biome-BGC. It represents the vegetation of North America using 26 PFTs. We are using simulated annealing, a global search method, to systematically and objectively explore the solution space for the BIOMAP PFTs and the system parameters important for plant water use. We defined the boundaries of the solution space by obtaining maximum and minimum values from the published literature and, where those were not available, by using +/-20% of current values. We used stratified random sampling to select a set of grid cells representing the vegetation of the conterminous USA. The simulated annealing algorithm is applied to the parameters for spin-up and a transient run during the historical period 1961-1990. A set of parameter values is considered acceptable if the associated simulation run produces a modern potential vegetation distribution map that is as accurate as one produced by trial-and-error calibration. We expect to confirm that the solution space is non-linear and complex, and that
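A minimal simulated-annealing sketch of the kind of global search described above (a toy one-dimensional multimodal objective, not BIOMAP; all constants are illustrative):

```python
import math
import random

# Simulated annealing: accept worse candidates with probability
# exp(-delta/T) so the search can escape local optima; the temperature T
# is cooled slowly toward zero.
def objective(x):
    # Multimodal toy function; its global minimum lies near x ~ 1.6.
    return math.sin(3.0 * x) + 0.1 * (x - 2.0) ** 2

def anneal(lo=-5.0, hi=5.0, steps=20_000, seed=3):
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    best_x, best_f = x, objective(x)
    for i in range(steps):
        T = 1.0 * (1.0 - i / steps) + 1e-3          # linear cooling schedule
        cand = min(hi, max(lo, x + rng.gauss(0.0, 0.5)))
        delta = objective(cand) - objective(x)
        if delta < 0 or rng.random() < math.exp(-delta / T):
            x = cand
            if objective(x) < best_f:
                best_x, best_f = x, objective(x)
    return best_x, best_f

x_star, f_star = anneal()
print(f"best x = {x_star:.3f}, f = {f_star:.3f}")
```

In a DGVM calibration the objective would instead be a map-accuracy score of a full simulation run, and each candidate would be a vector of PFT parameters clipped to the literature-derived bounds.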
Measuring Accurate Body Parameters of Dressed Humans with Large-Scale Motion Using a Kinect Sensor
Directory of Open Access Journals (Sweden)
Sidan Du
2013-08-01
Full Text Available Non-contact human body measurement plays an important role in surveillance, physical healthcare, on-line business and virtual fitting. Current methods for measuring the human body without physical contact usually cannot handle humans wearing clothes, which limits their applicability in public environments. In this paper, we propose an effective solution that can measure accurate parameters of the human body with large-scale motion from a Kinect sensor, assuming that the people are wearing clothes. Because motion can drive clothes attached to the human body loosely or tightly, we adopt a space-time analysis to mine the information across the posture variations. Using this information, we recover the human body, regardless of the effect of clothes, and measure the human body parameters accurately. Experimental results show that our system can perform more accurate parameter estimation on the human body than state-of-the-art methods.
HL-LHC parameter space and scenarios
International Nuclear Information System (INIS)
Bruning, O.S.
2012-01-01
The HL-LHC project aims at a total integrated luminosity of approximately 3000 fb⁻¹ over the lifetime of the HL-LHC. Assuming an exploitation period of ca. 10 years, this goal implies an annual integrated luminosity of approximately 200 fb⁻¹ to 300 fb⁻¹ per year. This paper looks at potential beam parameters that are compatible with the HL-LHC performance goals and briefly discusses potential variations in the parameter space. It is shown that the design goal of the HL-LHC project can only be achieved with a full upgrade of the injector complex and operation with β* values close to 0.15 m. Significant margins for leveling can be achieved for β* values close to 0.15 m. However, these margins can only be harvested during HL-LHC operation if the required leveling techniques have been demonstrated in operation.
An open-source job management framework for parameter-space exploration: OACIS
Murase, Y.; Uchitane, T.; Ito, N.
2017-11-01
We present an open-source software framework for parameter-space exploration, named OACIS, which is useful for managing vast amounts of simulation jobs and results in a systematic way. Recent development of high-performance computers has enabled us to explore parameter spaces comprehensively; in such cases, however, manual management of the workflow is practically impossible. OACIS was developed to reduce the cost of these repetitive tasks when conducting simulations by automating job submissions and data management. In this article, an overview of OACIS as well as a getting-started guide are presented.
Shell model in large spaces and statistical spectroscopy
International Nuclear Information System (INIS)
Kota, V.K.B.
1996-01-01
For many nuclear structure problems of current interest it is essential to deal with the shell model in large spaces. Three different approaches are now in use for this; two of them are: (i) the conventional shell model diagonalization approach, taking into account new advances in computer technology; (ii) the shell model Monte Carlo method. A brief overview of these two methods is given. Large space shell model studies raise fundamental questions regarding the information content of the shell model spectrum of complex nuclei. This led to the third approach: the statistical spectroscopy methods. The principles of statistical spectroscopy have their basis in nuclear quantum chaos, and they are described in some detail (substantiated by large scale shell model calculations). (author)
International Nuclear Information System (INIS)
Sabati, M; Lauzon, M L; Frayne, R
2003-01-01
Data acquisition using a continuously moving table approach is a method capable of generating large field-of-view (FOV) 3D MR angiograms. However, in order to obtain venous contamination-free contrast-enhanced (CE) MR angiograms in the lower limbs, one of the major challenges is to acquire all necessary k-space data during the restricted arterial phase of the contrast agent. Preliminary investigation of the space-time relationship of continuously acquired peripheral angiography is performed in this work. Deterministic and stochastic undersampled hybrid-space (x, k_y, k_z) acquisitions are simulated for large FOV peripheral runoff studies. Initial results show the possibility of acquiring isotropic large FOV images of the entire peripheral vascular system. An optimal trade-off between the spatial and temporal sampling properties was found that produced a high-spatial-resolution peripheral CE-MR angiogram. The deterministic sampling pattern was capable of reconstructing the global structure of the peripheral arterial tree and showed slightly better global quantitative results than stochastic patterns. Optimal stochastic sampling patterns, on the other hand, enhanced small vessels and had more favourable local quantitative results. These simulations demonstrate the complex spatial-temporal relationship when sampling large FOV peripheral runoff studies. They also suggest that more investigation is required to maximize image quality as a function of hybrid-space coverage, acquisition repetition time and sampling pattern parameters.
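As a toy illustration of the two sampling families compared above, the sketch below builds a deterministic (regular lattice) and a stochastic (uniform random) undersampling mask over the (k_y, k_z) plane; real CE-MRA patterns additionally weight the k-space centre, which is omitted here:

```python
import numpy as np

def sampling_masks(ny, nz, fraction=0.25, seed=0):
    """Return deterministic and stochastic (ky, kz) undersampling masks
    that each keep roughly `fraction` of the phase-encode locations."""
    n_total = ny * nz
    n_keep = int(fraction * n_total)
    # Deterministic: keep every k-th phase-encode location in a regular lattice.
    det = np.zeros(n_total, dtype=bool)
    det[:: n_total // n_keep] = True
    det = det.reshape(ny, nz)
    # Stochastic: keep the same number of locations, drawn uniformly at random.
    rng = np.random.default_rng(seed)
    idx = rng.choice(n_total, size=n_keep, replace=False)
    sto = np.zeros(n_total, dtype=bool)
    sto[idx] = True
    sto = sto.reshape(ny, nz)
    return det, sto
```

Applying such masks to a fully sampled simulated acquisition is one simple way to compare reconstruction quality of the two pattern types.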
Physics parameter space of tokamak ignition devices
International Nuclear Information System (INIS)
Selcow, E.C.; Peng, Y.K.M.; Uckan, N.A.; Houlberg, W.A.
1985-01-01
This paper describes the results of a study to explore the physics parameter space of tokamak ignition experiments. A new physics systems code has been developed to perform the study. This code performs a global plasma analysis using steady-state, two-fluid, energy-transport models. In this paper, we discuss the models used in the code and their application to the analysis of compact ignition experiments. 8 refs., 8 figs., 1 tab
Definition of technology development missions for early space stations: Large space structures
Gates, R. M.; Reid, G.
1984-01-01
The objective studied is the definition of the testbed role of an early Space Station for the construction of large space structures. This is accomplished by defining the LSS technology development missions (TDMs) identified in phase 1. Design and operations trade studies are used to identify the best structural concepts and procedures for each TDM. Details of the TDM designs are then developed along with their operational requirements. Space Station resources required for each mission, both human and physical, are identified. The costs and development schedules for the TDMs provide an indication of the programs needed to develop these missions.
On the Consistency of Bootstrap Testing for a Parameter on the Boundary of the Parameter Space
DEFF Research Database (Denmark)
Cavaliere, Giuseppe; Nielsen, Heino Bohn; Rahbek, Anders
2017-01-01
It is well known that with a parameter on the boundary of the parameter space, such as in the classic cases of testing for a zero location parameter or no autoregressive conditional heteroskedasticity (ARCH) effects, the classic nonparametric bootstrap – based on unrestricted parameter estimates – leads to inconsistent testing. In contrast, we show here that for the two aforementioned cases, a nonparametric bootstrap test based on parameter estimates obtained under the null – referred to as ‘restricted bootstrap’ – is indeed consistent. While the restricted bootstrap is simple to implement in practice, novel theoretical arguments are required in order to establish consistency. In particular, since the bootstrap is analysed both under the null hypothesis and under the alternative, non-standard asymptotic expansions are required to deal with parameters on the boundary. Detailed proofs…
Research on Geometric Positioning Algorithm of License Plate in Multidimensional Parameter Space
Directory of Open Access Journals (Sweden)
Yinhua Huan
2014-05-01
Full Text Available Considering the features of commonly used vehicle license plate location methods, a new geometric location algorithm is proposed to search for a consistent location of the license plate feature in reference images across a multidimensional parameter space. The geometric location algorithm consists mainly of model training and real-time search; it not only adapts to linear and non-linear gray-scale changes, but also supports changes of scale and angle. Numerical results under the same test conditions show that, compared with mainstream locating software, the position deviation of the geometric positioning algorithm is less than 0.5 pixel without taking the multidimensional parameter space into account, and less than 1.0 pixel, with an angle deviation of less than 1.0 degree, when the multidimensional parameter space is taken into account. The algorithm is robust, simple, and practical, and outperforms the traditional method.
Geometry on the parameter space of the belief propagation algorithm on Bayesian networks
Energy Technology Data Exchange (ETDEWEB)
Watanabe, Yodai [National Institute of Informatics, Research Organization of Information and Systems, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430 (Japan); Laboratory for Mathematical Neuroscience, RIKEN Brain Science Institute, 2-1 Hirosawa, Wako-shi, Saitama 351-0198 (Japan)
2006-01-30
This Letter considers a geometrical structure on the parameter space of the belief propagation algorithm on Bayesian networks. The statistical manifold of posterior distributions is introduced, and the expression for the information metric on the manifold is derived. The expression is used to construct a cost function which can be regarded as a measure of the distance in the parameter space.
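For readers unfamiliar with the term, the information metric on a statistical manifold is the Fisher metric; in standard notation (not taken from the Letter itself),

```latex
g_{ij}(\theta) \;=\;
\mathbb{E}_{\theta}\!\left[
  \frac{\partial \log p(x;\theta)}{\partial \theta^{i}}\,
  \frac{\partial \log p(x;\theta)}{\partial \theta^{j}}
\right],
```

where p(x; θ) is the parametrized family of distributions; the Letter derives the corresponding expression for the manifold of posterior distributions on a Bayesian network and uses it to define the distance measure mentioned above.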
Atmospheric stellar parameters for large surveys using FASMA, a new spectral synthesis package
Tsantaki, M.; Andreasen, D. T.; Teixeira, G. D. C.; Sousa, S. G.; Santos, N. C.; Delgado-Mena, E.; Bruzual, G.
2018-02-01
In the era of vast spectroscopic surveys focusing on Galactic stellar populations, astronomers want to exploit the large quantity and good quality of data to derive their atmospheric parameters without losing precision from automatic procedures. In this work, we developed a new spectral package, FASMA, to estimate the stellar atmospheric parameters (namely effective temperature, surface gravity and metallicity) in a fast and robust way. This method is suitable for spectra of FGK-type stars at medium and high resolution. The spectroscopic analysis is based on the spectral synthesis technique using the radiative transfer code MOOG. The line list is comprised mainly of iron lines in the optical spectrum. The atomic data are calibrated using the Sun and Arcturus. We use two comparison samples to test our method: (i) a sample of 451 FGK-type dwarfs from the high-resolution HARPS spectrograph; and (ii) the Gaia-ESO benchmark stars, using both high- and medium-resolution spectra. We explore biases in our method from the analysis of synthetic spectra covering the parameter space of our interest. We show that our spectral package is able to provide reliable results for a wide range of stellar parameters, different rotational velocities, different instrumental resolutions and different spectral regions of the VLT-GIRAFFE spectrographs, used amongst others for the Gaia-ESO survey. FASMA estimates stellar parameters in less than 15 minutes for high-resolution and 3 minutes for medium-resolution spectra. The complete package is publicly available to the community.
Parameter choice in Banach space regularization under variational inequalities
International Nuclear Information System (INIS)
Hofmann, Bernd; Mathé, Peter
2012-01-01
The authors study parameter choice strategies for the Tikhonov regularization of nonlinear ill-posed problems in Banach spaces. The effectiveness of any parameter choice for obtaining convergence rates depends on the interplay of the solution smoothness and the nonlinearity structure, and it can be expressed concisely in terms of variational inequalities. Such inequalities are link conditions between the penalty term, the norm misfit and the corresponding error measure. The parameter choices under consideration include an a priori choice, the discrepancy principle as well as the Lepskii principle. For the convenience of the reader, the authors review in an appendix a few instances where the validity of a variational inequality can be established. (paper)
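The paper above works in Banach spaces with general error measures; as a concrete classical special case of one of the strategies it reviews, here is the discrepancy principle for Tikhonov-regularized least squares in the Hilbert-space setting (τ = 1.1 and the logarithmic grid are illustrative choices, not the paper's):

```python
import numpy as np

def discrepancy_principle(A, y, delta, tau=1.1, alphas=None):
    """Pick the largest Tikhonov parameter whose residual satisfies
    ||A x_alpha - y|| <= tau * delta, where delta bounds the data noise."""
    if alphas is None:
        alphas = np.logspace(2, -8, 60)  # scan from strong to weak regularization
    n = A.shape[1]
    for alpha in alphas:  # largest alpha first
        # Tikhonov solution: minimize ||A x - y||^2 + alpha ||x||^2.
        x = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)
        if np.linalg.norm(A @ x - y) <= tau * delta:
            return alpha, x
    return alphas[-1], x
```

Choosing the largest admissible α keeps as much regularization (stability) as the noise level allows, which is the heuristic behind Morozov's principle.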
Application of parameters space analysis tools for empirical model validation
Energy Technology Data Exchange (ETDEWEB)
Paloma del Barrio, E. [LEPT-ENSAM UMR 8508, Talence (France); Guyon, G. [Electricite de France, Moret-sur-Loing (France)
2004-01-01
A new methodology for empirical model validation has been proposed in the framework of Task 22 (Building Energy Analysis Tools) of the International Energy Agency. It involves two main steps: checking model validity and diagnosis. Both steps, as well as the underlying methods, have been presented in the first part of the paper. In this part, they are applied to testing modelling hypotheses in the framework of the thermal analysis of an actual building. Sensitivity analysis tools were first used to identify the parts of the model that can really be tested on the available data. A preliminary diagnosis is then supplied by principal components analysis. Useful information for model behaviour improvement was finally obtained by optimisation techniques. This example of application shows how model parameter space analysis is a powerful tool for empirical validation. In particular, diagnosis possibilities are largely increased in comparison with residuals analysis techniques. (author)
Frequentist analysis of the parameter space of minimal supergravity
Energy Technology Data Exchange (ETDEWEB)
Buchmueller, O.; Colling, D. [Imperial College, London (United Kingdom). High Energy Physics Group; Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); Illinois Univ., Chicago, IL (US). Physics Dept.] (and others)
2010-12-15
We make a frequentist analysis of the parameter space of minimal supergravity (mSUGRA), in which, as well as the gaugino and scalar soft supersymmetry-breaking parameters being universal, there is a specific relation between the trilinear, bilinear and scalar supersymmetry-breaking parameters, A_0 = B_0 + m_0, and the gravitino mass is fixed by m_3/2 = m_0. We also consider a more general model, in which the gravitino mass constraint is relaxed (the VCMSSM). We combine in the global likelihood function the experimental constraints from low-energy electroweak precision data, the anomalous magnetic moment of the muon, the lightest Higgs boson mass M_h, B physics and the astrophysical cold dark matter density, assuming that the lightest supersymmetric particle (LSP) is a neutralino. In the VCMSSM, we find a preference for values of m_1/2 and m_0 similar to those found previously in frequentist analyses of the constrained MSSM (CMSSM) and a model with common non-universal Higgs masses (NUHM1). On the other hand, in mSUGRA we find two preferred regions: one with larger values of both m_1/2 and m_0 than in the VCMSSM, and one with large m_0 but small m_1/2. We compare the probabilities of the frequentist fits in mSUGRA, the VCMSSM, the CMSSM and the NUHM1: the probability that mSUGRA is consistent with the present data is significantly less than in the other models. We also discuss the mSUGRA and VCMSSM predictions for sparticle masses and other observables, identifying potential signatures at the LHC and elsewhere. (orig.)
Vertical integration from the large Hilbert space
Erler, Theodore; Konopka, Sebastian
2017-12-01
We develop an alternative description of the procedure of vertical integration based on the observation that amplitudes can be written in BRST exact form in the large Hilbert space. We relate this approach to the description of vertical integration given by Sen and Witten.
Tuning a space-time scalable PI controller using thermal parameters
Energy Technology Data Exchange (ETDEWEB)
Riverol, C. [University of West Indies, Chemical Engineering Department, St. Augustine, Trinidad (Trinidad and Tobago); Pilipovik, M.V. [Armach Engineers, Urb. Los Palos Grandes, Project Engineering Department, Caracas (Venezuela)
2005-03-01
The paper outlines the successful empirical design and validation of a space-time PI controller based on a study of the controlled variable output as a function of time and space. The developed controller was implemented on two heat exchanger systems (a falling film evaporator and a milk pasteurizer). The strategy required adding a new term to the classical PI controller, so that one new parameter must be tuned. Measurements made on commercial installations have confirmed the validity of the new controller. (orig.)
Large parameter cases of the Gauss hypergeometric function
N.M. Temme (Nico)
2002-01-01
We consider the asymptotic behaviour of the Gauss hypergeometric function when several of the parameters a, b, c are large. We indicate which cases are of interest for orthogonal polynomials (Jacobi, but also Meixner, Krawtchouk, etc.), which results are already available and
Nuclear spectroscopy in large shell model spaces: recent advances
International Nuclear Information System (INIS)
Kota, V.K.B.
1995-01-01
Three different approaches are now available for carrying out nuclear spectroscopy studies in large shell model spaces and they are: (i) the conventional shell model diagonalization approach but taking into account new advances in computer technology; (ii) the recently introduced Monte Carlo method for the shell model; (iii) the spectral averaging theory, based on central limit theorems, in indefinitely large shell model spaces. The various principles, recent applications and possibilities of these three methods are described and the similarity between the Monte Carlo method and the spectral averaging theory is emphasized. (author). 28 refs., 1 fig., 5 tabs
Nonterrestrial material processing and manufacturing of large space systems
Von Tiesenhausen, G.
1979-01-01
Nonterrestrial processing of materials and manufacturing of large space system components from preprocessed lunar materials at a manufacturing site in space is described. Lunar materials mined and preprocessed at the lunar resource complex will be flown to the space manufacturing facility (SMF), where, together with supplementary terrestrial materials, they will undergo final processing and fabrication into space communication systems, solar cell blankets, radio frequency generators, and electrical equipment. Satellite Power System (SPS) material requirements and lunar material availability and utilization are detailed, and the SMF processing, refining, and fabricating facilities, material flow, and manpower requirements are described.
Lagrangian space consistency relation for large scale structure
International Nuclear Information System (INIS)
Horn, Bart; Hui, Lam; Xiao, Xiao
2015-01-01
Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias and Riotto and Peloso and Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. The simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space
On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.
Yamazaki, Keisuke
2012-07-01
Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as in model selection, is still time-consuming even though there are effective algorithms based on dynamic programming. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique seen in feature selection and dimension reduction, though an oversimplified space causes adverse learning results. Therefore, we mathematically investigate a condition on the feature map to have an asymptotically equivalent convergence point of estimated parameters, referred to as the vicarious map. As a demonstration of finding vicarious maps, we consider the feature space which limits the length of data, and derive a necessary length for parameter learning in hidden Markov models. Copyright © 2012 Elsevier Ltd. All rights reserved.
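As a concrete instance of the likelihood computations whose cost motivates the work above, here is the standard scaled forward-algorithm evaluation of a discrete HMM likelihood, with an optional sequence-length truncation mirroring the length-limiting feature map studied in the paper (the model matrices in the usage are illustrative):

```python
import numpy as np

def hmm_loglik(obs, pi, A, B, max_len=None):
    """Scaled forward algorithm: log p(obs[:max_len]) under a discrete HMM with
    initial distribution pi, transition matrix A, and emission matrix B."""
    if max_len is not None:
        obs = obs[:max_len]  # the length-limiting feature map
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    alpha /= c
    loglik = np.log(c)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        alpha /= c  # rescale to avoid numerical underflow
        loglik += np.log(c)
    return loglik
```

Truncating to `max_len` reduces the forward recursion from O(T K²) to O(max_len · K²) per sequence; the paper's contribution is characterizing when such a truncation still yields the same parameter estimates asymptotically.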
Determination of charged particle beam parameters with taking into account of space charge
International Nuclear Information System (INIS)
Ishkhanov, B.S.; Poseryaev, A.V.; Shvedunov, V.I.
2005-01-01
A procedure is described for determining the basic parameters of a paraxial, axially symmetric beam of charged particles, taking the space-charge contribution into account. The procedure is based on application of the general equation for the beam envelope. Data on its convergence and robustness to measurement errors are presented. The error in determining the position of the crossover (stretching) and the beam radius at the crossover is at most 15%, while the emittance determination error depends on the correlation between emittance and space charge. The procedure was used to determine the parameters of the available electron gun's 20 keV, 0.64 A beam. The results were found to agree closely with the design parameters. [ru]
SparseLeap: Efficient Empty Space Skipping for Large-Scale Volume Rendering
Hadwiger, Markus; Al-Awami, Ali K.; Beyer, Johanna; Agus, Marco; Pfister, Hanspeter
2017-01-01
Recent advances in data acquisition produce volume data of very high resolution and large size, such as terabyte-sized microscopy volumes. These data often contain many fine and intricate structures, which pose huge challenges for volume rendering, and make it particularly important to efficiently skip empty space. This paper addresses two major challenges: (1) The complexity of large volumes containing fine structures often leads to highly fragmented space subdivisions that make empty regions hard to skip efficiently. (2) The classification of space into empty and non-empty regions changes frequently, because the user or the evaluation of an interactive query activate a different set of objects, which makes it unfeasible to pre-compute a well-adapted space subdivision. We describe the novel SparseLeap method for efficient empty space skipping in very large volumes, even around fine structures. The main performance characteristic of SparseLeap is that it moves the major cost of empty space skipping out of the ray-casting stage. We achieve this via a hybrid strategy that balances the computational load between determining empty ray segments in a rasterization (object-order) stage, and sampling non-empty volume data in the ray-casting (image-order) stage. Before ray-casting, we exploit the fast hardware rasterization of GPUs to create a ray segment list for each pixel, which identifies non-empty regions along the ray. The ray-casting stage then leaps over empty space without hierarchy traversal. Ray segment lists are created by rasterizing a set of fine-grained, view-independent bounding boxes. Frame coherence is exploited by re-using the same bounding boxes unless the set of active objects changes. We show that SparseLeap scales better to large, sparse data than standard octree empty space skipping.
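SparseLeap builds its per-pixel ray segment lists on the GPU by rasterizing bounding boxes; the CPU sketch below illustrates only the underlying idea of merging a ray's per-brick occupancy into the segments the caster actually samples, leaping over the empty runs in between:

```python
def ray_segments(occupancy):
    """Merge consecutive non-empty bricks along a ray into half-open
    (start, end) sample segments; empty runs between them are skipped."""
    segments, start = [], None
    for i, occ in enumerate(occupancy):
        if occ and start is None:
            start = i          # a non-empty run begins
        elif not occ and start is not None:
            segments.append((start, i))  # run ended at brick i
            start = None
    if start is not None:      # close a run that reaches the ray's end
        segments.append((start, len(occupancy)))
    return segments
```

Because the segment list is computed before ray casting, the per-sample inner loop needs no hierarchy traversal, which is the performance point made in the abstract.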
Large signal S-parameters: modeling and radiation effects in microwave power transistors
International Nuclear Information System (INIS)
Graham, E.D. Jr.; Chaffin, R.J.; Gwyn, C.W.
1973-01-01
Microwave power transistors are usually characterized by measuring the source and load impedances, efficiency, and power output at a specified frequency and bias condition in a tuned circuit. These measurements provide limited data for circuit design and yield essentially no information concerning broadbanding possibilities. Recently, a method using large signal S-parameters has been developed which provides a rapid and repeatable means for measuring microwave power transistor parameters. These large signal S-parameters have been successfully used to design rf power amplifiers. Attempts at modeling rf power transistors have in the past been restricted to a modified Ebers-Moll procedure with numerous adjustable model parameters. The modified Ebers-Moll model is further complicated by inclusion of package parasitics. In the present paper an exact one-dimensional device analysis code has been used to model the performance of the transistor chip. This code has been integrated into the SCEPTRE circuit analysis code such that chip, package and circuit performance can be coupled together in the analysis. Using this computational tool, rf transistor performance has been examined with particular attention given to the theoretical validity of large-signal S-parameters and the effects of nuclear radiation on device parameters. (auth)
An Integrated Approach to Parameter Learning in Infinite-Dimensional Space
Energy Technology Data Exchange (ETDEWEB)
Boyd, Zachary M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Wendelberger, Joanne Roth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-09-14
The availability of sophisticated modern physics codes has greatly extended the ability of domain scientists to understand the processes underlying their observations of complicated processes, but it has also introduced the curse of dimensionality via the many user-set parameters available to tune. Many of these parameters are naturally expressed as functional data, such as initial temperature distributions, equations of state, and controls. Thus, when attempting to find parameters that match observed data, being able to navigate parameter-space becomes highly non-trivial, especially considering that accurate simulations can be expensive both in terms of time and money. Existing solutions include batch-parallel simulations, high-dimensional, derivative-free optimization, and expert guessing, all of which make some contribution to solving the problem but do not completely resolve the issue. In this work, we explore the possibility of coupling together all three of the techniques just described by designing user-guided, batch-parallel optimization schemes. Our motivating example is a neutron diffusion partial differential equation where the time-varying multiplication factor serves as the unknown control parameter to be learned. We find that a simple, batch-parallelizable, random-walk scheme is able to make some progress on the problem but does not by itself produce satisfactory results. After reducing the dimensionality of the problem using functional principal component analysis (fPCA), we are able to track the progress of the solver in a visually simple way as well as viewing the associated principal components. This allows a human to make reasonable guesses about which points in the state space the random walker should try next. Thus, by combining the random walker's ability to find descent directions with the human's understanding of the underlying physics, it is possible to use expensive simulations more efficiently and more quickly arrive at the
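The LANL scheme couples user guidance with the solver; the sketch below shows only the batch-parallelizable random-walk core plus an SVD-based principal-component projection of the walk for visualization (the cost function, step size, and dimensions are stand-ins, and functional PCA is reduced here to ordinary PCA on the visited points):

```python
import numpy as np

def batch_random_walk(cost, x0, n_iters=100, batch=8, step=0.3, seed=0):
    """Batch-parallel random-walk descent: propose `batch` perturbations per
    iteration (each simulable independently) and keep the best one."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    history = [x.copy()]
    for _ in range(n_iters):
        proposals = x + step * rng.standard_normal((batch, x.size))
        costs = [cost(p) for p in proposals]  # embarrassingly parallel in practice
        best = proposals[int(np.argmin(costs))]
        if cost(best) < cost(x):
            x = best
        history.append(x.copy())
    # Project the walk onto its two leading principal components for inspection,
    # a stand-in for the fPCA view described in the abstract.
    H = np.array(history) - np.mean(history, axis=0)
    _, _, vt = np.linalg.svd(H, full_matrices=False)
    return x, H @ vt[:2].T
```

In the user-guided variant, a human inspects the 2-component projection and restarts the walker from promising regions rather than leaving it to wander.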
The magnetically driven imploding liner parameter space of the ATLAS capacitor bank
Lindemuth, I R; Faehl, R J; Reinovsky, R E
2001-01-01
Summary form only given, as follows. The Atlas capacitor bank (23 MJ, 30 MA) is now operational at Los Alamos. Atlas was designed primarily to magnetically drive imploding liners for use as impactors in shock and hydrodynamic experiments. We have conducted a computational "mapping" of the high-performance imploding liner parameter space accessible to Atlas. The effects of charge voltage, transmission inductance, liner thickness, initial liner radius, and liner length have been investigated. One conclusion is that Atlas is ideally suited to be a liner driver for liner-on-plasma experiments in a magnetized target fusion (MTF) context. The parameter space of possible Atlas reconfigurations has also been investigated.
Data Mining for Efficient and Accurate Large Scale Retrieval of Geophysical Parameters
Obradovic, Z.; Vucetic, S.; Peng, K.; Han, B.
2004-12-01
Our effort is devoted to developing data mining technology for improving the efficiency and accuracy of geophysical parameter retrievals by learning a mapping from observation attributes to the corresponding parameters within the framework of classification and regression. We will describe a method for efficient learning of neural network-based classification and regression models from high-volume data streams. The proposed procedure automatically learns a series of neural networks of different complexities on smaller data stream chunks and then properly combines them into an ensemble predictor through averaging. Based on the idea of progressive sampling, the proposed approach starts with a very simple network trained on a very small chunk and then gradually increases the model complexity and the chunk size until the learning performance no longer improves. Our empirical study on aerosol retrievals from data obtained with the MISR instrument mounted on the Terra satellite suggests that the proposed method is successful in learning complex concepts from large data streams with near-optimal computational effort. We will also report on a method that complements deterministic retrievals by constructing accurate predictive algorithms and applying them to appropriately selected subsets of observed data. The method is based on developing more accurate predictors aimed at capturing global and local properties synthesized in a region. The procedure starts by learning the global properties of data sampled over the entire space, and continues by constructing specialized models on selected localized regions. The global and local models are integrated through an automated procedure that determines the optimal trade-off between the two components with the objective of minimizing the overall mean square error over a specific region. Our experimental results on MISR data showed that the combined model can increase the retrieval accuracy significantly. The preliminary results on various
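The progressive-sampling idea above (grow model complexity and chunk size together until the learning curve flattens, then average the retained models into an ensemble) can be sketched with polynomial regressors standing in for the neural networks; all thresholds and sizes are illustrative:

```python
import numpy as np

def progressive_fit(x, y, max_degree=8, tol=1e-3):
    """Grow model complexity (polynomial degree, a stand-in for network size)
    and training-chunk size until validation error stops improving, then
    average the retained models into an ensemble predictor."""
    n = len(x)
    split = int(0.8 * n)           # hold out the tail for validation
    kept, prev_err = [], np.inf
    chunk = max(8, n // 16)
    degree = 1
    while degree <= max_degree and chunk <= split:
        coeffs = np.polyfit(x[:chunk], y[:chunk], degree)
        err = np.mean((np.polyval(coeffs, x[split:]) - y[split:]) ** 2)
        if prev_err - err < tol:   # learning curve has flattened: stop growing
            break
        kept.append(coeffs)
        prev_err = err
        degree += 1
        chunk = min(split, chunk * 2)
    # Ensemble prediction: average the retained members' outputs.
    return lambda t: np.mean([np.polyval(c, t) for c in kept], axis=0)
```

The stopping rule is the progressive-sampling element: training cost stays low because most of the data is only touched once the simpler, cheaper models have demonstrably stopped improving.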
Efficiently enclosing the compact binary parameter space by singular-value decomposition
International Nuclear Information System (INIS)
Cannon, Kipp; Hanna, Chad; Keppel, Drew
2011-01-01
Gravitational-wave searches for the merger of compact binaries use matched filtering as the method of detecting signals and estimating parameters. Such searches construct a fine mesh of filters covering a signal parameter space at high density. Previously it has been shown that singular-value decomposition can reduce the effective number of filters required to search the data. Here we study how the basis provided by the singular-value decomposition changes dimension as a function of template-bank density. We will demonstrate that it is sufficient to use the basis provided by the singular-value decomposition of a low-density bank to accurately reconstruct arbitrary points within the boundaries of the template bank. Since this technique is purely numerical, it may have applications to interpolating the space of numerical relativity waveforms.
Characteristics and prediction of sound level in extra-large spaces
Wang, C.; Ma, H.; Wu, Y.; Kang, J.
2018-01-01
This paper aims to examine sound fields in extra-large spaces, which are defined in this paper as spaces used by people, with a volume approximately larger than 125,000 m3 and an absorption coefficient less than 0.7. In such spaces, inhomogeneous reverberant energy caused by uneven early reflections with increasing volume has a significant effect on sound fields. Measurements were conducted in four spaces to examine the attenuation of the total and reverberant energy with increasing source-receiv...
Precision Optical Coatings for Large Space Telescope Mirrors
Sheikh, David
This proposal “Precision Optical Coatings for Large Space Telescope Mirrors” addresses the need to develop and advance the state-of-the-art in optical coating technology. NASA is considering large monolithic mirrors 1 to 8 meters in diameter for future telescopes such as HabEx and LUVOIR. Improved large-area coating processes are needed to meet the future requirements of large astronomical mirrors. In this project, we will demonstrate a broadband reflective coating process for achieving high reflectivity from 90-nm to 2500-nm over a 2.3-meter diameter coating area. The coating process is scalable to larger mirrors, 6+ meters in diameter. We will use a battery-driven coating process to make an aluminum reflector, and a motion-controlled coating technology for depositing protective layers. We will advance the state-of-the-art for coating technology and manufacturing infrastructure, to meet the reflectance and wavefront requirements of both HabEx and LUVOIR. Specifically, we will combine the broadband reflective coating designs and processes developed at GSFC and JPL with large-area manufacturing technologies developed at ZeCoat Corporation. Our primary objectives are to: (1) demonstrate an aluminum coating process that creates uniform coatings over large areas with near-theoretical aluminum reflectance; (2) demonstrate a motion-controlled coating process for applying very precise 2-nm to 5-nm thick protective/interference layers to large areas; and (3) demonstrate a broadband coating system (90-nm to 2500-nm) over a 2.3-meter coating area and test it against the current coating specifications for LUVOIR/HabEx. We will perform simulated space-environment testing, and we expect to advance the TRL from 3 to >5 in 3 years.
Large space antenna concepts for ESGP
Love, Allan W.
1989-01-01
It is appropriate to note that 1988 marks the 100th anniversary of the birth of the reflector antenna. It was in 1888 that Heinrich Hertz constructed the first one, a parabolic cylinder made of sheet zinc bent to shape and supported by a wooden frame. Hertz demonstrated the existence of the electromagnetic waves that had been predicted theoretically by James Clerk Maxwell some 22 years earlier. In the 100 years since Hertz's pioneering work, the field of electromagnetics has grown explosively; one of its technologies is the remote sensing of planet Earth by means of electromagnetic waves, using both passive and active sensors located on an Earth Science Geostationary Platform (ESGP). For these purposes some exquisitely sensitive instruments were developed, capable of reaching to the fringes of the known universe, and relying on large reflector antennas to collect the minute signals and direct them to appropriate receiving devices. These antennas are electrically large, with diameters of 3000 to 10,000 wavelengths and with gains approaching 80 to 90 dB. Some of the reflector antennas proposed for ESGP are also electrically large. For example, at 220 GHz a 4-meter reflector is nearly 3000 wavelengths in diameter, and is electrically quite comparable with a number of the millimeter wave radiotelescopes that are being built around the world. Its surface must meet stringent requirements on rms smoothness, and ability to resist deformation. Here, however, the environmental forces at work are different. There are no varying forces due to wind and gravity, but inertial forces due to mechanical scanning must be reckoned with. With this form of beam scanning, minimizing momentum transfer to the space platform is a problem that demands an answer. Finally, reflector surface distortion due to thermal gradients caused by the solar flux probably represents the most challenging problem to be solved if these Large Space Antennas are to achieve the gain and resolution required of
Modeling, Analysis, and Optimization Issues for Large Space Structures
Pinson, L. D. (Compiler); Amos, A. K. (Compiler); Venkayya, V. B. (Compiler)
1983-01-01
Topics concerning the modeling, analysis, and optimization of large space structures are discussed including structure-control interaction, structural and structural dynamics modeling, thermal analysis, testing, and design.
Recovering a Probabilistic Knowledge Structure by Constraining Its Parameter Space
Stefanutti, Luca; Robusto, Egidio
2009-01-01
In the Basic Local Independence Model (BLIM) of Doignon and Falmagne ("Knowledge Spaces," Springer, Berlin, 1999), the probabilistic relationship between the latent knowledge states and the observable response patterns is established by the introduction of a pair of parameters for each of the problems: a lucky guess probability and a careless…
Large Deployable Reflector (LDR) Requirements for Space Station Accommodations
Crowe, D. A.; Clayton, M. J.; Runge, F. C.
1985-01-01
Top level requirements for assembly and integration of the Large Deployable Reflector (LDR) Observatory at the Space Station are examined. Concepts are currently under study for LDR which will provide a sequel to the Infrared Astronomy Satellite and the Space Infrared Telescope Facility. LDR will provide a spectacular capability over a very broad spectral range. The Space Station will provide an essential facility for the initial assembly and check out of LDR, as well as a necessary base for refurbishment, repair and modification. By providing a manned platform, the Space Station will remove the time constraint on assembly associated with use of the Shuttle alone. Personnel safety during necessary EVA is enhanced by the presence of the manned facility.
Saleem, M.; Resmi, L.; Misra, Kuntal; Pai, Archana; Arun, K. G.
2018-03-01
Short duration Gamma Ray Bursts (SGRBs) and their afterglows are among the most promising electromagnetic (EM) counterparts of binary neutron star (BNS) mergers. The afterglow emission is broad-band, visible across the entire electromagnetic window from γ-ray to radio frequencies. The flux evolution in these frequencies is sensitive to the multidimensional afterglow physical parameter space. Observations of gravitational waves (GW) from BNS mergers in spatial and temporal coincidence with SGRBs and associated afterglows can provide valuable constraints on afterglow physics. We run simulations of GW-detected BNS events and, assuming that all of them are associated with a GRB jet which also produces an afterglow, investigate how detections or non-detections in X-ray, optical, and radio frequencies are influenced by the parameter space. We narrow down the regions of afterglow parameter space for a uniform top-hat jet model which would result in different detection scenarios. We list inferences which can be drawn on the physics of GRB afterglows from multimessenger astronomy with coincident GW-EM observations.
Automated Modal Parameter Estimation for Operational Modal Analysis of Large Systems
DEFF Research Database (Denmark)
Andersen, Palle; Brincker, Rune; Goursat, Maurice
2007-01-01
In this paper, the problems of automating modal parameter extraction and of handling the large amounts of data to be processed are considered. Two different approaches for obtaining the modal parameters automatically using OMA are presented: the Frequency Domain Decomposition (FDD) technique and...
International Nuclear Information System (INIS)
Lell, R.M.; Hanan, N.A.
1987-01-01
Effects of multigroup neutron cross section generation procedures on core physics parameters for compact fast spectrum reactors have been examined. Homogeneous and space-dependent multigroup cross section sets were generated in 11 and 27 groups for a representative fast reactor core. These cross sections were used to compute various reactor physics parameters for the reference core. Coarse group structure and neglect of space-dependence in the generation procedure resulted in inaccurate computations of reactor flux and power distributions and in significant errors regarding estimates of core reactivity and control system worth. Delayed neutron fraction was insensitive to cross section treatment, and computed reactivity coefficients were only slightly sensitive. However, neutron lifetime was found to be very sensitive to cross section treatment. Deficiencies in multigroup cross sections are reflected in core nuclear design and, consequently, in system mechanical design.
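The sensitivity to group structure comes from the flux-weighted collapse used to generate coarse-group constants: a coarse group preserves reaction rates only for the spectrum it was collapsed with. A minimal sketch of the collapse, with invented cross sections and fluxes:

```python
# fine-group cross sections (barns) and weighting flux (arbitrary units);
# the numbers are illustrative, not from the reference core
sigma_fine = [1.2, 1.5, 2.0, 3.5, 5.0, 8.0]
flux_fine  = [4.0, 3.0, 2.0, 1.0, 0.5, 0.2]

# coarse-group structure: indices of fine groups collapsed into each coarse group
coarse_groups = [[0, 1, 2], [3, 4, 5]]

def collapse(sigma, flux, groups):
    """Flux-weighted collapse: sigma_G = sum(sigma_g*phi_g) / sum(phi_g)."""
    out = []
    for g in groups:
        num = sum(sigma[i] * flux[i] for i in g)
        den = sum(flux[i] for i in g)
        out.append(num / den)
    return out

sigma_coarse = collapse(sigma_fine, flux_fine, coarse_groups)
```

A different weighting spectrum yields different coarse constants, which is why a coarse structure chosen without regard to the local spectrum distorts flux, power, and reactivity estimates.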
LAMOST DR1: Stellar Parameters and Chemical Abundances with SP_Ace
Boeche, C.; Smith, M. C.; Grebel, E. K.; Zhong, J.; Hou, J. L.; Chen, L.; Stello, D.
2018-04-01
We present a new analysis of the LAMOST DR1 survey spectral database performed with the code SP_Ace, which provides the derived stellar parameters T_eff, log g, [Fe/H], and [α/H] for 1,097,231 stellar objects. We tested the reliability of our results by comparing them to reference results from high spectral resolution surveys. The expected errors can be summarized as ∼120 K in T_eff, ∼0.2 dex in log g, ∼0.15 dex in [Fe/H], and ∼0.1 dex in [α/Fe] for spectra with S/N > 40, with some differences between dwarf and giant stars. SP_Ace provides error estimations consistent with the discrepancies observed between derived and reference parameters. Some systematic errors are identified and discussed. The resulting catalog is publicly available at the LAMOST and CDS websites.
B→τν: Opening up the charged Higgs parameter space with R-parity violation
International Nuclear Information System (INIS)
Bose, Roshni; Kundu, Anirban
2012-01-01
The theoretically clean channel B⁺ → τ⁺ν shows a close to 3σ discrepancy between the Standard Model prediction and the data. This in turn puts a strong constraint on the parameter space of a two-Higgs doublet model, including R-parity conserving supersymmetry. The constraint is so strong that it almost smells of fine-tuning. We show how the parameter space opens up with the introduction of suitable R-parity violating interactions, and release the tension between data and theory.
Entropy considerations in constraining the mSUGRA parameter space
International Nuclear Information System (INIS)
Nunez, Dario; Sussman, Roberto A.; Zavala, Jesus; Nellen, Lukas; Cabral-Rosetti, Luis G.; Mondragon, Myriam
2006-01-01
We explore the use of two criteria to constrain the allowed parameter space in mSUGRA models. Both criteria are based on the calculation of the present density of neutralinos as dark matter in the Universe. The first one is the usual "abundance" criterion, which is used to calculate the relic density after the "freeze-out" era. To compute the relic density we used the public numerical code micrOMEGAs. The second criterion applies the microcanonical definition of entropy to a weakly interacting and self-gravitating gas, evaluating the change in the entropy per particle of this gas between the "freeze-out" era and present-day virialized structures (i.e., systems in virial equilibrium). An "entropy-consistency" criterion emerges by comparing theoretical and empirical estimates of this entropy. The main objective of our work is to determine for which regions of the parameter space of the mSUGRA model both criteria are consistent with the 2σ bounds from WMAP on the relic density: 0.0945 < Ω_CDM h² < 0.1287. As a first result, we found that for A0 = 0 and sgn(μ) = +, small values of tanβ are not favored; only for tanβ ≅ 50 are both criteria significantly consistent.
Virtual walks in spin space: A study in a family of two-parameter models
Mullick, Pratik; Sen, Parongama
2018-05-01
We investigate the dynamics of classical spins mapped as walkers in a virtual "spin" space using a generalized two-parameter family of spin models characterized by parameters y and z [de Oliveira et al., J. Phys. A 26, 2317 (1993), 10.1088/0305-4470/26/10/006]. The behavior of S(x,t), the probability that the walker is at position x at time t, is studied in detail. In general, S(x,t) ~ t^(-α) f(x/t^α), with α ≃ 1 or 0.5 at large times depending on the parameters. In particular, S(x,t) for the point y = 1, z = 0.5, corresponding to the Voter model, shows a crossover in time; associated with this crossover, two timescales can be defined which vary with the system size L as L² log L. We also show that as the Voter model point is approached from the disordered regions along different directions, the width of the Gaussian distribution S(x,t) diverges in a power-law manner with different exponents. For the majority Voter case, the results indicate that the virtual walk can detect the phase transition perhaps more efficiently than other nonequilibrium methods.
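For the diffusive (α ≃ 0.5) regime, the scaling of the width of S(x,t) can be checked with a plain ±1 random walk. This is an illustrative toy, not the spin-model dynamics itself; walker counts and times are arbitrary choices.

```python
import random
import statistics

random.seed(1)

def walker_positions(n_walkers, t):
    # simple +/-1 virtual walk; a diffusive walker ensemble, so the width of
    # the position distribution S(x, t) should grow like t**0.5
    pos = [0] * n_walkers
    for _ in range(t):
        for i in range(n_walkers):
            pos[i] += random.choice((-1, 1))
    return pos

# width of S(x, t) at two times; t quadruples, so the width should double
w1 = statistics.pstdev(walker_positions(2000, 100))
w2 = statistics.pstdev(walker_positions(2000, 400))
ratio = w2 / w1          # expected near (400/100)**0.5 = 2
```

The α ≃ 1 (ballistic) regime would instead give a ratio near 4; measuring this exponent is how the collapse S(x,t) ~ t^(-α) f(x/t^α) is diagnosed numerically.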
Legal Parameters of Space Tourism
Smith, Lesley Jane; Hörl, Kay-Uwe
2004-01-01
The commercial concept of space tourism raises important legal issues not specifically addressed by first generation rules of international space law. The principles established in the nineteen sixties and seventies were inspired by the philosophy that exploration of space was undertaken by and for the benefit of mankind. Technical developments since then have increased the potential for new space applications, with a corresponding increase in commercial interest in space. If space tourism is t...
Krenn, Julia; Zangerl, Christian; Mergili, Martin
2017-04-01
r.randomwalk is a GIS-based, multi-functional, conceptual open source model application for forward and backward analyses of the propagation of mass flows. It relies on a set of empirically derived, uncertain input parameters. In contrast to many other tools, r.randomwalk accepts input parameter ranges (or, in case of two or more parameters, spaces) in order to directly account for these uncertainties. Parameter spaces represent a possibility to withdraw from discrete input values which in most cases are likely to be off target. r.randomwalk automatically performs multiple calculations with various parameter combinations in a given parameter space, resulting in the impact indicator index (III) which denotes the fraction of parameter value combinations predicting an impact on a given pixel. Still, there is a need to constrain the parameter space used for a certain process type or magnitude prior to performing forward calculations. This can be done by optimizing the parameter space in terms of bringing the model results in line with well-documented past events. As most existing parameter optimization algorithms are designed for discrete values rather than for ranges or spaces, the necessity for a new and innovative technique arises. The present study aims at developing such a technique and at applying it to derive guiding parameter spaces for the forward calculation of rock avalanches through back-calculation of multiple events. In order to automatize the work flow we have designed r.ranger, an optimization and sensitivity analysis tool for parameter spaces which can be directly coupled to r.randomwalk. With r.ranger we apply a nested approach where the total value range of each parameter is divided into various levels of subranges. All possible combinations of subranges of all parameters are tested for the performance of the associated pattern of III. Performance indicators are the area under the ROC curve (AUROC) and the factor of conservativeness (FoC). This
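A stripped-down sketch of the nested idea: each parameter's total range is split into subranges, all subrange combinations are scored against a documented event, and the best-performing combination defines the guiding parameter space. The parameter names, the toy impact rule, and the scoring below are invented stand-ins for r.ranger's III/AUROC machinery.

```python
import itertools

# two uncertain parameters, each given as a range rather than a single value
param_ranges = {"friction": (0.05, 0.45), "mass_ratio": (0.1, 0.9)}
n_sub = 4                                   # subranges per parameter

def subranges(lo, hi, n):
    step = (hi - lo) / n
    return [(lo + i * step, lo + (i + 1) * step) for i in range(n)]

# toy "model": an impact is predicted when friction < 0.2 and mass_ratio > 0.5;
# a subrange combination scores by how often it reproduces the documented event
def score(fr_range, mr_range):
    hits, total = 0, 0
    for fr in (fr_range[0], sum(fr_range) / 2, fr_range[1]):
        for mr in (mr_range[0], sum(mr_range) / 2, mr_range[1]):
            total += 1
            if fr < 0.2 and mr > 0.5:       # matches the back-calculated event
                hits += 1
    return hits / total                     # stand-in for an AUROC-style skill

combos = itertools.product(subranges(*param_ranges["friction"], n_sub),
                           subranges(*param_ranges["mass_ratio"], n_sub))
best = max(combos, key=lambda c: score(*c))
```

With several back-calculated events, the intersection (or union) of the best subranges per event would play the role of the guiding parameter space for forward calculations.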
Directory of Open Access Journals (Sweden)
Dimitrios V Vavoulis
Full Text Available Traditional approaches to the problem of parameter estimation in biophysical models of neurons and neural networks usually adopt a global search algorithm (for example, an evolutionary algorithm, often in combination with a local search method (such as gradient descent in order to minimize the value of a cost function, which measures the discrepancy between various features of the available experimental data and model output. In this study, we approach the problem of parameter estimation in conductance-based models of single neurons from a different perspective. By adopting a hidden-dynamical-systems formalism, we expressed parameter estimation as an inference problem in these systems, which can then be tackled using a range of well-established statistical inference methods. The particular method we used was Kitagawa's self-organizing state-space model, which was applied on a number of Hodgkin-Huxley-type models using simulated or actual electrophysiological data. We showed that the algorithm can be used to estimate a large number of parameters, including maximal conductances, reversal potentials, kinetics of ionic currents, measurement and intrinsic noise, based on low-dimensional experimental data and sufficiently informative priors in the form of pre-defined constraints imposed on model parameters. The algorithm remained operational even when very noisy experimental data were used. Importantly, by combining the self-organizing state-space model with an adaptive sampling algorithm akin to the Covariance Matrix Adaptation Evolution Strategy, we achieved a significant reduction in the variance of parameter estimates. The algorithm did not require the explicit formulation of a cost function and it was straightforward to apply on compartmental models and multiple data sets. Overall, the proposed methodology is particularly suitable for resolving high-dimensional inference problems based on noisy electrophysiological data and, therefore, a
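The self-organizing state-space trick (augmenting the hidden state with the unknown parameters so a single filter estimates both) can be illustrated with a bootstrap particle filter on a linear-Gaussian toy model. This stands in for the Hodgkin-Huxley setting; the model, noise levels, and particle counts are invented for the sketch.

```python
import math
import random

random.seed(42)

# toy state-space model: x_t = a*x_{t-1} + w,  y_t = x_t + v, with a unknown
TRUE_A, Q, R, T, N = 0.8, 0.1, 0.1, 200, 2000

x, ys = 0.0, []
for _ in range(T):                      # simulate "experimental" data
    x = TRUE_A * x + random.gauss(0, math.sqrt(Q))
    ys.append(x + random.gauss(0, math.sqrt(R)))

# self-organizing state space: each particle carries [x, a]; the parameter a
# is given small artificial dynamics so the filter can concentrate on good values
parts = [[0.0, random.uniform(0.0, 1.0)] for _ in range(N)]
for y in ys:
    weights = []
    for p in parts:
        p[1] += random.gauss(0, 0.01)                # artificial parameter noise
        p[0] = p[1] * p[0] + random.gauss(0, math.sqrt(Q))
        weights.append(math.exp(-(y - p[0]) ** 2 / (2 * R)))
    # bootstrap resampling proportional to the observation likelihood
    parts = [list(p) for p in random.choices(parts, weights=weights, k=N)]

a_hat = sum(p[1] for p in parts) / N                 # posterior mean of a
```

In a conductance-based model the augmented state would carry maximal conductances, reversal potentials, and kinetic constants instead of the single coefficient a, but the filtering loop is the same.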
Sounding-derived parameters associated with large hail and tornadoes in the Netherlands
Groenemeijer, P.H.; van Delden, A.J.|info:eu-repo/dai/nl/072670703
2007-01-01
A study is presented focusing on the potential value of parameters derived from radiosonde data or data from numerical atmospheric models for the forecasting of severe weather associated with convective storms. Parameters have been derived from soundings in the proximity of large hail, tornadoes
The Legion Support for Advanced Parameter-Space Studies on a Grid
National Research Council Canada - National Science Library
Natrajan, Anand; Humphrey, Marty A; Grimshaw, Andrew S
2006-01-01
.... Legion provides tools and services that support advanced parameter-space studies, i.e., studies that make complex demands such as transparent access to distributed files, fault-tolerance and security. We demonstrate these benefits with a protein-folding experiment in which a molecular simulation package was run over a grid managed by Legion.
Large anterior temporal Virchow-Robin spaces: unique MR imaging features
Energy Technology Data Exchange (ETDEWEB)
Lim, Anthony T. [Monash University, Neuroradiology Service, Monash Imaging, Monash Health, Melbourne, Victoria (Australia); Chandra, Ronil V. [Monash University, Neuroradiology Service, Monash Imaging, Monash Health, Melbourne, Victoria (Australia); Monash University, Department of Surgery, Faculty of Medicine, Nursing and Health Sciences, Melbourne (Australia); Trost, Nicholas M. [St Vincent's Hospital, Neuroradiology Service, Melbourne (Australia); McKelvie, Penelope A. [St Vincent's Hospital, Anatomical Pathology, Melbourne (Australia); Stuckey, Stephen L. [Monash University, Neuroradiology Service, Monash Imaging, Monash Health, Melbourne, Victoria (Australia); Monash University, Southern Clinical School, Faculty of Medicine, Nursing and Health Sciences, Melbourne (Australia)
2015-05-01
Large Virchow-Robin (VR) spaces may mimic cystic tumor. The anterior temporal subcortical white matter is a recently described preferential location, with only 18 reported cases. Our aim was to identify unique MR features that could increase prospective diagnostic confidence. Thirty-nine cases were identified between November 2003 and February 2014. Demographic, clinical data and the initial radiological report were retrospectively reviewed. Two neuroradiologists reviewed all MR imaging; a neuropathologist reviewed histological data. Median age was 58 years (range 24-86 years); the majority (69 %) was female. There were no clinical symptoms that could be directly referable to the lesion. Two thirds were considered to be VR spaces on the initial radiological report. Mean maximal size was 9 mm (range 5-17 mm); majority (79 %) had perilesional T2 or fluid-attenuated inversion recovery (FLAIR) hyperintensity. The following were identified as potential unique MR features: focal cortical distortion by an adjacent branch of the middle cerebral artery (92 %), smaller adjacent VR spaces (26 %), and a contiguous cerebrospinal fluid (CSF) intensity tract (21 %). Surgery was performed in three asymptomatic patients; histopathology confirmed VR spaces. Unique MR features were retrospectively identified in all three patients. Large anterior temporal lobe VR spaces commonly demonstrate perilesional T2 or FLAIR signal and can be misdiagnosed as cystic tumor. Potential unique MR features that could increase prospective diagnostic confidence include focal cortical distortion by an adjacent branch of the middle cerebral artery, smaller adjacent VR spaces, and a contiguous CSF intensity tract. (orig.)
DEFF Research Database (Denmark)
Kaniecki, M.; Saenz, E.; Rolo, L.
2014-01-01
This paper demonstrates a method for material characterization (permittivity, permeability, loss tangent) based on the scattering parameters. The performance of the extraction algorithm will be shown for modelled and measured data. The measurements were carried out at the European Space Agency...
Parameter and State Estimation of Large-Scale Complex Systems Using Python Tools
Directory of Open Access Journals (Sweden)
M. Anushka S. Perera
2015-07-01
Full Text Available This paper discusses the topics related to automating parameter, disturbance and state estimation analysis of large-scale complex nonlinear dynamic systems using free programming tools. For large-scale complex systems, before implementing any state estimator, the system should be analyzed for structural observability, and this analysis can be automated using Modelica and Python. As a result of structural observability analysis, the system may be decomposed into subsystems, some of which may be observable with respect to parameters, disturbances, and states, while others may not. The state estimation process is carried out for the observable subsystems, and the optimum number of additional measurements is prescribed for unobservable subsystems to make them observable. In this paper, an industrial case study is considered: the copper production process at Glencore Nikkelverk, Kristiansand, Norway. The copper production process is a large-scale complex system. It is shown how to implement various state estimators, in Python, to estimate parameters and disturbances, in addition to states, based on available measurements.
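One building block of such an automated analysis, checking whether each state can influence some measured output, reduces to reachability on the model's influence graph. The sketch below checks only this output-connectivity condition (full structural observability also requires a rank/matching condition), and the graph is an invented toy, not the copper-process model.

```python
# influence graph: an edge u -> v means variable u appears in the equation of v
edges = {
    "x1": ["x2"],
    "x2": ["y1"],          # y1 is a measured output
    "x3": ["x4"],
    "x4": ["x3"],          # x3/x4 form a subsystem that never reaches a sensor
    "y1": [],
}
measured = {"y1"}

def observable_states(edges, measured):
    # a state passes the connectivity test if some directed path reaches a sensor
    def reaches_sensor(u, seen):
        if u in measured:
            return True
        if u in seen:
            return False   # cycle guard
        seen.add(u)
        return any(reaches_sensor(v, seen) for v in edges.get(u, ()))
    return {u for u in edges if u not in measured and reaches_sensor(u, set())}

obs = observable_states(edges, measured)
```

States failing the test (here x3 and x4) mark the subsystems for which additional measurements must be prescribed, which mirrors the decomposition described in the abstract.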
Review of the different methods to derive average spacing from resolved resonance parameters sets
International Nuclear Information System (INIS)
Fort, E.; Derrien, H.; Lafond, D.
1979-12-01
The average spacing of resonances is an important parameter for statistical model calculations, especially for non-fissile nuclei. The different methods to derive this average value from sets of resonance parameters have been reviewed and analyzed in order to identify their respective weaknesses and to propose recommendations. Possible improvements are suggested.
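For a fully resolved sequence the simplest estimators coincide: the mean nearest-neighbor gap equals the energy span divided by N - 1 (the gaps telescope), so the reviewed methods differ mainly in how they correct for missed or spurious levels. A sketch with invented resonance energies:

```python
# resolved resonance energies (eV) for a toy nucleus; illustrative values only
energies = sorted([12.3, 19.8, 31.5, 36.2, 48.9, 55.1, 66.0, 74.4])

# estimator 1: mean of nearest-neighbor spacings
gaps = [b - a for a, b in zip(energies, energies[1:])]
d_mean = sum(gaps) / len(gaps)

# estimator 2: span divided by (N - 1); identical for a complete sequence
d_span = (energies[-1] - energies[0]) / (len(energies) - 1)
```

Missed small resonances inflate both estimators equally, which is why the practical methods add statistical corrections (for example, based on the expected spacing and width distributions) rather than new ways of averaging.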
Kotik, A.; Usyukin, V.; Vinogradov, I.; Arkhipov, M.
2017-11-01
The realization of astrophysical research requires the development of high-sensitivity centimeter-band parabolic space radiotelescopes (SRT) with large mirrors. Structurally, such SRTs with mirror sizes of more than 10 m can be realized as deployable rigid structures. Mesh structures of this size do not provide the reflecting-surface accuracy which is necessary for centimeter-band observations. Such a telescope with a 10 m diameter mirror is now being developed in Russia in the frame of the "SPECTR-R" program. The external dimensions of the telescope exceed the size of the existing thermo-vacuum chambers used to verify SRT reflecting-surface accuracy parameters under the action of space environment factors. That is why numerical simulation turns out to be the basis required to accept the chosen designs. Such modeling should be based on experimental characterization of the basic structural materials and elements of the future reflector. In this article, computational modeling of the reflecting-surface deviations of a large-sized deployable centimeter-band space reflector at the stage of its orbital functioning is considered. An analysis of the factors that determine the deviations, both deterministic (temperature fields) and non-deterministic (telescope manufacturing and installation faults; deformations caused by the behavior of composite materials in space), is carried out. A finite-element model and a complex of methods are developed that allow computational modeling of the reflecting-surface deviations caused by all of these factors and account for the correction of deviations by the spacecraft orientation system. The results of modeling for two modes of functioning (orientation at the Sun) of the SRT are presented.
Soni, Rahul Kumar; De, Ashoke
2018-05-01
The present study primarily focuses on the effect of the jet spacing and strut geometry on the evolution and structure of the large-scale vortices which play a key role in mixing characteristics in turbulent supersonic flows. Numerically simulated results corresponding to varying parameters such as strut geometry and jet spacing (Xn = nDj such that n = 2, 3, and 5) for a square jet of height Dj = 0.6 mm are presented in the current study, while the work also investigates the presence of the local quasi-two-dimensionality for the X2(2Dj) jet spacing; however, the same is not true for higher jet spacing. Further, the tapered strut (TS) section is modified into the straight strut (SS) for investigation, where the remarkable difference in flow physics is unfolded between the two configurations for similar jet spacing (X2: 2Dj). The instantaneous density and vorticity contours reveal the structures of varying scales undergoing different evolution for the different configurations. The effect of local spanwise rollers is clearly manifested in the mixing efficiency and the jet spreading rate. The SS configuration exhibits excellent near field mixing behavior amongst all the arrangements. However, in the case of TS cases, only the X2(2Dj) configuration performs better due to the presence of local spanwise rollers. The qualitative and quantitative analysis reveals that near-field mixing is strongly affected by the two-dimensional rollers, while the early onset of the wake mode is another crucial parameter to have improved mixing. Modal decomposition performed for the SS arrangement sheds light onto the spatial and temporal coherence of the structures, where the most dominant structures are found to be the von Kármán street vortices in the wake region.
International Nuclear Information System (INIS)
Oraevskij, V.N.; Golyshev, S.A.; Levitin, A.E.; Breus, T.K.; Ivanova, S.V.; Komarov, F.I.; Rapoport, S.I.
1995-01-01
The space and time distribution of the electric and magnetic fields and current systems in near-Earth space (electromagnetic weather) was studied in connection with ambulance calls in Moscow, Russia, related to cardiovascular diseases. Some examples of correlations between solar activity parameters, geomagnetic variations, and episodes of extreme numbers of ambulance calls are presented. 4 refs., 5 figs., 2 tabs
Jia, Bing
2014-03-01
A comb-shaped chaotic region has been simulated in multiple two-dimensional parameter spaces using the Hindmarsh-Rose (HR) neuron model in many recent studies, which can interpret almost all of the previously simulated bifurcation processes with chaos in neural firing patterns. In the present paper, a comb-shaped chaotic region in a two-dimensional parameter space was reproduced, which presented different processes of period-adding bifurcations with chaos when changing one parameter while fixing the other at different levels. In the biological experiments, different period-adding bifurcation scenarios with chaos induced by decreasing the extra-cellular calcium concentration were observed from some neural pacemakers at different levels of extra-cellular 4-aminopyridine concentration and from other pacemakers at different levels of extra-cellular caesium concentration. By using the nonlinear time series analysis method, the deterministic dynamics of the experimental chaotic firings were investigated. The period-adding bifurcations with chaos observed in the experiments resembled those simulated in the comb-shaped chaotic region using the HR model. The experimental results show that period-adding bifurcations with chaos are preserved in different two-dimensional parameter spaces, which provides evidence of the existence of the comb-shaped chaotic region and a demonstration of the simulation results in different two-dimensional parameter spaces in the HR neuron model. The results also present relationships between different firing patterns in two-dimensional parameter spaces.
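A minimal way to reproduce this kind of bifurcation analysis is to integrate the HR equations and count spikes as upward threshold crossings; sweeping a parameter then reveals the period-adding sequence. The sketch below uses the standard HR parameter values with a fixed-step RK4 integrator; the threshold, step size, and simulation times are choices for the example, not from the paper.

```python
def hr_deriv(state, I):
    # Hindmarsh-Rose model with the standard parameter values
    # a=1, b=3, c=1, d=5, s=4, x_R=-1.6, r=0.006
    x, y, z = state
    return (y + 3 * x * x - x ** 3 - z + I,
            1 - 5 * x * x - y,
            0.006 * (4 * (x + 1.6) - z))

def simulate(I, t_end=1000.0, dt=0.02):
    # fixed-step RK4 integration; returns the membrane-potential trace x(t)
    s, xs = (-1.6, 0.0, 0.0), []
    for _ in range(int(t_end / dt)):
        k1 = hr_deriv(s, I)
        k2 = hr_deriv(tuple(si + dt / 2 * ki for si, ki in zip(s, k1)), I)
        k3 = hr_deriv(tuple(si + dt / 2 * ki for si, ki in zip(s, k2)), I)
        k4 = hr_deriv(tuple(si + dt * ki for si, ki in zip(s, k3)), I)
        s = tuple(si + dt / 6 * (a + 2 * b + 2 * c + d)
                  for si, a, b, c, d in zip(s, k1, k2, k3, k4))
        xs.append(s[0])
    return xs

def count_spikes(xs, thresh=1.0):
    # a spike is an upward crossing of the threshold
    return sum(1 for a, b in zip(xs, xs[1:]) if a < thresh <= b)

spikes = count_spikes(simulate(I=2.0)[10000:])   # discard the transient
```

Counting spikes per burst instead of in total, over a grid of two parameters (for example I and r), is what produces the two-dimensional period-adding maps in which the comb-shaped chaotic region appears.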
Low-Power Large-Area Radiation Detector for Space Science Measurements
National Aeronautics and Space Administration — The objective of this task is to develop low-power, large-area detectors from SiC, taking advantage of very low thermal noise characteristics and high radiation...
International Nuclear Information System (INIS)
Potlog, T.
2007-01-01
Thin-film CdS/CdTe solar cells were fabricated by close space sublimation at substrate temperatures ranging from 300 degrees ± 5 degrees to 340 degrees ± degrees. The best photovoltaic parameters were achieved at a substrate temperature of 320 degrees and a source temperature of 610 degrees. The open circuit voltage and current density change significantly with the substrate temperature and depend on the grain size. Grain size is an efficiency-limiting parameter for CdTe layers with large grains. The open circuit voltage and current density are best for cells with grain sizes between 1.0 μm and ∼ 5.0 μm. CdS/CdTe solar cells with an efficiency of ∼ 10% were obtained. (author)
Shape, size, and robustness: feasible regions in the parameter space of biochemical networks.
Directory of Open Access Journals (Sweden)
Adel Dayarian
2009-01-01
Full Text Available The concept of robustness of regulatory networks has received much attention in the last decade. One measure of robustness has been associated with the volume of the feasible region, namely, the region in the parameter space in which the system is functional. In this paper, we show that, in addition to volume, the geometry of this region has important consequences for the robustness and the fragility of a network. We develop an approximation within which the feasible region can be specified algebraically. We analyze the segment polarity gene network to illustrate our approach. The study of random walks in the parameter space, and of how they exit the feasible region, provides a rich perspective on the different modes of failure of this network model. In particular, we found that, between two alternative ways of activating Wingless, one is more robust than the other. Our method provides a more complete measure of robustness to parameter variation. As a general modeling strategy, our approach is an interesting alternative to Boolean representations of biochemical networks.
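The random-walk probe of the feasible region can be sketched in a few lines. The region and constraints below are hypothetical stand-ins, assuming only that the region is given by inequality constraints (the paper's region comes from an algebraic approximation of the segment polarity network):

```python
import random

# Hypothetical feasible region in a 2D parameter space: the system is
# "functional" while every inequality constraint holds (a stand-in for
# the algebraically specified region in the paper).
constraints = {
    "c1: x + y < 2.0": lambda x, y: x + y < 2.0,
    "c2: x - y > -1.5": lambda x, y: x - y > -1.5,
}

def first_violated(x, y):
    for name, ok in constraints.items():
        if not ok(x, y):
            return name
    return None

def walk_until_exit(x, y, rng, step=0.05):
    """Random-walk the parameters until the region is exited; return
    which constraint failed first (the 'mode of failure')."""
    while True:
        x += rng.gauss(0, step)
        y += rng.gauss(0, step)
        failed = first_violated(x, y)
        if failed is not None:
            return failed

# Tally exit modes over many walks: boundaries that are closer or easier
# to reach fail more often, exposing the fragile directions of the region.
rng = random.Random(1)
tally = {}
for _ in range(500):
    mode = walk_until_exit(0.0, 0.0, rng)
    tally[mode] = tally.get(mode, 0) + 1
print(tally)
```

The tally of first-violated constraints is a crude analogue of the paper's "modes of failure": geometry, not just volume, determines which boundary is hit most often.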
Prime focus architectures for large space telescopes: reduce surfaces to save cost
Breckinridge, J. B.; Lillie, C. F.
2016-07-01
Conceptual architectures are now being developed to identify future directions for post-JWST large space telescope systems operating in the UV, optical, and near-IR regions of the spectrum. Here we show that the cost of optical surfaces within large-aperture telescope/instrument systems can exceed $100M per reflection when expressed in terms of the aperture increase needed to overcome internal absorption loss. We recommend a program in innovative optical design to minimize the number of surfaces by considering multiple functions for mirrors. An example is given using Rowland circle imaging spectrometer systems for UV space science. With few exceptions, current space telescope architectures are based on systems optimized for ground-based astronomy. Both HST and JWST are classical "Cassegrain" telescopes derived from the ground-based tradition of co-locating the massive primary mirror and the instruments at the same end of the metrology structure. This requirement derives from the dual need to minimize observatory dome size and cost in the presence of the Earth's 1-g gravitational field. Space telescopes, however, function in the zero gravity of space, and the 1-g constraint is relieved to the advantage of astronomers. Here we suggest that a prime focus large-aperture telescope system in space may potentially have higher transmittance, better pointing, improved thermal and structural control, less internal polarization, and broader wavelength coverage than Cassegrain telescopes. An example is given showing how UV astronomy telescopes use single optical elements for multiple functions and therefore have a minimum number of reflections.
Autonomous sensor particle for parameter tracking in large vessels
International Nuclear Information System (INIS)
Thiele, Sebastian; Da Silva, Marco Jose; Hampel, Uwe
2010-01-01
A self-powered and neutrally buoyant sensor particle has been developed for the long-term measurement of spatially distributed process parameters in the chemically harsh environments of large vessels. One intended application is the measurement of flow parameters in stirred fermentation biogas reactors. The prototype sensor particle is a robust and neutrally buoyant capsule, which allows free movement with the flow. It contains measurement devices that log temperature, absolute pressure (immersion depth), and 3D acceleration data. A careful calibration, including an uncertainty analysis, has been performed. Furthermore, autonomous operation of the developed prototype was successfully proven in a flow experiment in a stirred reactor model, which showed that the sensor particle is feasible for future application in fermentation reactors and other industrial processes.
Challenges in parameter identification of large structural dynamic systems
International Nuclear Information System (INIS)
Koh, C.G.
2001-01-01
In theory, it is possible to determine the parameters of a structural or mechanical system by subjecting it to some dynamic excitation and measuring the response. Considerable research has been carried out in this subject area, known as system identification, over the past two decades. Nevertheless, the challenges associated with numerical convergence are still formidable when the system is large in terms of the number of degrees of freedom and number of unknowns. While many methods work for small systems, convergence becomes difficult, if not impossible, for large systems. In this keynote lecture, both classical and non-classical system identification methods for dynamic testing and vibration-based inspection are discussed. For classical methods, the extended Kalman filter (EKF) approach is used. On this basis, a substructural identification method has been developed as a strategy to deal with large structural systems. This is achieved by reducing the problem size, thereby significantly improving the numerical convergence and efficiency. Two versions of this method are presented, each with its own merits. A numerical example of a frame structure with 20 unknown parameters is illustrated. For non-classical methods, the genetic algorithm (GA) is shown to be applicable with relative ease due to its 'forward analysis' nature. The computational time is, however, still enormous for large structural systems due to the combinatorial explosion problem. A model GA method has been developed to address this problem and tested with considerable success on a relatively large system of 50 degrees of freedom, accounting for input and output noise effects. An advantage of this GA-based identification method is that the objective function can be defined directly in terms of the measured response. Numerical studies show that the method is relatively robust, as it does not require a good initial guess.
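As a sketch of the GA's 'forward analysis' character, the toy below identifies two parameters of a hypothetical discrete-time system purely by forward simulation and comparison with a measured response; the system, population size, and genetic operators are illustrative assumptions, not the lecture's substructural model:

```python
import random

def simulate(a, b, steps=50):
    # Hypothetical discrete-time system x[t+1] = a*x[t] + b*u[t], with u = 1.
    x, out = 0.0, []
    for _ in range(steps):
        x = a * x + b * 1.0
        out.append(x)
    return out

measured = simulate(0.8, 0.5)   # "measured" response (noise-free here)

def fitness(ind):
    # The objective is defined directly on the measured response: only
    # forward simulation is needed (the GA's 'forward analysis' trait).
    pred = simulate(*ind)
    return -sum((p - m) ** 2 for p, m in zip(pred, measured))

def ga(pop_size=60, gens=80, rng=None):
    rng = rng or random.Random(3)
    pop = [(rng.uniform(0, 1), rng.uniform(0, 1)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]                  # selection
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            w = rng.random()                          # arithmetic crossover
            child = [w * g1 + (1 - w) * g2 for g1, g2 in zip(p1, p2)]
            child = tuple(min(1.0, max(0.0, g + rng.gauss(0, 0.02)))
                          for g in child)             # mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = ga()
print([round(g, 2) for g in best])   # should land near (0.8, 0.5)
```

No initial guess or gradient is needed, which is the robustness property the lecture attributes to GA-based identification.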
Myers, J. G.; Feola, A.; Werner, C.; Nelson, E. S.; Raykin, J.; Samuels, B.; Ethier, C. R.
2016-01-01
The earliest manifestations of Visual Impairment and Intracranial Pressure (VIIP) syndrome become evident after months of spaceflight and include a variety of ophthalmic changes, including posterior globe flattening and distension of the optic nerve sheath. Prevailing evidence links the occurrence of VIIP to the cephalic fluid shift induced by microgravity and the subsequent pressure changes around the optic nerve and eye. Deducing the etiology of VIIP is challenging due to the wide range of physiological parameters that may be influenced by spaceflight and are required to address a realistic spectrum of physiological responses. Here, we report on the application of an efficient approach to interrogating physiological parameter space through computational modeling. Specifically, we assess the influence of uncertainty in input parameters for two models of VIIP syndrome: a lumped-parameter model (LPM) of the cardiovascular and central nervous systems, and a finite-element model (FEM) of the posterior eye, optic nerve head (ONH) and optic nerve sheath. Methods: To investigate the parameter space in each model, we employed Latin hypercube sampling partial rank correlation coefficient (LHSPRCC) strategies. LHS techniques outperform Monte Carlo approaches by enforcing efficient sampling across the entire range of all parameters. The PRCC method estimates the sensitivity of model outputs to these parameters while adjusting for the linear effects of all other inputs. The LPM analysis addressed uncertainties in 42 physiological parameters, such as initial compartmental volume and nominal compartment percentage of total cardiac output in the supine state, while the FEM evaluated the effects on biomechanical strain from uncertainties in 23 material and pressure parameters for the ocular anatomy. Results and Conclusion: The LPM analysis identified several key factors including high sensitivity to the initial fluid distribution. The FEM study found that intraocular pressure and
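A minimal sketch of the LHS-plus-PRCC machinery, assuming a toy three-parameter model in place of the paper's LPM and FEM (the stratified sampling and rank-based partial correlation steps are the standard ones):

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n, d):
    # Each of the d parameters gets one sample from each of n equal bins,
    # so the full range of every parameter is covered (unlike plain MC).
    bins = np.tile(np.arange(n), (d, 1))
    perm = rng.permuted(bins, axis=1).T            # shape (n, d)
    return (perm + rng.random((n, d))) / n

def ranks(a):
    return a.argsort(axis=0).argsort(axis=0).astype(float)

def prcc(X, y):
    # Partial rank correlation coefficient of each input with the output,
    # adjusting for the linear effects (on ranks) of all other inputs.
    R = ranks(X)
    ry = ranks(y[:, None])[:, 0]
    out = []
    for j in range(X.shape[1]):
        others = np.delete(R, j, axis=1)
        A = np.column_stack([others, np.ones(len(ry))])
        res_x = R[:, j] - A @ np.linalg.lstsq(A, R[:, j], rcond=None)[0]
        res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
        out.append(np.corrcoef(res_x, res_y)[0, 1])
    return np.array(out)

# Toy stand-in for a physiological model: the output is strongly driven by
# p0, weakly by p1, and not at all by p2 (assumed model, not the paper's).
X = latin_hypercube(200, 3)
y = 3.0 * X[:, 0] + 0.3 * X[:, 1] ** 2 + rng.normal(0, 0.05, 200)
print(np.round(prcc(X, y), 2))   # p0 dominant, p2 near zero
```

The resulting coefficients rank the inputs by influence, which is how the LPM's 42 parameters and the FEM's 23 parameters can be screened efficiently.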
Miksovsky, J.; Raidl, A.
Time-delay phase space reconstruction is one of the useful tools of nonlinear time series analysis, enabling a number of applications. Its utilization requires the value of the time delay to be known, as well as the value of the embedding dimension. There are several methods to estimate both these parameters. Typically, the time delay is computed first, followed by the embedding dimension. Our presented approach is slightly different: we reconstructed the phase space for various combinations of the mentioned parameters and used it for prediction by means of the nearest neighbours in the phase space. Then some measure of the prediction's success was computed (e.g., correlation or RMSE). The position of its global maximum (minimum) should indicate the suitable combination of time delay and embedding dimension. Several meteorological (particularly climatological) time series were used for the computations. We have also created an MS-Windows-based program to implement this approach; its basic features will be presented as well.
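The scan described above can be sketched as follows, with a noisy sine wave standing in for a climatological record and correlation as the success measure; the grid of (tau, m) values is illustrative:

```python
import numpy as np

def embed(x, m, tau):
    """Time-delay phase-space reconstruction: rows are delay vectors
    (x[t], x[t-tau], ..., x[t-(m-1)*tau])."""
    n = len(x) - (m - 1) * tau
    return np.column_stack(
        [x[(m - 1) * tau - k * tau : (m - 1) * tau - k * tau + n]
         for k in range(m)])

def nn_forecast_skill(x, m, tau, train=700):
    """Predict one step ahead with the nearest neighbour in the
    reconstructed space; return correlation of prediction vs. truth."""
    E = embed(x, m, tau)
    targets = x[(m - 1) * tau + 1:]   # value one step after each vector
    E = E[:-1]                        # last vector has no target
    preds, truth = [], []
    for i in range(train, len(E)):
        d = np.linalg.norm(E[:train] - E[i], axis=1)
        preds.append(targets[np.argmin(d)])
        truth.append(targets[i])
    return np.corrcoef(preds, truth)[0, 1]

# Toy series (noisy sine as a stand-in for a climatological record).
t = np.arange(1500)
x = np.sin(0.2 * t) + 0.05 * np.random.default_rng(1).normal(size=1500)

# Scan (tau, m) and pick the combination with the best forecast skill.
grid = {(tau, m): nn_forecast_skill(x, m, tau)
        for tau in (1, 4, 8) for m in (2, 3, 4)}
best = max(grid, key=grid.get)
print(best, round(grid[best], 3))
```

The location of the maximum over the grid plays the role of the global maximum described in the abstract, selecting both parameters jointly rather than sequentially.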
Power conditioning for large dc motors for space flight applications
Veatch, Martin S.; Anderson, Paul M.; Eason, Douglas J.; Landis, David M.
1988-01-01
The design and performance of a prototype power-conditioning system for use with large brushless dc motors on NASA space missions are discussed in detail and illustrated with extensive diagrams, drawings, and graphs. The 5-kW 8-phase parallel module evaluated here would be suitable for use in the Space Shuttle Orbiter cargo bay. A current-balancing magnetic assembly with low distributed inductance permits high-speed current switching from a low-voltage bus as well as current balancing between parallel MOSFETs.
Dynamical quantum Hall effect in the parameter space.
Gritsev, V; Polkovnikov, A
2012-04-24
Geometric phases in quantum mechanics play an extraordinary role in broadening our understanding of fundamental significance of geometry in nature. One of the best known examples is the Berry phase [M. V. Berry (1984), Proc. R. Soc. London A, 392:45], which naturally emerges in quantum adiabatic evolution. So far the applicability and measurements of the Berry phase were mostly limited to systems of weakly interacting quasi-particles, where interference experiments are feasible. Here we show how one can go beyond this limitation and observe the Berry curvature, and hence the Berry phase, in generic systems as a nonadiabatic response of physical observables to the rate of change of an external parameter. These results can be interpreted as a dynamical quantum Hall effect in a parameter space. The conventional quantum Hall effect is a particular example of the general relation if one views the electric field as a rate of change of the vector potential. We illustrate our findings by analyzing the response of interacting spin chains to a rotating magnetic field. We observe the quantization of this response, which we term the rotational quantum Hall effect.
Quantum sensing of the phase-space-displacement parameters using a single trapped ion
Ivanov, Peter A.; Vitanov, Nikolay V.
2018-03-01
We introduce a quantum sensing protocol for detecting the parameters characterizing the phase-space displacement by using a single trapped ion as a quantum probe. We show that, thanks to the laser-induced coupling between the ion's internal states and the motion mode, the estimation of the two conjugated parameters describing the displacement can be efficiently performed by a set of measurements of the atomic state populations. Furthermore, we introduce a three-parameter protocol capable of detecting the magnitude, the transverse direction, and the phase of the displacement. We characterize the uncertainty of the two- and three-parameter problems in terms of the Fisher information and show that state projective measurement saturates the fundamental quantum Cramér-Rao bound.
Relations between source parameters for large Persian earthquakes
Directory of Open Access Journals (Sweden)
Majid Nemati
2015-11-01
Full Text Available Empirical relationships for magnitude scales and fault parameters were produced using 436 Iranian intraplate earthquakes from recent regional databases, since the continental events represent a large portion of the total seismicity of Iran. The relations between different source parameters of the earthquakes were derived using input information provided by the databases after 1900. The suggested equations for magnitude scales relate the body-wave, surface-wave, and local magnitude scales to the scalar moment of the earthquakes. Also, the dependence of source parameters such as surface and subsurface rupture length and maximum surface displacement on the moment magnitude was investigated for some well-documented earthquakes. To meet this aim, ordinary linear regression procedures were employed for all relations. Our evaluations reveal a fair agreement between the obtained relations and equations described in other worldwide and regional works in the literature. The M0-mb and M0-MS equations correlate well with the worldwide relations. Also, both the M0-MS and M0-ML relations agree well with regional studies in Taiwan. The equations derived from this study mainly confirm the results of the global investigations of the rupture length of historical and instrumental events. However, some relations, like MW-MN and MN-ML, were found to differ remarkably from available regional works (e.g., American and Canadian).
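As an illustration of the regression procedure, the sketch below fits the usual log-linear form log10(L) = a·Mw + b by ordinary least squares; the catalogue values are synthetic stand-ins, not the Iranian database:

```python
import numpy as np

# Synthetic catalogue of moment magnitude Mw and surface rupture length L (km);
# illustrative values only, not the regional database used in the paper.
Mw = np.array([6.0, 6.3, 6.5, 6.8, 7.0, 7.2, 7.4])
L_km = np.array([14.0, 22.0, 30.0, 50.0, 70.0, 100.0, 140.0])

# Ordinary least squares on the usual log-linear form log10(L) = a*Mw + b.
a, b = np.polyfit(Mw, np.log10(L_km), 1)
print(f"log10(L) = {a:.2f} Mw {b:+.2f}")

# Predicted rupture length for a hypothetical Mw 7.1 event.
print(round(10 ** (a * 7.1 + b), 1), "km")
```

The same one-line fit, applied column by column, produces each of the magnitude-scale and rupture-parameter relations the abstract describes.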
Prediction of Thermal Environment in a Large Space Using Artificial Neural Network
Directory of Open Access Journals (Sweden)
Hyun-Jung Yoon
2018-02-01
Full Text Available Since the thermal environment of large space buildings such as stadiums can vary depending on the location of the stands, it is important to divide them into different zones and evaluate their thermal environment separately. The thermal environment can be evaluated using physical values measured with sensors, but the occupant density of the stadium stands is high, which limits the locations available to install the sensors. As a method to resolve the limitations of installing sensors, we propose a method to predict the thermal environment of each zone in a large space. We divided the six key thermal factors affecting the thermal environment in a large space into predicted factors (indoor air temperature, mean radiant temperature, and clothing) and fixed factors (air velocity, metabolic rate, and relative humidity). Using artificial neural network (ANN) models with the outdoor air temperature and the surface temperature of the interior walls around the stands as input data, we developed a method to predict the three thermal factors. Learning and verification datasets were established using STAR CCM+ (2016.10, Siemens PLM software, Plano, TX, USA). An analysis of each model's prediction results showed that the prediction accuracy increased with the number of learning data points. The thermal environment evaluation process developed in this study can be used to control heating, ventilation, and air conditioning (HVAC) facilities in each zone of a large space building, given sufficient learning by the ANN models at the building testing or evaluation stage.
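A minimal sketch of the ANN regression step, with a synthetic stand-in for the CFD-generated learning set (the assumed input-output relation, noise level, and network size are illustrative, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the CFD-generated learning set: inputs are the
# outdoor air temperature and the interior-wall surface temperature;
# the target is the zone air temperature (assumed relation plus noise).
X = np.column_stack([rng.uniform(-5, 35, 400),   # outdoor temperature (C)
                     rng.uniform(10, 30, 400)])  # wall surface temp (C)
y = 0.25 * X[:, 0] + 0.7 * X[:, 1] + rng.normal(0, 0.3, 400)

# Normalise inputs and target, then train a one-hidden-layer network by
# plain batch gradient descent on the mean-squared error.
Xn = (X - X.mean(0)) / X.std(0)
yn = (y - y.mean()) / y.std()
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, 8);      b2 = 0.0
lr = 0.1
for _ in range(2000):
    H = np.tanh(Xn @ W1 + b1)               # hidden-layer activations
    err = H @ W2 + b2 - yn                  # prediction error
    gW2 = H.T @ err / len(yn); gb2 = err.mean()
    gH = np.outer(err, W2) * (1 - H ** 2)   # backprop through tanh
    W1 -= lr * (Xn.T @ gH / len(yn)); b1 -= lr * gH.mean(0)
    W2 -= lr * gW2;                   b2 -= lr * gb2

# RMSE in original units: small residual once training has converged.
rmse = y.std() * np.sqrt(np.mean((np.tanh(Xn @ W1 + b1) @ W2 + b2 - yn) ** 2))
print(round(float(rmse), 2))
```

One such model per zone, trained on simulation output, is what replaces the physically impossible dense sensor grid in the stands.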
Effects of Turbine Spacings in Very Large Wind Farms
DEFF Research Database (Denmark)
... farm. LES simulations of large wind farms are performed with full aero-elastic Actuator Lines. The simulations investigate the inherent dynamics inside wind farms in the absence of atmospheric turbulence compared to cases with atmospheric turbulence. Resulting low-frequency structures are inherent ... in wind farms for certain turbine spacings and affect both power production and loads ...
Visualising very large phylogenetic trees in three dimensional hyperbolic space
Directory of Open Access Journals (Sweden)
Liberles David A
2004-04-01
Full Text Available Abstract Background Common existing phylogenetic tree visualisation tools are not able to display readable trees with more than a few thousand nodes. These existing methodologies are based in two-dimensional space. Results We introduce the idea of visualising phylogenetic trees in three-dimensional hyperbolic space with the Walrus graph visualisation tool and have developed a conversion tool that enables the conversion of standard phylogenetic tree formats to Walrus' format. With Walrus, it becomes possible to visualise and navigate phylogenetic trees with more than 100,000 nodes. Conclusion Walrus enables desktop visualisation of very large phylogenetic trees in three-dimensional hyperbolic space. This application is potentially useful for visualisation of the tree of life and for functional genomics derivatives, like The Adaptive Evolution Database (TAED).
Automatic Measurement in Large-Scale Space with the Laser Theodolite and Vision Guiding Technology
Directory of Open Access Journals (Sweden)
Bin Wu
2013-01-01
Full Text Available Multi-theodolite intersection measurement is a traditional approach to coordinate measurement in large-scale space. However, the procedure of manual labeling and aiming results in a low automation level and low measuring efficiency, and the measurement accuracy is easily affected by manual aiming error. Based on traditional theodolite measuring methods, this paper introduces the mechanism of the vision measurement principle and presents a novel automatic measurement method for large-scale space and large workpieces (equipment), combining laser theodolite measuring and vision guiding technologies. The measuring mark is established on the surface of the measured workpiece by the collimating laser, which is coaxial with the sight axis of the theodolite, so cooperation targets or manual marks are no longer needed. With the theoretical model data and multiresolution visual imaging and tracking technology, the method can realize the automatic, quick, and accurate measurement of large workpieces in large-scale space. Meanwhile, the impact of artificial error is reduced and the measuring efficiency is improved. Therefore, this method has significant ramifications for the measurement of large workpieces, such as measuring the geometric appearance characteristics of ships, large aircraft, and spacecraft, and deformation monitoring for large buildings and dams.
Xue, Zhang-Na; Yu, Ya-Jun; Tian, Xiao-Geng
2017-07-01
Based upon coupled thermoelasticity and the Green and Lindsay theory, new governing equations of two-temperature thermoelastic theory with a thermal nonlocal parameter are formulated. To model thermal loading of a half-space surface more realistically, a linear temperature ramping function is adopted. Laplace transform techniques are used to obtain the general analytical solutions in the Laplace domain, and the inverse Laplace transforms, based on Fourier expansion techniques, are numerically implemented to obtain the numerical solutions in the time domain. Specific attention is paid to the effect of the thermal nonlocal parameter, ramping time, and two-temperature parameter on the distributions of temperature, displacement, and stress.
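The linear temperature ramping function can be written explicitly; the notation below (ramp amplitude T_1, ramping time t_0) is assumed here, since the abstract does not fix symbols:

```latex
T(0,t) \;=\;
\begin{cases}
T_1 \, \dfrac{t}{t_0}, & 0 < t \le t_0,\\[6pt]
T_1, & t > t_0,
\end{cases}
```

so the surface temperature rises linearly over the ramping time t_0 and is held constant afterwards; varying t_0 is the ramping-time effect studied in the paper.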
On the identifiability of inertia parameters of planar Multi-Body Space Systems
Nabavi-Chashmi, Seyed Yaser; Malaek, Seyed Mohammad-Bagher
2018-04-01
This work describes a new formulation to study the identifiability characteristics of Serially Linked Multi-body Space Systems (SLMBSS). The process exploits the so-called Lagrange formulation to develop a form of the equations of motion that is linear with respect to the system inertia parameters (IPs). Having developed a specific form of the regressor matrix, we aim to expedite the identification process. The new approach allows analytical as well as numerical identification and identifiability analysis for different SLMBSS configurations. Moreover, the explicit forms of the SLMBSS identifiable parameters are derived by analyzing the identifiability characteristics of the robot. We further show that any SLMBSS designed with variable-configuration joints allows all IPs to be identified by comparing two successive identification outcomes. This feature paves the way to designing a new class of SLMBSS for which accurate identification of all IPs is at hand. Different case studies reveal that the proposed formulation provides fast and accurate results, as required by space applications. Further studies might be necessary for cases where the planar-body assumption becomes inaccurate.
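The linear-in-parameters structure, and the role of rank in identifiability, can be sketched as below; the regressor columns are invented basis functions with a deliberate linear dependence, not the SLMBSS regressor:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear-in-parameters form Y(q, qd, qdd) @ theta = tau. The regressor
# columns are invented basis functions with a deliberate dependence
# (column 3 = column 0 + column 2), standing in for the SLMBSS regressor.
def regressor_row(q, qd, qdd):
    return [qdd, qd, np.sin(q), qdd + np.sin(q)]

theta_true = np.array([1.2, 0.4, 0.8, 0.0])

# Stack the regressor over a sampled trajectory; form "measured" torques.
samples = rng.uniform(-1.0, 1.0, (200, 3))
Y = np.array([regressor_row(*s) for s in samples])
tau = Y @ theta_true + rng.normal(0.0, 0.01, 200)

# Identifiability analysis: rank deficiency of Y flags parameter
# combinations that no trajectory of this form can separate.
rank = np.linalg.matrix_rank(Y)
print("columns:", Y.shape[1], "rank:", rank)   # rank 3 < 4 columns

# Least squares recovers only the identifiable combinations
# (the minimum-norm solution within the unidentifiable subspace).
theta_hat, *_ = np.linalg.lstsq(Y, tau, rcond=None)
print(np.round(theta_hat, 2))
```

Changing the joint configuration changes the regressor columns, which is why comparing two successive identifications can resolve combinations that a single configuration leaves rank-deficient.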
GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE
International Nuclear Information System (INIS)
Mikkelsen, K.; Næss, S. K.; Eriksen, H. K.
2013-01-01
We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par can be while still maintaining reasonable computational efficiency; we find that N_par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme.
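The grid-cell-by-grid-cell mapping in order of decreasing likelihood can be sketched with a best-first search; the two-dimensional Gaussian log-likelihood and the stopping threshold below are illustrative (Snake itself handles on the order of twelve dimensions):

```python
import heapq

# Toy 2D log-likelihood (a single Gaussian peak on a 61x61 grid); Snake
# maps likelihoods in up to ~12 dimensions in the same fashion.
N = 61

def loglike(i, j):
    x, y = (i - N // 2) / 10.0, (j - N // 2) / 10.0
    return -0.5 * (x * x + (y / 0.5) ** 2)

def snake_like_scan(drop=3.0):
    """Visit grid cells in order of decreasing likelihood, starting at the
    peak, and stop expanding once cells fall `drop` log-units below it."""
    start = (N // 2, N // 2)
    peak = loglike(*start)
    heap = [(-peak, start)]
    seen = {start}
    visited = []
    while heap:
        negL, (i, j) = heapq.heappop(heap)
        if -negL < peak - drop:
            continue          # below threshold: this frontier dies here
        visited.append(((i, j), -negL))
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (i + di, j + dj)
            if 0 <= nb[0] < N and 0 <= nb[1] < N and nb not in seen:
                seen.add(nb)
                heapq.heappush(heap, (-loglike(*nb), nb))
    return visited

cells = snake_like_scan()
print(f"kept {len(cells)} of {N * N} cells")   # low-likelihood cells skipped
```

Cells come off the priority queue in non-increasing likelihood order, and most of the grid is never retained: the saving that tames the curse of dimensionality.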
Extraterrestrial processing and manufacturing of large space systems. Volume 3: Executive summary
Miller, R. H.; Smith, D. B. S.
1979-01-01
Facilities and equipment are defined for refining lunar material, delivered to a 'space manufacturing facility' in beneficiated, primary-processed quality, to commercial grade. The manufacturing facilities and equipment for producing elements of large space systems from these materials are also defined, and programmatic assessments of the concepts are provided. In-space production processes for solar cells (by vapor deposition) and arrays, structures and joints, conduits, waveguides, RF equipment, radiators, wire cables, converters, and others are described.
Operational definition of (brane-induced) space-time and constraints on the fundamental parameters
International Nuclear Information System (INIS)
Maziashvili, Michael
2008-01-01
First we contemplate the operational definition of space-time in four dimensions in light of basic principles of quantum mechanics and general relativity and consider some of its phenomenological consequences. The quantum gravitational fluctuations of the background metric that comes through the operational definition of space-time are controlled by the Planck scale and are therefore strongly suppressed. Then we extend our analysis to the braneworld setup with low fundamental scale of gravity. It is observed that in this case the quantum gravitational fluctuations on the brane may become unacceptably large. The magnification of fluctuations is not linked directly to the low quantum gravity scale but rather to the higher-dimensional modification of Newton's inverse square law at relatively large distances. For models with compact extra dimensions the shape modulus of extra space can be used as a most natural and safe stabilization mechanism against these fluctuations
Precision Parameter Estimation and Machine Learning
Wandelt, Benjamin D.
2008-12-01
I discuss the strategy of "Acceleration by Parallel Precomputation and Learning" (APPLe), which can vastly accelerate parameter estimation in high-dimensional parameter spaces with costly likelihood functions, using trivially parallel computing to speed up sequential exploration of parameter space. This strategy combines the power of distributed computing with machine learning and Markov-chain Monte Carlo techniques to efficiently explore a likelihood function, posterior distribution, or χ²-surface. This strategy is particularly successful in cases where computing the likelihood is costly and the number of parameters is moderate or large. We apply this technique to two central problems in cosmology: the solution of the cosmological parameter estimation problem with sufficient accuracy for the Planck data using PICo, and the detailed calculation of cosmological helium and hydrogen recombination with RICO. Since the APPLe approach is designed to be able to use massively parallel resources to speed up problems that are inherently serial, we can bring the power of distributed computing to bear on parameter estimation problems. We have demonstrated this with the Cosmology@Home project.
Scoping the parameter space for demo and the engineering test
International Nuclear Information System (INIS)
Meier, W R.
1999-01-01
In our IFE development plan, we have set a goal of building an Engineering Test Facility (ETF) for a total cost of $2B and a Demo for $3B. In Mike Campbell's presentation at Madison, we included a viewgraph with an example Demo that had 80 to 250 MWe of net power and showed a plausible argument that it could cost less than $3B. In this memo, I examine the design space for the Demo and then briefly for the ETF. Instead of attempting to estimate the costs of the drivers, I pose the question in a way that defines R&D goals: as a function of key design and performance parameters, how much can the driver cost if the total facility cost is limited to the specified goal? The design parameters examined for the Demo include target gain, driver energy, driver efficiency, and net power output. For the ETF, the design parameters are target gain, driver energy, and target yield. The resulting graphs of allowable driver cost determine the goals that the driver R&D programs must seek to meet.
Takizawa, Kenji; Tezduyar, Tayfun E.; Otoguro, Yuto
2018-04-01
Stabilized methods, which have been very common in flow computations for many years, typically involve stabilization parameters, and discontinuity-capturing (DC) parameters if the method is supplemented with a DC term. Various well-performing stabilization and DC parameters have been introduced for stabilized space-time (ST) computational methods in the context of the advection-diffusion equation and the Navier-Stokes equations of incompressible and compressible flows. These parameters were all originally intended for finite element discretization but are quite often also used for isogeometric discretization. The stabilization and DC parameters we present here for ST computations are in the context of the advection-diffusion equation and the Navier-Stokes equations of incompressible flows; they target isogeometric discretization and are also applicable to finite element discretization. The parameters are based on a direction-dependent element length expression. The expression is the outcome of an easy-to-understand derivation. The key components of the derivation are mapping the direction vector from the physical ST element to the parent ST element, accounting for the discretization spacing along each of the parametric coordinates, and mapping what we have in the parent element back to the physical element. The test computations we present for pure-advection cases show that the proposed parameters result in good solution profiles.
International Nuclear Information System (INIS)
Surzhikov, S.
2012-01-01
Graphical abstract: It has been shown that different coupled vibrational dissociation models, when applied to coupled radiative gasdynamic problems for large space vehicles, exert a noticeable effect on the radiative heating of the surface at orbital entry at high altitudes (h ⩾ 70 km). This influence decreases with decreasing space vehicle size. The figure shows translational (solid lines) and vibrational (dashed lines) temperatures in the shock layer with (circle markers) and without (triangle markers) radiative-gasdynamic interaction for one trajectory point of an entering space vehicle. Highlights: ► Nonequilibrium dissociation processes exert an effect on the radiative heating of space vehicles (SV). ► The radiation gas dynamic interaction enhances this influence. ► This influence increases with increasing SV size. - Abstract: The radiative aerothermodynamics of large-scale space vehicles is considered for Earth orbital entry at zero angle of attack. A brief description of the radiative gasdynamic model of physically and chemically nonequilibrium, viscous, heat-conductive, and radiative gas of complex chemical composition is presented. Radiation gasdynamic (RadGD) interaction in the high-temperature shock layer is studied by means of numerical experiment. It is shown that radiation-gasdynamic coupling for orbital space vehicles of large size is important for the high-altitude part of the entry trajectory. It is demonstrated that the use of different models of coupled vibrational dissociation (CVD) under conditions of RadGD interaction gives rise to temperature variation in the shock layer and, as a result, leads to significant variation of the radiative heating of the space vehicle.
Directory of Open Access Journals (Sweden)
Haiwen Li
2018-01-01
Full Text Available The estimation speed of positioning parameters determines the effectiveness of a positioning system. The time of arrival (TOA) and direction of arrival (DOA) parameters can be estimated by the space-time two-dimensional multiple signal classification (2D-MUSIC) algorithm for an array antenna. However, this algorithm needs much time to complete the two-dimensional pseudo-spectral peak search, which makes it difficult to apply in practice. Aiming at solving this problem, a fast estimation method of space-time two-dimensional positioning parameters based on the Hadamard product is proposed for orthogonal frequency division multiplexing (OFDM) systems, and the Cramér-Rao bound (CRB) is also presented. Firstly, according to the channel frequency domain response vector of each array, the channel frequency domain estimation vector is constructed using the Hadamard product form containing location information. Then, the autocorrelation matrix of the channel response vector for the extended array element in the frequency domain and the noise subspace are calculated successively. Finally, by combining the closed-form solution and parameter pairing, fast joint estimation of time delay and arrival direction is accomplished. The theoretical analysis and simulation results show that the proposed algorithm significantly reduces the computational complexity and achieves estimation accuracy that is not only better than the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm and the 2D matrix pencil (MP) algorithm but also close to the 2D-MUSIC algorithm. Moreover, the proposed algorithm also has a certain adaptability to multipath environments and effectively improves the ability to quickly acquire location parameters.
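For orientation, the noise-subspace pseudo-spectrum search that 2D-MUSIC performs over two dimensions is shown below in its classical one-dimensional DOA form (array geometry, source angles, and noise level are assumed for the illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
M, snapshots = 8, 200                    # array elements, time snapshots
d = 0.5                                  # element spacing in wavelengths
true_doas = np.deg2rad([-20.0, 35.0])    # assumed source directions

def steering(theta):
    return np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta))

# Simulated snapshots: two uncorrelated sources plus sensor noise.
A = np.column_stack([steering(t) for t in true_doas])
S = rng.normal(size=(2, snapshots)) + 1j * rng.normal(size=(2, snapshots))
noise = 0.1 * (rng.normal(size=(M, snapshots))
               + 1j * rng.normal(size=(M, snapshots)))
X = A @ S + noise

# Noise subspace from the sample covariance: the eigenvectors beyond the
# source count; steering vectors at true DOAs are ~orthogonal to it.
Rxx = X @ X.conj().T / snapshots
_, V = np.linalg.eigh(Rxx)               # eigenvalues ascending
En = V[:, : M - 2]                       # noise subspace (M - 2 columns)

# One-dimensional pseudo-spectrum peak search over a 0.5-degree grid.
grid = np.deg2rad(np.arange(-90.0, 90.0, 0.5))
p = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
              for t in grid])
peaks = [k for k in range(1, len(p) - 1) if p[k - 1] < p[k] > p[k + 1]]
top2 = sorted(peaks, key=lambda k: p[k])[-2:]
est = sorted(float(np.rad2deg(grid[k])) for k in top2)
print([round(e, 1) for e in est])        # close to the true DOAs
```

In 2D-MUSIC the same search runs over a delay-angle grid, which is the quadratic-cost step the Hadamard-product method replaces with a closed-form solution.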
An Autonomous Sensor Tasking Approach for Large Scale Space Object Cataloging
Linares, R.; Furfaro, R.
The field of Space Situational Awareness (SSA) has progressed over the last few decades with new sensors coming online, the development of new approaches for making observations, and new algorithms for processing them. Although there has been success in the development of new approaches, a missing piece is the translation of SSA goals into sensor and resource allocation, otherwise known as the Sensor Management Problem (SMP). This work solves the SMP using an artificial intelligence approach called Deep Reinforcement Learning (DRL). Stable methods for training DRL approaches based on neural networks exist, but most of these approaches are not suitable for high dimensional systems. The Asynchronous Advantage Actor-Critic (A3C) method is a recently developed and effective approach for high dimensional systems, and this work leverages these results and applies the approach to decision making in SSA. The decision space for SSA problems can be high dimensional, even for the tasking of a single telescope. Since the number of space objects (SOs) in orbit is large, each sensor will have a large number of possible actions at a given time. Therefore, efficient DRL approaches are required when solving the SMP for SSA. This work develops an A3C-based method for DRL applied to SSA sensor tasking. One of the key benefits of DRL approaches is the ability to handle high dimensional data. For example, DRL methods have been applied to image processing for autonomous driving: a 256x256 RGB image has 196,608 input values (256x256x3), which is very high dimensional, and deep learning approaches routinely take such images as inputs. Therefore, when applied to the whole catalog, the DRL approach offers the ability to solve this high dimensional problem. This work has the potential to solve, for the first time, the non-myopic sensor tasking problem for the whole SO catalog (over 22,000 objects), providing a truly revolutionary result.
Extraterrestrial processing and manufacturing of large space systems, volume 1, chapters 1-6
Miller, R. H.; Smith, D. B. S.
1979-01-01
Space program scenarios for the production of large space structures from lunar materials are defined. The concept of the space manufacturing facility (SMF) is presented. The manufacturing processes and equipment for the SMF are defined, and conceptual layouts are described for the production of solar cells and arrays, structures and joints, conduits, waveguides, RF equipment, radiators, wire cables, and converters. A 'reference' SMF was designed, and its operational requirements are described.
GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE
Energy Technology Data Exchange (ETDEWEB)
Mikkelsen, K.; Næss, S. K.; Eriksen, H. K., E-mail: kristin.mikkelsen@astro.uio.no [Institute of Theoretical Astrophysics, University of Oslo, P.O. Box 1029, Blindern, NO-0315 Oslo (Norway)
2013-11-10
We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par can be, while still maintaining reasonable computational efficiency; we find that N_par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme.
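The cell-by-cell exploration described above can be sketched as a best-first search over the grid: always expand the most likely frontier cell, and stop once everything left is below a log-likelihood threshold. The code below is an illustrative toy under that interpretation, not the published Snake implementation:

```python
import heapq

def snake_explore(loglike, start, shape, drop=10.0):
    """Expand grid cells outward from `start` in order of decreasing
    log-likelihood; stop when the best remaining frontier cell lies
    more than `drop` log-likelihood units below the maximum seen.
    Only the region of non-negligible likelihood is ever evaluated."""
    visited = {start}
    best = loglike(start)
    frontier = [(-best, start)]
    while frontier:
        neg_ll, cell = heapq.heappop(frontier)  # most likely frontier cell
        if -neg_ll < best - drop:
            break                               # everything left is negligible
        best = max(best, -neg_ll)
        for dim in range(len(shape)):           # push the grid neighbours
            for step in (-1, 1):
                nb = list(cell); nb[dim] += step; nb = tuple(nb)
                if nb in visited or not all(0 <= c < s for c, s in zip(nb, shape)):
                    continue
                visited.add(nb)
                heapq.heappush(frontier, (-loglike(nb), nb))
    return visited

# 2D Gaussian log-likelihood on a 101x101 grid, peaked at the centre.
ll = lambda c: -0.5 * (((c[0] - 50) / 3.0) ** 2 + ((c[1] - 50) / 3.0) ** 2)
cells = snake_explore(ll, (50, 50), (101, 101))
print(len(cells))  # a few hundred cells instead of all 10201
```

Because the heap always pops the highest-likelihood frontier cell, the first popped cell below the threshold guarantees all remaining cells are below it too, which is what licenses the early stop.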
Space weather and space anomalies
Directory of Open Access Journals (Sweden)
L. I. Dorman
2005-11-01
Full Text Available A large database of anomalies, registered by 220 satellites in different orbits over the period 1971-1994, has been compiled. For the first time, data from 49 Russian Kosmos satellites have been included in a statistical analysis. The database also contains a large set of daily and hourly space weather parameters. A series of statistical analyses made it possible to quantify, for different satellite orbits, space weather conditions on the days characterized by anomaly occurrences. In particular, very intense fluxes (>1000 pfu at energy >10 MeV) of solar protons are linked to anomalies registered by satellites in high-altitude (>15000 km), near-polar (inclination >55°) orbits typical for navigation satellites, such as those used in the GPS network, NAVSTAR, etc. (the rate of anomalies increases by a factor ~20), and to a much smaller extent to anomalies in geostationary orbits (they increase by a factor ~4). Direct and indirect connections between anomaly occurrence and geomagnetic perturbations are also discussed.
Cosmological special relativity the large scale structure of space, time and velocity
Carmeli, Moshe
1997-01-01
This book deals with special relativity theory and its application to cosmology. It presents Einstein's theory of space and time in detail, and describes the large scale structure of space, time and velocity as a new cosmological special relativity. A cosmological Lorentz-like transformation, which relates events at different cosmic times, is derived and applied. A new law of addition of cosmic times is obtained, and the inflation of the space at the early universe is derived, both from the cosmological transformation. The book will be of interest to cosmologists, astrophysicists and theoretical physicists.
Cosmological special relativity the large scale structure of space, time and velocity
Carmeli, Moshe
2002-01-01
This book presents Einstein's theory of space and time in detail, and describes the large-scale structure of space, time and velocity as a new cosmological special relativity. A cosmological Lorentz-like transformation, which relates events at different cosmic times, is derived and applied. A new law of addition of cosmic times is obtained, and the inflation of the space at the early universe is derived, both from the cosmological transformation. The relationship between cosmic velocity, acceleration and distances is given. In the appendices gravitation is added in the form of a cosmological g
Directory of Open Access Journals (Sweden)
Tibor Tot
2012-01-01
Full Text Available Breast cancer subgross morphological parameters (disease extent, lesion distribution, and tumor size) provide significant prognostic information and guide therapeutic decisions. Modern multimodality radiological imaging can determine these parameters with increasing accuracy in most patients. Large-format histopathology preserves the spatial relationship of the tumor components and their relationship to the resection margins and has clear advantages over traditional routine pathology techniques. We report a series of 1000 consecutive breast cancer cases worked up with large-format histology with detailed radiological-pathological correlation. We confirmed that breast carcinomas often exhibit complex subgross morphology in both early and advanced stages. Half of the cases were extensive tumors and occupied a tissue space ≥40 mm in the largest dimension. Because both in situ and invasive tumor components may exhibit unifocal, multifocal, and diffuse lesion distribution, 17 different breast cancer growth patterns can be observed. Combining in situ and invasive tumor components, most cases fall into three aggregate growth patterns: unifocal (36%), multifocal (35%), and diffuse (28%). Large-format histology categories of tumor size and disease extent were concordant with radiological measurements in approximately 80% of the cases. Noncalcified, low-grade in situ foci and invasive tumor foci <5 mm were the most frequent causes of discrepant findings.
International Nuclear Information System (INIS)
Passos, E.J.V. de; Toledo Piza, A.F.R. de.
The properties of the subspaces of the many-body Hilbert space which are associated with the use of the Generator Coordinate Method (GCM) in connection with one-parameter, and with two-conjugate-parameter, families of generator states are examined in detail. It is shown that natural orthonormal base vectors in each case are immediately related to Peierls-Yoccoz and Peierls-Thouless projections, respectively. Through the formal consideration of a canonical transformation to collective, P and Q, and intrinsic degrees of freedom, the properties of the GCM subspaces with respect to the kinematical separation of these degrees of freedom are discussed. Using the ideas developed in this paper, an application is made a) to translation; b) to illustrate the qualitative understanding of the content of existing GCM calculations of giant resonances in light nuclei; and c) to the definition of appropriate asymptotic states in current GCM descriptions of scattering.
EFT of large scale structures in redshift space
Lewandowski, Matthew; Senatore, Leonardo; Prada, Francisco; Zhao, Cheng; Chuang, Chia-Hsun
2018-03-01
We further develop the description of redshift-space distortions within the effective field theory of large scale structures. First, we generalize the counterterms to include the effect of baryonic physics and primordial non-Gaussianity. Second, we evaluate the IR resummation of the dark matter power spectrum in redshift space. This requires us to identify a controlled approximation that makes the numerical evaluation straightforward and efficient. Third, we compare the predictions of the theory at one loop with the power spectrum from numerical simulations up to ℓ = 6. We find that the IR resummation allows us to correctly reproduce the baryon acoustic oscillation peak. The k reach (or, equivalently, the precision for a given k) depends on additional counterterms that need to be matched to simulations. Since the nonlinear scale for the velocity is expected to be longer than the one for the overdensity, we consider a minimal and a nonminimal set of counterterms. The quality of our numerical data makes it hard to firmly establish the performance of the theory at high wave numbers. Within this limitation, we find that the theory at redshift z = 0.56 and up to ℓ = 2 matches the data at the percent level approximately up to k ∼ 0.13 h Mpc⁻¹ or k ∼ 0.18 h Mpc⁻¹, depending on the number of counterterms used, with a potentially large improvement over former analytical techniques.
International Nuclear Information System (INIS)
Yang Jianyi; Yu Zuguo; Anh, Vo
2009-01-01
The Schneider and Wrede hydrophobicity scale of amino acids and the 6-letter model of protein are proposed to study the relationship between the primary structure and the secondary structural classification of proteins. Two kinds of multifractal analyses are performed on the two measures obtained from these two kinds of data on large proteins. Nine parameters from the multifractal analyses are considered to construct the parameter spaces. Each protein is represented by one point in these spaces. A procedure is proposed to separate large proteins in the α, β, α + β and α/β structural classes in these parameter spaces. Fisher's linear discriminant algorithm is used to assess our clustering accuracy on the 49 selected large proteins. Numerical results indicate that the discriminant accuracies are satisfactory. In particular, they reach 100.00% and 84.21% in separating the α proteins from the {β, α + β, α/β} proteins in a parameter space; 92.86% and 86.96% in separating the β proteins from the {α + β, α/β} proteins in another parameter space; 91.67% and 83.33% in separating the α/β proteins from the α + β proteins in the last parameter space.
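The classification step in the record above relies on Fisher's linear discriminant in a multifractal parameter space. The sketch below implements the standard two-class Fisher direction on synthetic clusters; the three-dimensional toy data are stand-ins for the nine multifractal parameters, not the 49 proteins of the study:

```python
import numpy as np

def fisher_direction(X0, X1):
    """Two-class Fisher linear discriminant: w = Sw^{-1} (m1 - m0),
    where Sw is the pooled within-class scatter matrix."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0.T, bias=True) * len(X0) + np.cov(X1.T, bias=True) * len(X1)
    return np.linalg.solve(Sw, m1 - m0)

rng = np.random.default_rng(1)
# Two synthetic "parameter space" clusters (e.g. alpha vs. non-alpha class).
X0 = rng.normal([0.0, 0.0, 0.0], 0.5, size=(40, 3))
X1 = rng.normal([2.0, 1.0, -1.0], 0.5, size=(40, 3))
w = fisher_direction(X0, X1)
# Classify by projecting onto w and thresholding at the midpoint.
thr = 0.5 * (X0 @ w).mean() + 0.5 * (X1 @ w).mean()
acc = ((X0 @ w < thr).mean() + (X1 @ w > thr).mean()) / 2
print(acc)  # near 1.0 for well-separated clusters
```

The 100% and ~84-92% accuracies quoted in the abstract correspond to exactly this kind of projection-and-threshold assessment, performed pairwise between structural classes.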
Near-Space TOPSAR Large-Scene Full-Aperture Imaging Scheme Based on Two-Step Processing
Directory of Open Access Journals (Sweden)
Qianghui Zhang
2016-07-01
Full Text Available Free of the constraints of orbit mechanisms, weather conditions and minimum antenna area, synthetic aperture radar (SAR) equipped on a near-space platform is more suitable for sustained large-scene imaging compared with its spaceborne and airborne counterparts. Terrain observation by progressive scans (TOPS), which is a novel wide-swath imaging mode and allows the beam of the SAR to scan along the azimuth, can reduce the time of echo acquisition for a large scene. Thus, near-space TOPS-mode SAR (NS-TOPSAR) provides a new opportunity for sustained large-scene imaging. An efficient full-aperture imaging scheme for NS-TOPSAR is proposed in this paper. In this scheme, firstly, two-step processing (TSP) is adopted to eliminate the Doppler aliasing of the echo. Then, the data is focused in the two-dimensional frequency domain (FD) based on Stolt interpolation. Finally, a modified TSP (MTSP) is performed to remove the azimuth aliasing. Simulations are presented to demonstrate the validity of the proposed imaging scheme for near-space large-scene imaging applications.
Modified distribution parameter for churn-turbulent flows in large diameter channels
Energy Technology Data Exchange (ETDEWEB)
Schlegel, J.P., E-mail: jschlege@purdue.edu; Macke, C.J.; Hibiki, T.; Ishii, M.
2013-10-15
Highlights: • Void fraction data collected in pipe sizes up to 0.304 m using impedance void meters. • Flow conditions extend to transition between churn-turbulent and annular flow. • Flow regime identification results agree with previous studies. • A new model for the distribution parameter in churn-turbulent flow is proposed. -- Abstract: Two phase flows in large diameter channels are important in a wide range of industrial applications, but especially in analysis of nuclear reactor safety for the prediction of BWR behavior and safety analysis in PWRs. To remedy an inability of current drift-flux models to accurately predict the void fraction in churn-turbulent flows in large diameter pipes, extensive experiments have been performed in pipes with diameters of 0.152 m, 0.203 m and 0.304 m to collect area-averaged void fraction data using electrical impedance void meters. The standard deviation and skewness of the impedance meter signal have been used to characterize the flow regime and confirm previous flow regime transition results. By treating churn-turbulent flow as a transition between cap-bubbly dispersed flow and annular separated flow and using a linear ramp, the distribution parameter has been modified for churn-turbulent flow. The modified distribution parameter has been evaluated through comparison of the void fraction predicted by the drift-flux model and the measured void fraction.
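The "linear ramp" treatment described in the highlights, which blends the distribution parameter between a cap-bubbly endpoint and an annular endpoint, can be sketched as a simple interpolation. The endpoint C0 values and the void-fraction ramp limits below are placeholders, not the correlation fitted to the data in the paper:

```python
def distribution_parameter(alpha, c0_cap=1.2, c0_ann=1.0,
                           alpha_lo=0.3, alpha_hi=0.7):
    """Linear-ramp blend of the drift-flux distribution parameter C0
    between two flow-regime endpoints, as a function of void fraction
    alpha.  All numbers here are illustrative placeholders."""
    if alpha <= alpha_lo:
        return c0_cap                     # cap-bubbly dispersed endpoint
    if alpha >= alpha_hi:
        return c0_ann                     # annular separated endpoint
    w = (alpha - alpha_lo) / (alpha_hi - alpha_lo)
    return (1.0 - w) * c0_cap + w * c0_ann

print(distribution_parameter(0.5))  # halfway between the two endpoints
```

In the drift-flux model this C0 then enters the void-fraction prediction through ⟨α⟩ = ⟨j_g⟩ / (C0⟨j⟩ + V_gj), so the ramp directly controls the predicted void fraction across the churn-turbulent transition.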
Modified distribution parameter for churn-turbulent flows in large diameter channels
International Nuclear Information System (INIS)
Schlegel, J.P.; Macke, C.J.; Hibiki, T.; Ishii, M.
2013-01-01
Highlights: • Void fraction data collected in pipe sizes up to 0.304 m using impedance void meters. • Flow conditions extend to transition between churn-turbulent and annular flow. • Flow regime identification results agree with previous studies. • A new model for the distribution parameter in churn-turbulent flow is proposed. -- Abstract: Two phase flows in large diameter channels are important in a wide range of industrial applications, but especially in analysis of nuclear reactor safety for the prediction of BWR behavior and safety analysis in PWRs. To remedy an inability of current drift-flux models to accurately predict the void fraction in churn-turbulent flows in large diameter pipes, extensive experiments have been performed in pipes with diameters of 0.152 m, 0.203 m and 0.304 m to collect area-averaged void fraction data using electrical impedance void meters. The standard deviation and skewness of the impedance meter signal have been used to characterize the flow regime and confirm previous flow regime transition results. By treating churn-turbulent flow as a transition between cap-bubbly dispersed flow and annular separated flow and using a linear ramp, the distribution parameter has been modified for churn-turbulent flow. The modified distribution parameter has been evaluated through comparison of the void fraction predicted by the drift-flux model and the measured void fraction
Parameter retrieval of chiral metamaterials based on the state-space approach.
Zarifi, Davoud; Soleimani, Mohammad; Abdolali, Ali
2013-08-01
This paper deals with the introduction of an approach for the electromagnetic characterization of homogeneous chiral layers. The proposed method is based on the state-space approach and the properties of a 4×4 state transition matrix. First, the forward problem analysis through the state-space method is reviewed, and properties of the state transition matrix of a chiral layer are presented and proved as two theorems. The formulation of the proposed electromagnetic characterization method is then presented. In this method, scattering data for a linearly polarized plane wave normally incident on a homogeneous chiral slab are combined with properties of the state transition matrix to provide a powerful characterization method. The main difference with respect to other well-established retrieval procedures based on the scattering parameters lies in the direct computation of the transfer matrix of the slab, as opposed to the conventional calculation of the propagation constant and impedance of the modes supported by the medium. The proposed approach avoids the nonlinearity of the problem but requires enough independent equations, which are obtained from the properties of the state transition matrix. To demonstrate the applicability and validity of the method, the constitutive parameters of two well-known dispersive chiral metamaterial structures at microwave frequencies are retrieved. The results show that the proposed method is robust and reliable.
Space dependence of reactivity parameters on reactor dynamic perturbation measurements
International Nuclear Information System (INIS)
Maletti, R.; Ziegenbein, D.
1985-01-01
Practical application of reactor-dynamic perturbation measurements for on-power determination of the differential reactivity worth of control rods and the power coefficients of reactivity has shown a significant dependence of the parameters on the position of the ex-core detectors. The space dependence of the neutron flux signal in the core of a VVER-440-type reactor was measured by means of 60 self-powered neutron detectors. The largest neutron flux alterations are located close to the moved control rods and at the height of the perturbation. By means of computations, detector positions can be found in the core for which the one-point model is almost valid. (author)
On the impact of large angle CMB polarization data on cosmological parameters
Energy Technology Data Exchange (ETDEWEB)
Lattanzi, Massimiliano; Mandolesi, Nazzareno; Natoli, Paolo [Dipartimento di Fisica e Scienze della Terra, Università di Ferrara, Via Giuseppe Saragat 1, I-44122 Ferrara (Italy); Burigana, Carlo; Gruppuso, Alessandro; Trombetti, Tiziana [Istituto Nazionale di Astrofisica, Istituto di Astrofisica Spaziale e Fisica Cosmica di Bologna, Via Piero Gobetti 101, I-40129 Bologna (Italy); Gerbino, Martina [The Oskar Klein Centre for Cosmoparticle Physics, Department of Physics, Stockholm University, AlbaNova, SE-106 91 Stockholm (Sweden); Polenta, Gianluca [Agenzia Spaziale Italiana Science Data Center, Via del Politecnico snc, 00133, Roma (Italy); Salvati, Laura, E-mail: lattanzi@fe.infn.it, E-mail: burigana@iasfbo.inaf.it, E-mail: martina.gerbino@fysik.su.se, E-mail: gruppuso@iasfbo.inaf.it, E-mail: nazzareno.mandolesi@unife.it, E-mail: paolo.natoli@unife.it, E-mail: gianluca.polenta@asdc.asi.it, E-mail: laura.salvati@ias.u-psud.fr, E-mail: trombetti@iasfbo.inaf.it [Dipartimento di Fisica, Università La Sapienza, Piazzale Aldo Moro 2, I-00185 Roma (Italy)
2017-02-01
We study the impact of the large-angle CMB polarization datasets publicly released by the WMAP and Planck satellites on the estimation of cosmological parameters of the ΛCDM model. To complement large-angle polarization, we consider the high resolution (or 'high-ℓ') CMB datasets from either WMAP or Planck, as well as CMB lensing as traced by Planck's measured four-point correlation function. In the case of WMAP, we compute the large-angle polarization likelihood starting over from low resolution frequency maps and their covariance matrices, and perform our own foreground mitigation technique, which includes as a possible alternative Planck 353 GHz data to trace polarized dust. We find that the latter choice induces a downward shift in the optical depth τ, roughly of order 2σ, robust to the choice of the complementary high resolution dataset. When the Planck 353 GHz data are consistently used to minimize polarized dust emission, WMAP and Planck 70 GHz large-angle polarization data are in remarkable agreement: by combining them we find τ = 0.066^{+0.012}_{−0.013}, again very stable against the particular choice for high-ℓ data. We find that the amplitude of primordial fluctuations A_s, notoriously degenerate with τ, is the parameter second most affected by the assumptions on polarized dust removal, but the other parameters are also affected, typically between 0.5 and 1σ. In particular, cleaning dust with Planck's 353 GHz data imposes a 1σ downward shift in the value of the Hubble constant H_0, significantly contributing to the tension reported between CMB based and direct measurements of the present expansion rate. On the other hand, we find that the appearance of the so-called low-ℓ anomaly, a well-known tension between the high- and low-resolution CMB anisotropy amplitude, is not significantly affected by the details of large-angle polarization, or by the particular high-ℓ dataset employed.
Evasive Maneuvers in Space Debris Environment and Technological Parameters
Directory of Open Access Journals (Sweden)
Antônio D. C. Jesus
2012-01-01
Full Text Available We present a study of the collisional dynamics between space debris and an operational vehicle in LEO. We adopted an approach based on the relative dynamics between objects on a collisional course with a short warning time, and established a semianalytical solution for the final trajectories of these objects. Our results show that there are angular ranges in 3D, in addition to the initial conditions, that favor collisions. These results allowed the investigation of a range of technological parameters of the spacecraft (e.g., fuel reserve) that allow a safe evasive maneuver (e.g., the time available for the maneuver). The numerical model was tested for different values of the impact velocity and relative distance between the approaching objects.
First-principles calculations of Moessbauer hyperfine parameters for solids and large molecules
International Nuclear Information System (INIS)
Guenzburger, Diana; Ellis, D.E.; Zeng, Z.
1997-10-01
Electronic structure calculations based on Density Functional theory were performed for solids and large molecules. The solids were represented by clusters of 60-100 atoms embedded in the potential of the external crystal. Magnetic moments and Moessbauer hyperfine parameters were derived. (author)
Energy Technology Data Exchange (ETDEWEB)
Nunez, Dario; Zavala, Jesus; Nellen, Lukas; Sussman, Roberto A [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico (ICN-UNAM), AP 70-543, Mexico 04510 DF (Mexico); Cabral-Rosetti, Luis G [Departamento de Posgrado, Centro Interdisciplinario de Investigacion y Docencia en Educacion Tecnica (CIIDET), Avenida Universidad 282 Pte., Col. Centro, Apartado Postal 752, C. P. 76000, Santiago de Queretaro, Qro. (Mexico); Mondragon, Myriam, E-mail: nunez@nucleares.unam.mx, E-mail: jzavala@nucleares.unam.mx, E-mail: jzavala@shao.ac.cn, E-mail: lukas@nucleares.unam.mx, E-mail: sussman@nucleares.unam.mx, E-mail: lgcabral@ciidet.edu.mx, E-mail: myriam@fisica.unam.mx [Instituto de Fisica, Universidad Nacional Autonoma de Mexico (IF-UNAM), Apartado Postal 20-364, 01000 Mexico DF (Mexico); Collaboration: For the Instituto Avanzado de Cosmologia, IAC
2008-05-15
We derive an expression for the entropy of a dark matter halo described using a Navarro-Frenk-White model with a core. The comparison of this entropy with that of dark matter in the freeze-out era allows us to constrain the parameter space in mSUGRA models. Moreover, combining these constraints with the ones obtained from the usual abundance criterion and demanding that these criteria be consistent with the 2σ bounds for the abundance of dark matter, 0.112 ≤ Ω_DM h² ≤ 0.122, we are able to clearly identify validity regions among the values of tanβ, which is one of the parameters of the mSUGRA model. We found that for the regions of the parameter space explored, small values of tanβ are not favored; only for tanβ ≃ 50 are the two criteria significantly consistent. In the region where the two criteria are consistent we also found a lower bound for the neutralino mass, m_χ ≥ 141 GeV.
International Nuclear Information System (INIS)
Núñez, Darío; Zavala, Jesús; Nellen, Lukas; Sussman, Roberto A; Cabral-Rosetti, Luis G; Mondragón, Myriam
2008-01-01
We derive an expression for the entropy of a dark matter halo described using a Navarro–Frenk–White model with a core. The comparison of this entropy with that of dark matter in the freeze-out era allows us to constrain the parameter space in mSUGRA models. Moreover, combining these constraints with the ones obtained from the usual abundance criterion and demanding that these criteria be consistent with the 2σ bounds for the abundance of dark matter: 0.112≤Ω DM h 2 ≤0.122, we are able to clearly identify validity regions among the values of tanβ, which is one of the parameters of the mSUGRA model. We found that for the regions of the parameter space explored, small values of tanβ are not favored; only for tan β ≃ 50 are the two criteria significantly consistent. In the region where the two criteria are consistent we also found a lower bound for the neutralino mass, m χ ≥141 GeV
Finding viable models in SUSY parameter spaces with signal specific discovery potential
Burgess, Thomas; Lindroos, Jan Øye; Lipniacka, Anna; Sandaker, Heidi
2013-08-01
Recent results from ATLAS giving a Higgs mass of 125.5 GeV further constrain already highly constrained supersymmetric models such as the pMSSM or CMSSM/mSUGRA. As a consequence, finding potentially discoverable and non-excluded regions of model parameter space is becoming increasingly difficult. Several groups have invested considerable effort in studying the consequences of Higgs mass bounds, upper limits on rare B-meson decays, and limits on the relic dark matter density on constrained models, aiming at predicting superpartner masses and establishing the likelihood of SUSY models compared to that of the Standard Model vis-à-vis experimental data. In this paper a framework for an efficient search for discoverable, non-excluded regions of different SUSY spaces giving a specific experimental signature of interest is presented. The method employs an improved Markov Chain Monte Carlo (MCMC) scheme exploiting an iteratively updated likelihood function to guide the search for viable models. Existing experimental and theoretical bounds as well as the LHC discovery potential are taken into account, including recent bounds on the relic dark matter density, the Higgs sector and rare B-meson decays. A clustering algorithm is applied to classify selected models according to their expected phenomenology, enabling an automated choice of experimental benchmarks and regions to be used for optimizing searches. The aim is to provide experimentalists with a tool that helps target the experimental signatures to search for, once a class of models of interest is established. As an example, a search for viable CMSSM models with τ-lepton signatures observable with the 2012 LHC data set is presented. In the search, 105,209 unique models were probed. From these, ten reference benchmark points covering different ranges of phenomenological observables at the LHC were selected.
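The core of the MCMC scheme described above is an ordinary Metropolis accept/reject step over the parameter space. The sketch below shows only that core on a toy two-parameter "viability" surface; the paper's iteratively updated guiding likelihood and the clustering of accepted models are omitted, and the Gaussian target is an illustrative stand-in for the combined constraints (Higgs mass, rare B decays, relic density):

```python
import math
import random

def metropolis(logpost, x0, step, n_steps, seed=2):
    """Plain Metropolis sampler over a continuous parameter space:
    propose a Gaussian step, accept with probability
    min(1, exp(logpost_new - logpost_old))."""
    rng = random.Random(seed)
    chain, x, lp = [], list(x0), logpost(x0)
    for _ in range(n_steps):
        prop = [xi + rng.gauss(0.0, step) for xi in x]
        lp_prop = logpost(prop)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):  # accept/reject
            x, lp = prop, lp_prop
        chain.append(list(x))
    return chain

# Toy viability surface: a Gaussian bump centred at (1, -2).
logpost = lambda p: -0.5 * ((p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2)
chain = metropolis(logpost, [0.0, 0.0], step=0.5, n_steps=5000)
mean_x = sum(p[0] for p in chain) / len(chain)
print(mean_x)  # near 1.0
```

In the paper's setting each `logpost` evaluation would be an expensive spectrum calculation plus experimental-constraint checks, which is why guiding the chain with an iteratively updated surrogate likelihood pays off.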
Optimal control of large space structures via generalized inverse matrix
Nguyen, Charles C.; Fang, Xiaowen
1987-01-01
Independent Modal Space Control (IMSC) is a control scheme that decouples the space structure into n independent second-order subsystems according to n controlled modes and controls each mode independently. It is well known that IMSC eliminates the control and observation spillover caused when the conventional coupled modal control scheme is employed. The independent control of each mode requires that the number of actuators be equal to the number of modelled modes, which is very high for a faithful modeling of large space structures. A control scheme is proposed that allows one to use a reduced number of actuators to control all modelled modes suboptimally. In particular, the method of generalized inverse matrices is employed to implement the actuators such that the eigenvalues of the closed-loop system are as close as possible to those specified by the optimal IMSC. A computer simulation of the proposed control scheme on a simply supported beam is given.
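The generalized-inverse idea in the record above can be illustrated in a few lines: when fewer actuators than modes are available, the Moore-Penrose pseudoinverse gives the actuator commands whose modal effect is the least-squares best match to what the full IMSC design demands. The influence matrix and force vector below are random illustrative numbers, not a real structure:

```python
import numpy as np

# n modelled modes, m < n actuators.  B maps actuator forces to modal
# forces; f_desired is the modal force vector the optimal IMSC would
# apply if it had one actuator per mode.
rng = np.random.default_rng(3)
n_modes, n_act = 6, 3
B = rng.standard_normal((n_modes, n_act))      # actuator influence matrix (assumed)
f_desired = rng.standard_normal(n_modes)       # IMSC modal force demand (assumed)

u = np.linalg.pinv(B) @ f_desired              # suboptimal actuator commands
f_real = B @ u                                 # modal forces actually achieved
residual = np.linalg.norm(f_desired - f_real)
# The pseudoinverse solution minimizes the residual, so it can never do
# worse than applying no force at all:
print(residual <= np.linalg.norm(f_desired))
```

The residual left over is exactly the part of the IMSC demand lying outside the column space of B, which is why the closed-loop eigenvalues can only approximate, not reproduce, the optimal IMSC placement.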
Directory of Open Access Journals (Sweden)
Li Sen
2012-03-01
Full Text Available Abstract. Background: The Approximate Bayesian Computation (ABC) approach has been used to infer demographic parameters for numerous species, including humans. However, most applications of ABC still use limited amounts of data, from a small number of loci, compared to the large amount of genome-wide population-genetic data which has become available in the last few years. Results: We evaluated the performance of the ABC approach for three 'population divergence' models (similar to the 'isolation with migration' model) when the data consist of several hundred thousand SNPs typed for multiple individuals, by simulating data from known demographic models. The ABC approach was used to infer the demographic parameters of interest, and we compared the inferred values to the true parameter values that were used to generate the hypothetical "observed" data. For all three case models, the ABC approach inferred most demographic parameters quite well with narrow credible intervals, for example population divergence times and past population sizes, but some parameters were more difficult to infer, such as present population sizes and migration rates. We compared the ability of different summary statistics to infer demographic parameters, including haplotype- and LD-based statistics, and found that the accuracy of the parameter estimates can be improved by combining summary statistics that capture different parts of the information in the data. Furthermore, our results suggest that poor choices of prior distributions can in some circumstances be detected using ABC. Finally, increasing the amount of data beyond some hundred loci will substantially improve the accuracy of many parameter estimates using ABC. Conclusions: We conclude that the ABC approach can accommodate realistic genome-wide population-genetic data, which may be difficult to analyze with full likelihood approaches, and that ABC can provide accurate and precise inference of demographic parameters from such data.
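The basic ABC machinery the record above evaluates is rejection sampling: draw parameters from the prior, simulate data, and keep draws whose summary statistic lands close to the observed one. A minimal sketch on a toy model (coin flips with an unknown heads probability, standing in for the demographic models and SNP summaries of the study):

```python
import random

def abc_rejection(observed, simulate, prior_draw, summary, eps, n_accept, rng):
    """Minimal ABC rejection sampler: accept prior draws whose simulated
    summary statistic is within eps of the observed summary."""
    s_obs, accepted = summary(observed), []
    while len(accepted) < n_accept:
        theta = prior_draw(rng)
        if abs(summary(simulate(theta, rng)) - s_obs) < eps:
            accepted.append(theta)
    return accepted

# Toy model: 100 coin flips, parameter = heads probability, prior U(0,1).
rng = random.Random(4)
simulate = lambda p, r: [1 if r.random() < p else 0 for _ in range(100)]
observed = simulate(0.7, rng)                  # "observed" data, true p = 0.7
post = abc_rejection(observed, simulate,
                     prior_draw=lambda r: r.random(),
                     summary=lambda d: sum(d) / len(d),
                     eps=0.02, n_accept=200, rng=rng)
print(sum(post) / len(post))  # posterior mean near 0.7
```

The study's point about summary statistics maps directly onto `summary` here: a statistic that discards relevant information (or combines too little of it) widens the accepted posterior, while combining complementary statistics tightens it.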
A General 2D Meshless Interpolating Boundary Node Method Based on the Parameter Space
Directory of Open Access Journals (Sweden)
Hongyin Yang
2017-01-01
Full Text Available The presented study proposed an improved interpolating boundary node method (IIBNM) for 2D potential problems. The improved interpolating moving least-square (IIMLS) method was applied to construct the shape functions, whose delta function properties allow boundary conditions to be implemented directly. In addition, any weight function used in the moving least-square (MLS) method is also applicable in the IIMLS method. Boundary cells are required in the computation of the boundary integrals, and additional discretization error cannot be avoided if traditional cells are used to approximate the geometry. The present study applied parametric cells created in the parameter space to preserve the exact geometry, and the geometry is maintained regardless of the number of cells. Only the number of nodes on the boundary is required as additional information for boundary node construction. Most importantly, the IIMLS method can be applied in the parameter space to construct shape functions without the requirement of additional computations for the curve length.
Large-size deployable construction heated by solar irradiation in free space
Pestrenina, Irena; Kondyurin, Alexey; Pestrenin, Valery; Kashin, Nickolay; Naymushin, Alexey
Large-size deployable construction in free space with subsequent direct curing was invented more than fifteen years ago (Briskman et al., 1997; Kondyurin, 1998). It raised a number of scientific problems, one of which is the possibility of using solar energy to initiate the curing reaction. This paper is devoted to investigating the curing process under sun irradiation during a space flight in Earth orbits. A rotation of the construction is considered. This motion can provide the optimal temperature distribution in the construction that is required for the polymerization reaction. The cylindrical construction of 80 m length with two hemispherical ends of 10 m radius is considered. The wall of the construction, of 10 mm carbon fiber/epoxy matrix composite, is irradiated by heat flux from the sun and radiates heat from the external surface by the Stefan–Boltzmann law. The stage of the polymerization reaction is calculated as a function of temperature and time, based on laboratory experiments with certified composite materials for space exploitation. The curing kinetics of the composite is calculated for Low Earth Orbits of different inclinations (300 km altitude) and Geostationary Earth Orbit (40000 km altitude). The results show that: • the curing process depends strongly on the Earth orbit and the rotation of the construction; • an optimal flight orbit and rotation can be found that provide a thermal regime sufficient for the complete curing of the considered construction. The study is supported by RFBR grant No. 12-08-00970-a. 1. Briskman V., A. Kondyurin, K. Kostarev, V. Leontyev, M. Levkovich, A. Mashinsky, G. Nechitailo, T. Yudina, Polymerization in microgravity as a new process in space technology, Paper No IAA-97-IAA.12.1.07, 48th International Astronautical Congress, October 6-10, 1997, Turin, Italy. 2. Kondyurin A.V., Building the shells of large space stations by the polymerisation of epoxy composites in open space, Int. Polymer Sci. and Technol., v.25, N4
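A cure-of-degree calculation of the kind described can be sketched with a generic n-th order Arrhenius model; the rate constants and the alternating sun/shadow temperature history below are placeholders, not the certified composite data or the orbital thermal analysis used in the study:

```python
import math

# Illustrative n-th order cure kinetics: d(alpha)/dt = A * exp(-Ea/(R*T)) * (1-alpha)^n.
# A, EA, and N are placeholder values, not measured material constants.
A, EA, R, N = 1.0e5, 6.0e4, 8.314, 1.0

def cure_step(alpha, T, dt):
    """Advance the degree of cure alpha by one explicit Euler step at temperature T (K)."""
    rate = A * math.exp(-EA / (R * T)) * (1.0 - alpha) ** N
    return min(1.0, alpha + rate * dt)

# Integrate a crude orbit-like temperature history: warm (sunlit) and cold
# (shadowed) phases alternate every 1000 s; the warm phases dominate the cure.
alpha = 0.0
for step in range(10000):
    T = 400.0 if (step // 1000) % 2 == 0 else 300.0  # K, alternating sun/shadow
    alpha = cure_step(alpha, T, dt=1.0)
```

With these placeholder constants the cure is nearly complete after the simulated history; the real study couples such kinetics to a radiative heat balance over specific orbits.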
Constraining the mSUGRA parameter space through entropy and abundance criteria
International Nuclear Information System (INIS)
Cabral-Rosetti, Luis G.; Mondragon, Myriam; Nunez, Dario; Sussman, Roberto A.; Zavala, Jesus; Nellen, Lukas
2007-01-01
We explore the use of two criteria to constrain the allowed parameter space in mSUGRA models; both criteria are based on the calculation of the present density of neutralinos χ⁰ as Dark Matter in the Universe. The first one is the usual 'abundance' criterion, which requires that the present neutralino relic density comply with 0.0945 < Ω_CDM h² < 0.1287, the 2σ bounds according to WMAP. To calculate the relic density we use the public numerical code micrOMEGAs. The second criterion is the original idea presented in [3], which basically applies the microcanonical definition of entropy to a weakly interacting and self-gravitating gas and then evaluates the change in entropy per particle of this gas between the freeze-out era and present-day virialized structures. An 'entropy consistency' criterion emerges by comparing theoretical and empirical estimates of this entropy. One of the objectives of this work is to analyze the joint application of both criteria, already done in [3], to see whether their results, which used approximations in the calculation of the relic density, agree with the exact numerical results of micrOMEGAs. The main objective of the work is to use this method to constrain the parameter space in mSUGRA models that serve as inputs for the calculations of micrOMEGAs, and thus to get some bounds on the predictions for the SUSY spectra.
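Applied to a scan, the abundance criterion is simply a window cut on the computed relic density. A minimal sketch (the scan points and names are hypothetical; this is not the micrOMEGAs API):

```python
# 2-sigma WMAP window on the neutralino relic density, as quoted in the abstract.
OMEGA_MIN, OMEGA_MAX = 0.0945, 0.1287

def passes_abundance(omega_h2):
    """True if the computed Omega_CDM h^2 lies inside the allowed window."""
    return OMEGA_MIN < omega_h2 < OMEGA_MAX

# Hypothetical (model point, Omega_CDM h^2) pairs, e.g. from a relic-density scan.
scan = [("A", 0.02), ("B", 0.11), ("C", 0.30)]
allowed = [name for name, omega in scan if passes_abundance(omega)]  # → ["B"]
```

The entropy-consistency criterion would add a second, independent cut on the same scan, and the jointly allowed region is the intersection of the two.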
Misra, Aalok
2008-01-01
We consider issues of moduli stabilization and "area codes" for type II flux compactifications, and the "Inverse Problem" and "Fake Superpotentials" for extremal (non)supersymmetric black holes in type II compactifications on (orientifold of) a compact two-parameter Calabi-Yau expressed as a degree-18 hypersurface in WCP^4[1,1,1,6,9] which has multiple singular loci in its moduli space. We argue the existence of extended "area codes" [1] wherein for the same set of large NS-NS and RR fluxes, one can stabilize all the complex structure moduli and the axion-dilaton modulus (to different sets of values) for points in the moduli space away as well as near the different singular conifold loci leading to the existence of domain walls. By including non-perturbative alpha' and instanton corrections in the Kaehler potential and superpotential [2], we show the possibility of getting a large-volume non-supersymmetric (A)dS minimum. Further, using techniques of [3] we explicitly show that given a set of moduli and choice...
Energy Technology Data Exchange (ETDEWEB)
Costa, Diogo Ricardo da, E-mail: diogo_cost@hotmail.com [Departamento de Física, UNESP – Universidade Estadual Paulista, Av. 24A, 1515, Bela Vista, 13506-900, Rio Claro, SP (Brazil); Hansen, Matheus [Departamento de Física, UNESP – Universidade Estadual Paulista, Av. 24A, 1515, Bela Vista, 13506-900, Rio Claro, SP (Brazil); Instituto de Física, Univ. São Paulo, Rua do Matão, Cidade Universitária, 05314-970, São Paulo – SP (Brazil); Guarise, Gustavo [Departamento de Física, UNESP – Universidade Estadual Paulista, Av. 24A, 1515, Bela Vista, 13506-900, Rio Claro, SP (Brazil); Medrano-T, Rene O. [Departamento de Ciências Exatas e da Terra, UNIFESP – Universidade Federal de São Paulo, Rua São Nicolau, 210, Centro, 09913-030, Diadema, SP (Brazil); Department of Mathematics, Imperial College London, London SW7 2AZ (United Kingdom); Leonel, Edson D. [Departamento de Física, UNESP – Universidade Estadual Paulista, Av. 24A, 1515, Bela Vista, 13506-900, Rio Claro, SP (Brazil); Abdus Salam International Center for Theoretical Physics, Strada Costiera 11, 34151 Trieste (Italy)
2016-04-22
We show that extreme orbits, trajectories that connect local maximum and minimum values of one-dimensional maps, play a major role in the parameter space of dissipative systems, dictating the organization of the windows of periodicity and hence producing sets of shrimp-like structures. Here we solve three fundamental problems regarding the distribution of these sets and give: (i) their precise localization in the parameter space, even for sets of very high periods; (ii) their local and global distributions along cascades; and (iii) the association of these cascades to complicated sets of periodicity. The extreme orbits are proved to be a powerful indicator to investigate the organization of windows of periodicity in parameter planes. As applications of the theory, we obtain some results for the circle map and the perturbed logistic map. The formalism presented here can be extended to many other nonlinear and dissipative systems. - Highlights: • Extreme orbits and the organization of periodic regions in parameter space. • One-dimensional dissipative mappings. • The circle map and also a time-perturbed logistic map were studied.
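The role of orbits passing through the map's extremum can be illustrated on the logistic map, whose superstable orbits (the centers of the periodicity windows) run through the critical point x = 0.5. A minimal detector, not taken from the paper:

```python
def logistic(x, r):
    """The logistic map f(x) = r*x*(1-x), with its single extremum at x = 0.5."""
    return r * x * (1.0 - x)

def superstable_period(r, max_period=64, tol=1e-3):
    """Iterate from the extremum x = 0.5; a return close to 0.5 signals a
    superstable orbit of that period (an orbit through the map's extreme point)."""
    x = 0.5
    for n in range(1, max_period + 1):
        x = logistic(x, r)
        if abs(x - 0.5) < tol:
            return n
    return None

p1 = superstable_period(2.0)            # r = 2: superstable fixed point, period 1
p2 = superstable_period(1.0 + 5 ** 0.5)  # r = 1 + sqrt(5): superstable period-2 orbit
```

Scanning r (or a parameter plane, for two-parameter maps) with such a detector traces out the centers of the periodicity windows that organize the shrimp-like structures.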
Some thoughts on the management of large, complex international space ventures
Lee, T. J.; Kutzer, Ants; Schneider, W. C.
1992-01-01
Management issues relevant to the development and deployment of large international space ventures are discussed, with particular attention given to previous experience. Management approaches utilized in the past are labeled as either simple or complex, and signs of efficient management are examined. Simple approaches include those in which experiments and subsystems are developed for integration into spacecraft, and the Apollo-Soyuz Test Project is given as an example of a simple multinational approach. Complex approaches include those for ESA's Spacelab Project and the Space Station Freedom, in which functional interfaces cross agency and political boundaries. It is concluded that individual elements of space programs should be managed by the individual participating agencies, with overall configuration control coordinated at each level by a program director who manages overall objectives and project interfaces.
Halogenation of Hydraulic Fracturing Additives in the Shale Well Parameter Space
Sumner, A. J.; Plata, D.
2017-12-01
Horizontal drilling and hydraulic fracturing (HDHF) involves the deep-well injection of a 'fracking fluid' composed of diverse and numerous chemical additives designed to facilitate the release and collection of natural gas from shale plays. The potential impacts of HDHF operations on water resources and ecosystems are numerous, and analyses of flowback samples revealed organic compounds from both geogenic and anthropogenic sources. Furthermore, halogenated chemicals were also detected; these compounds are rarely disclosed, suggesting in situ halogenation of reactive additives. To test this transformation hypothesis, we designed and operated a novel high-pressure and high-temperature reactor system to simulate the shale well parameter space and investigate the chemical reactivity of twelve commonly disclosed and functionally diverse HDHF additives. Early results revealed an unanticipated halogenation pathway of the α,β-unsaturated aldehyde cinnamaldehyde in the presence of oxidant and concentrated brine. Ongoing experiments over a range of parameters informed a proposed mechanism, demonstrating the role of various shale-well-specific parameters in enabling the demonstrated halogenation pathway. Ultimately, these results will inform a host of potentially unintended interactions of HDHF additives under the extreme conditions down-bore of a shale well during HDHF activities.
A logistics model for large space power systems
Koelle, H. H.
Space Power Systems (SPS) have to overcome two hurdles: (1) to find an attractive design, manufacturing and assembly concept, and (2) to have available a space transportation system that can provide economical logistic support during the construction and operational phases. An initial system feasibility study, some five years ago, was based on a reference system that used terrestrial resources only and was based partially on electric propulsion systems. The conclusion was: it is feasible but not yet economically competitive with other options. This study is based on terrestrial and extraterrestrial resources and on chemical (LH2/LOX) propulsion systems. These engines are available from the Space Shuttle production line and require only small changes. Other so-called advanced propulsion systems investigated did not prove economically superior if lunar LOX is available. We assume that a Shuttle-derived Heavy Lift Launch Vehicle (HLLV) will become available around the turn of the century and that this will be used to establish a research base on the lunar surface. This lunar base has the potential to grow into a lunar factory producing LOX and construction materials, supporting among other projects the construction of space power systems in geostationary orbit. A model was developed to simulate the logistics support of such an operation over a 50-year life cycle. After 50 years, 111 SPS units of 5 GW each with an availability of 90% will produce 100 × 5 = 500 GW. The model comprises 60 equations and requires 29 assumptions about the parameters involved. The 60 state variables calculated with these equations are given on an annual basis and as averages for the 50-year life cycle. Recycling of defective parts in geostationary orbit is one of the features of the model. The state of the art with respect to SPS technology is introduced as a variable: Mg of mass per MW of electric power delivered. If the space manufacturing facility, a maintenance and repair facility
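The end-state figure quoted above follows directly from the stated assumptions; a back-of-envelope check (not the 60-equation model itself):

```python
# 111 SPS units of 5 GW each at 90% availability: 111 * 0.9 ≈ 100 effective
# units, each delivering 5 GW, hence roughly 500 GW as stated in the abstract.
UNITS, GW_PER_UNIT, AVAILABILITY = 111, 5.0, 0.90
delivered_gw = UNITS * GW_PER_UNIT * AVAILABILITY  # ≈ 499.5 GW
```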
Effect of alloy deformation on the average spacing parameters of non-deforming particles
International Nuclear Information System (INIS)
Fisher, J.; Gurland, J.
1980-02-01
It is shown on the basis of stereological definitions and a few simple experiments that the commonly used average dispersion parameters, area fraction (A_A)_β, areal particle density N_Aβ, and mean free path λ_α, remain invariant during plastic deformation in the case of non-deforming equiaxed particles. Directional effects on the spacing parameters N_Aβ and λ_α arise during uniaxial deformation by rotation and preferred orientation of non-equiaxed particles. Particle arrangement in stringered or layered structures and the effect of deformation on nearest-neighbor distances of particles and voids are briefly discussed in relation to strength and fracture theories.
The effect of environmental parameters to dust concentration in air-conditioned space
Ismail, A. M. M.; Manssor, N. A. S.; Nalisa, A.; Yahaya, N.
2017-08-01
Malaysia has a wet and hot climate; therefore most spaces are air-conditioned. The environment might affect the dust concentration inside a space and thus the indoor air quality (IAQ). The main objective of this study is to study the dust concentration inside enclosed air-conditioned spaces. The measurements were done physically at four selected offices and two classrooms, using a number of instruments to measure the dust concentration and the environmental parameters of temperature and relative air humidity. It was found that the highest dust concentration in the offices (temperature of 24.7°C, relative humidity of 66.5%) was 0.075 mg/m3, compared with the highest dust concentration in the classrooms of 0.060 mg/m3 (temperature of 25.9°C, relative humidity of 64.0%). However, both measurements show values still within the safety levels set by DOSH Malaysia (2005-2010) and ASHRAE 62.2-2016. The offices contained higher dust concentrations than the classrooms because frequent movement transpires daily owing to the function of the offices.
First-principles calculations of Moessbauer hyperfine parameters for solids and large molecules
Energy Technology Data Exchange (ETDEWEB)
Guenzburger, Diana [Centro Brasileiro de Pesquisas Fisicas (CBPF), Rio de Janeiro, RJ (Brazil); Ellis, D.E. [Northwestern Univ., Evanston, IL (United States). Dept. of Physics; Zeng, Z. [Academia Sinica, Hefei, AH (China). Inst. of Solid-State Physics
1997-10-01
Electronic structure calculations based on Density Functional theory were performed for solids and large molecules. The solids were represented by clusters of 60-100 atoms embedded in the potential of the external crystal. Magnetic moments and Moessbauer hyperfine parameters were derived. (author) 22 refs., 8 figs., 1 tab.
Camera memory study for large space telescope. [charge coupled devices
Hoffman, C. P.; Brewer, J. E.; Brager, E. A.; Farnsworth, D. L.
1975-01-01
Specifications were developed for a memory system to be used as the storage medium for camera detectors on the large space telescope (LST) satellite. Detectors with limited internal storage time, such as intensified charge-coupled devices and silicon intensified targets, are implied. The general characteristics are reported of different approaches to the memory system, with comparisons made within the guidelines set forth for the LST application. Priority ordering of comparisons is on the basis of cost, reliability, power, and physical characteristics. Specific rationales are provided for the rejection of unsuitable memory technologies. A recommended technology was selected and used to establish specifications for a breadboard memory. Procurement scheduling is provided for delivery of system breadboards in 1976, prototypes in 1978, and space-qualified units in 1980.
Fast estimation of space-robots inertia parameters: A modular mathematical formulation
Nabavi Chashmi, Seyed Yaser; Malaek, Seyed Mohammad-Bagher
2016-10-01
This work aims to propose a new technique that considerably helps enhance the time and precision needed to identify the "Inertia Parameters" (IPs) of a typical Autonomous Space-Robot (ASR). Operations might include capturing an unknown Target Space-Object (TSO), "active space-debris removal" or "automated in-orbit assemblies". In these operations, generating precise successive commands is essential to the success of the mission. We show how a generalized, repeatable estimation process could play an effective role in managing the operation. With the help of the well-known Force-Based approach, a new "modular formulation" has been developed to simultaneously identify the IPs of an ASR while it captures a TSO. The idea is to reorganize the equations with the associated IPs into a "Modular Set" of matrices instead of a single matrix representing the overall system dynamics. The devised Modular Matrix Set then facilitates the estimation process. It provides a conjugate linear model in the mass and inertia terms. The new formulation is, therefore, well suited for "simultaneous estimation processes" using recursive algorithms like RLS. Further enhancements would be needed for cases where the effect of the center-of-mass location becomes important. Extensive case studies reveal that the estimation time is drastically reduced, which in turn paves the way to acquiring better results.
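The RLS algorithm mentioned above can be sketched in its generic form; the toy "inertia-like" regression below is illustrative only and does not reproduce the authors' modular matrix formulation:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive-least-squares step for the linear model y ≈ phi @ theta.

    theta: current parameter estimate; P: covariance-like matrix;
    phi: regressor vector; y: scalar measurement; lam: forgetting factor.
    """
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)          # gain vector
    theta = theta + (K * (y - phi.T @ theta)).ravel()  # innovation correction
    P = (P - K @ phi.T @ P) / lam                  # covariance update
    return theta, P

# Toy identification of two "inertia-like" constants from y = 7.0*a + 0.3*v.
rng = np.random.default_rng(1)
true = np.array([7.0, 0.3])
theta, P = np.zeros(2), np.eye(2) * 1e3
for _ in range(200):
    phi = rng.normal(size=2)                # hypothetical measured regressors
    y = phi @ true + rng.normal(scale=0.01)  # noisy measurement
    theta, P = rls_update(theta, P, phi, y)
# theta now approximates [7.0, 0.3]
```

In the paper's setting the regressors come from measured forces and motions during the capture maneuver, and the parameters are the robot's mass and inertia terms.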
Overview of Small and Large-Scale Space Solar Power Concepts
Potter, Seth; Henley, Mark; Howell, Joe; Carrington, Connie; Fikes, John
2006-01-01
An overview of space solar power studies performed at the Boeing Company under contract with NASA will be presented. The major concepts to be presented are: 1. Power Plug in Orbit: this is a spacecraft that collects solar energy and distributes it to users in space using directed radio frequency or optical energy. Our concept uses solar arrays having the same dimensions as ISS arrays, but are assumed to be more efficient. If radiofrequency wavelengths are used, it will necessitate that the receiving satellite be equipped with a rectifying antenna (rectenna). For optical wavelengths, the solar arrays on the receiving satellite will collect the power. 2. Mars Clipper I Power Explorer: this is a solar electric Mars transfer vehicle to support human missions. A near-term precursor could be a high-power radar mapping spacecraft with self-transport capability. Advanced solar electric power systems and electric propulsion technology constitute viable elements for conducting human Mars missions that are roughly comparable in performance to similar missions utilizing alternative high thrust systems, with the one exception being their inability to achieve short Earth-Mars trip times. 3. Alternative Architectures: this task involves investigating alternatives to the traditional solar power satellite (SPS) to supply commercial power from space for use on Earth. Four concepts were studied: two using photovoltaic power generation, and two using solar dynamic power generation, with microwave and laser power transmission alternatives considered for each. All four architectures use geostationary orbit. 4. Cryogenic Propellant Depot in Earth Orbit: this concept uses large solar arrays (producing perhaps 600 kW) to electrolyze water launched from Earth, liquefy the resulting hydrogen and oxygen gases, and store them until needed by spacecraft. 5. Beam-Powered Lunar Polar Rover: a lunar rover powered by a microwave or laser beam can explore permanently shadowed craters near the lunar
Hadronic total cross-sections through soft gluon summation in impact parameter space
International Nuclear Information System (INIS)
Grau, A.
1999-01-01
The Bloch-Nordsieck model for the parton distribution of hadrons in impact parameter space, constructed using soft gluon summation, is investigated in detail. Its dependence upon the infrared structure of the strong coupling constant α_s is discussed, both for finite as well as singular, but integrable, α_s. The formalism is applied to the prediction of total proton-proton and proton-antiproton cross-sections, where screening, due to soft gluon emission from the initial valence quarks, becomes evident.
Sastry, Madhavi; Lowrie, Jeffrey F; Dixon, Steven L; Sherman, Woody
2010-05-24
A systematic virtual screening study on 11 pharmaceutically relevant targets has been conducted to investigate the interrelation between 8 two-dimensional (2D) fingerprinting methods, 13 atom-typing schemes, 13 bit scaling rules, and 12 similarity metrics using the new cheminformatics package Canvas. In total, 157 872 virtual screens were performed to assess the ability of each combination of parameters to identify actives in a database screen. In general, fingerprint methods, such as MOLPRINT2D, Radial, and Dendritic that encode information about local environment beyond simple linear paths outperformed other fingerprint methods. Atom-typing schemes with more specific information, such as Daylight, Mol2, and Carhart were generally superior to more generic atom-typing schemes. Enrichment factors across all targets were improved considerably with the best settings, although no single set of parameters performed optimally on all targets. The size of the addressable bit space for the fingerprints was also explored, and it was found to have a substantial impact on enrichments. Small bit spaces, such as 1024, resulted in many collisions and in a significant degradation in enrichments compared to larger bit spaces that avoid collisions.
Effect of solar wind plasma parameters on space weather
International Nuclear Information System (INIS)
Rathore, Balveer S.; Gupta, Dinesh C.; Kaushik, Subhash C.
2015-01-01
Today's challenge for space weather research is to quantitatively predict the dynamics of the magnetosphere from measured solar wind and interplanetary magnetic field (IMF) conditions. Correlative studies between geomagnetic storms (GMSs) and the various interplanetary (IP) field/plasma parameters have been performed to search for the causes of geomagnetic activity and to develop models for predicting the occurrence of GMSs, which are important for space weather predictions. We find a possible relation between GMSs and solar wind and IMF parameters in three different situations and also derive the linear relation for all parameters in the three situations. On the basis of the present statistical study, we develop an empirical model. With the help of this model, we can predict all categories of GMSs. This model is based on the following fact: the total IMF B_total can be used to trigger an alarm for GMSs when sudden changes in the total magnetic field B_total occur. This is the first alarm condition for a storm's arrival. It is observed in the present study that the southward B_z component of the IMF is an important factor for describing GMSs. A result of the paper is that the magnitude of B_z is maximum neither during the initial phase (at the instant of the IP shock) nor during the main phase (at the instant of the Disturbance storm time (Dst) minimum). It is seen in this study that there is a time delay between the maximum value of southward B_z and the Dst minimum, and this time delay can be used in the prediction of the intensity of a magnetic storm two to three hours before the main phase of a GMS. A linear relation has been derived between the maximum value of the southward component of B_z and the Dst, which is Dst = (−0.06) + (7.65)B_z. Some auxiliary conditions should be fulfilled with this; for example, the speed of the solar wind should, on average, be 350 km s⁻¹ to 750 km s⁻¹, plasma β should be low and, most importantly, plasma temperature
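Taking the linear relation quoted in the abstract at face value, a prediction is a one-line evaluation. This is a hedged sketch: the coefficients are those quoted above, and the sign convention (B_z in nT, negative when southward) is an assumption here, not stated in the abstract:

```python
# Linear Dst relation from the abstract: Dst ≈ -0.06 + 7.65 * Bz, with Bz the
# signed southward IMF component in nT (sign convention assumed, not stated).
def predict_dst(bz_nt):
    return -0.06 + 7.65 * bz_nt

dst = predict_dst(-10.0)  # strongly southward IMF → strongly negative Dst
```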
Interactive computer graphics and its role in control system design of large space structures
Reddy, A. S. S. R.
1985-01-01
This paper attempts to show the relevance of interactive computer graphics in the design of control systems to maintain the attitude and shape of large space structures to accomplish the required mission objectives. The typical phases of control system design, starting from the physical model, such as modeling the dynamics, modal analysis, and control system design methodology, are reviewed, and the need for interactive computer graphics is demonstrated. Typical constituent parts of large space structures, such as free-free beams and free-free plates, are used to demonstrate the complexity of the control system design and the effectiveness of interactive computer graphics.
Ozkat, Erkan Caner; Franciosa, Pasquale; Ceglarek, Dariusz
2017-08-01
Remote laser welding technology offers opportunities for high production throughput at a competitive cost. However, the remote laser welding process of zinc-coated sheet metal parts in lap joint configuration poses a challenge due to the difference between the melting temperature of the steel (∼1500 °C) and the vapourizing temperature of the zinc (∼907 °C). In fact, the zinc layer at the faying surface is vapourized, and the vapour might be trapped within the melting pool, leading to weld defects. Various solutions have been proposed to overcome this problem over the years. Among them, laser dimpling has been adopted by manufacturers because of its flexibility and effectiveness, along with its cost advantages. In essence, the dimple works as a spacer between the two sheets in the lap joint and allows the zinc vapour to escape during the welding process, thereby preventing weld defects. However, there is a lack of comprehensive characterization of the dimpling process for effective implementation in real manufacturing systems, taking into consideration the inherent variability of the process parameters. This paper introduces a methodology to develop (i) a surrogate model for dimpling process characterization, considering a multiple-input (i.e. key control characteristics) and multiple-output (i.e. key performance indicators) system, by conducting physical experimentation and using multivariate adaptive regression splines; (ii) a process capability space (Cp-Space), based on the developed surrogate model, that allows the estimation of a desired process fallout rate in the case of violation of process requirements in the presence of stochastic variation; and (iii) selection and optimization of the process parameters based on the process capability space. The proposed methodology provides a unique capability to: (i) simulate the effect of process variation as generated by the manufacturing process; (ii) model quality requirements with multiple and coupled quality requirements; and (iii
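The fallout-rate idea behind a process capability space can be illustrated with the standard textbook Cp index and a normal-distribution fallout estimate; the dimple-depth spec limits below are hypothetical, and this is not the authors' surrogate model:

```python
import math

def cp_index(lsl, usl, sigma):
    """Classic process capability index: spec width over six process sigmas."""
    return (usl - lsl) / (6.0 * sigma)

def fallout_rate(mu, sigma, lsl, usl):
    """Fraction of output outside [lsl, usl], assuming a normal distribution."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
    return phi((lsl - mu) / sigma) + 1.0 - phi((usl - mu) / sigma)

# Hypothetical dimple-depth spec in mm: limits 0.1-0.4, process sigma 0.05.
cp = cp_index(0.1, 0.4, 0.05)             # = 1.0 (spec width equals 6 sigma)
rate = fallout_rate(0.25, 0.05, 0.1, 0.4)  # ≈ 0.27% fallout for a centered process
```

A capability space generalizes this: it maps, over the input parameter region, where the predicted outputs keep the fallout rate below a target.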
Abidi, Yassine; Bellassoued, Mourad; Mahjoub, Moncef; Zemzemi, Nejib
2018-03-01
In this paper, we consider the inverse problem of space dependent multiple ionic parameters identification in cardiac electrophysiology modelling from a set of observations. We use the monodomain system known as a state-of-the-art model in cardiac electrophysiology and we consider a general Hodgkin-Huxley formalism to describe the ionic exchanges at the microscopic level. This formalism covers many physiological transmembrane potential models including those in cardiac electrophysiology. Our main result is the proof of the uniqueness and a Lipschitz stability estimate of ion channels conductance parameters based on some observations on an arbitrary subdomain. The key idea is a Carleman estimate for a parabolic operator with multiple coefficients and an ordinary differential equation system.
Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R
2017-01-21
The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problem, but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS are illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing a very significant reduction of computation times with respect to several previous state-of-the-art methods (from days to minutes, in several cases) even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium and large-scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.
Moving to continuous facial expression space using the MPEG-4 facial definition parameter (FDP) set
Karpouzis, Kostas; Tsapatsoulis, Nicolas; Kollias, Stefanos D.
2000-06-01
Research in facial expression has concluded that at least six emotions, conveyed by human faces, are universally associated with distinct expressions. Sadness, anger, joy, fear, disgust and surprise are categories of expressions that are recognizable across cultures. In this work we form a relation between the description of the universal expressions and the MPEG-4 Facial Definition Parameter Set (FDP). We also investigate the relation between the movement of basic FDPs and the parameters that describe emotion-related words according to some classical psychological studies. In particular Whissel suggested that emotions are points in a space, which seem to occupy two dimensions: activation and evaluation. We show that some of the MPEG-4 Facial Animation Parameters (FAPs), approximated by the motion of the corresponding FDPs, can be combined by means of a fuzzy rule system to estimate the activation parameter. In this way variations of the six archetypal emotions can be achieved. Moreover, Plutchik concluded that emotion terms are unevenly distributed through the space defined by dimensions like Whissel's; instead they tend to form an approximately circular pattern, called 'emotion wheel,' modeled using an angular measure. The 'emotion wheel' can be defined as a reference for creating intermediate expressions from the universal ones, by interpolating the movement of dominant FDP points between neighboring basic expressions. By exploiting the relation between the movement of the basic FDP point and the activation and angular parameters we can model more emotions than the primary ones and achieve efficient recognition in video sequences.
Charting the Parameter Space of the 21-cm Power Spectrum
Cohen, Aviad; Fialkov, Anastasia; Barkana, Rennan
2018-05-01
The high-redshift 21-cm signal of neutral hydrogen is expected to be observed within the next decade and will reveal epochs of cosmic evolution that have been previously inaccessible. Due to the lack of observations, many of the astrophysical processes that took place at early times are poorly constrained. In recent work we explored the astrophysical parameter space and the resulting large variety of possible global (sky-averaged) 21-cm signals. Here we extend our analysis to the fluctuations in the 21-cm signal, accounting for those introduced by density and velocity, Lyα radiation, X-ray heating, and ionization. While the radiation sources are usually highlighted, we find that in many cases the density fluctuations play a significant role at intermediate redshifts. Using both the power spectrum and its slope, we show that properties of high-redshift sources can be extracted from the observable features of the fluctuation pattern. For instance, the peak amplitude of ionization fluctuations can be used to estimate whether heating occurred early or late and, in the early case, to also deduce the cosmic mean ionized fraction at that time. The slope of the power spectrum has a more universal redshift evolution than the power spectrum itself and can thus be used more easily as a tracer of high-redshift astrophysics. Its peaks can be used, for example, to estimate the redshift of the Lyα coupling transition and the redshift of the heating transition (and the mean gas temperature at that time). We also show that a tight correlation is predicted between features of the power spectrum and of the global signal, potentially yielding important consistency checks.
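The slope of the power spectrum used above as a tracer is the logarithmic derivative d ln P / d ln k; a minimal numerical sketch (the power-law test spectrum is an assumption for illustration only):

```python
import math

def log_slope(P, k, eps=1e-4):
    """Numerical slope d ln P / d ln k at wavenumber k (central difference)."""
    lk = math.log(k)
    return (math.log(P(math.exp(lk + eps))) -
            math.log(P(math.exp(lk - eps)))) / (2 * eps)

# Sanity check on a pure power law P(k) = A * k^n, whose slope is n everywhere.
P = lambda k: 5.0 * k ** 1.8
slope = log_slope(P, 0.1)
```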
Photoluminescence in large fluence radiation irradiated space silicon solar cells
Energy Technology Data Exchange (ETDEWEB)
Hisamatsu, Tadashi; Kawasaki, Osamu; Matsuda, Sumio [National Space Development Agency of Japan, Tsukuba, Ibaraki (Japan). Tsukuba Space Center; Tsukamoto, Kazuyoshi
1997-03-01
Photoluminescence spectroscopy measurements were carried out for silicon 50 μm BSFR space solar cells irradiated with 1 MeV electrons to fluences exceeding 1 x 10^16 e/cm^2 and with 10 MeV protons to fluences exceeding 1 x 10^13 p/cm^2. The results were compared with previous results obtained in a relatively low-fluence region, and the radiation-induced defects that cause anomalous degradation of cell performance in such large fluence regions were discussed. As far as we know, this is the first report presenting PL measurement results at 4.2 K for silicon solar cells irradiated to such large fluences. (author)
Non-Abelian monopole in the parameter space of point-like interactions
International Nuclear Information System (INIS)
Ohya, Satoshi
2014-01-01
We study non-Abelian geometric phase in N=2 supersymmetric quantum mechanics for a free particle on a circle with two point-like interactions at antipodal points. We show that non-Abelian Berry’s connection is that of SU(2) magnetic monopole discovered by Moody, Shapere and Wilczek in the context of adiabatic decoupling limit of diatomic molecule. - Highlights: • Supersymmetric quantum mechanics is an ideal playground for studying geometric phase. • We determine the parameter space of supersymmetric point-like interactions. • Berry’s connection is given by a Wu–Yang-like magnetic monopole in SU(2) Yang–Mills
National Aeronautics and Space Administration — TRS Technologies proposes innovative hybrid electrostatic/flextensional membrane deformable mirror capable of large amplitude aberration correction for large...
Very large virtual compound spaces: construction, storage and utility in drug discovery.
Peng, Zhengwei
2013-09-01
This report reviews recent activities in the construction, storage and exploration of very large virtual compound spaces. As expected, the systematic exploration of compound spaces at the highest resolution (individual atoms and bonds) is intrinsically intractable. By contrast, by staying within a finite number of reactions and a finite number of reactants or fragments, several virtual compound spaces have been constructed in a combinatorial fashion with sizes ranging from 10^11 to 10^20 compounds. Multiple search methods have been developed to perform searches (e.g. similarity, exact and substructure) in those compound spaces without the need for full enumeration. The up-front investment spent on synthetic feasibility during the construction of some of those virtual compound spaces enables a wider adoption by medicinal chemists to design and synthesize important compounds for drug discovery. Recent activities in the area of exploring virtual compound spaces via the evolutionary approach based on genetic algorithms also suggest a positive shift of focus from method development to workflow, integration and ease of use, all of which are required for this approach to be widely adopted by medicinal chemists.
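Two points from the abstract can be made concrete: combinatorial library sizes multiply across reactant lists, and for a scoring function that is additive per fragment, the best product over the whole space can be found without enumeration. The additive-score assumption is an idealisation for illustration, not a claim about real similarity searches.

```python
def library_size(reactant_counts):
    """Size of a combinatorial library: the product of reactant-list sizes."""
    n = 1
    for c in reactant_counts:
        n *= c
    return n

def best_product(score_lists):
    """For additive per-fragment scores, the optimum over the full
    combinatorial space is found position by position, with no enumeration."""
    picks = [max(range(len(s)), key=s.__getitem__) for s in score_lists]
    value = sum(s[i] for s, i in zip(score_lists, picks))
    return picks, value

# Two reactant lists of 10^4 members each already span 10^8 products.
size = library_size([10 ** 4, 10 ** 4])
picks, value = best_product([[1.0, 3.0, 2.0], [0.5, 0.1]])
```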
International Nuclear Information System (INIS)
Novak Pintarič, Zorka; Kravanja, Zdravko
2015-01-01
This paper presents a robust computational methodology for the synthesis and design of flexible HEN (Heat Exchanger Networks) with large numbers of uncertain parameters. This methodology combines several heuristic methods which progressively lead to a flexible HEN design at a specific level of confidence. During the first step, a HEN topology is generated under nominal conditions, followed by determination of those points critical for flexibility. A significantly reduced multi-scenario model for flexible HEN design is then formulated at the nominal point with flexibility constraints at the critical points. The optimal design obtained is tested by stochastic Monte Carlo optimization and by the flexibility index, through solving one-scenario problems within a loop. The presented methodology is novel in its enormous reduction of the number of scenarios in HEN design problems, and hence of the computational effort. Despite several simplifications, the capability of designing flexible HENs with large numbers of uncertain parameters, which are typical throughout industry, is not compromised. An illustrative case study is presented for flexible HEN synthesis comprising 42 uncertain parameters. - Highlights: • Methodology for HEN (Heat Exchanger Network) design under uncertainty is presented. • The main benefit is solving HENs having large numbers of uncertain parameters. • A drastically reduced multi-scenario HEN design problem is formulated through several steps. • Flexibility of the HEN is guaranteed at a specific level of confidence.
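The stochastic Monte Carlo flexibility test mentioned above can be sketched generically: sample the uncertain parameters, check design feasibility for each scenario, and report the fraction of feasible scenarios as the confidence level. The sampling distribution and the feasibility criterion below are hypothetical stand-ins, not the paper's HEN model.

```python
import random

def flexibility_confidence(feasible, sample_uncertain, n=2000, seed=1):
    """Monte Carlo flexibility test: fraction of sampled uncertain-parameter
    scenarios for which the fixed design remains feasible."""
    rng = random.Random(seed)
    ok = sum(feasible(sample_uncertain(rng)) for _ in range(n))
    return ok / n

# Toy example: one uncertain inlet temperature (degC), Gaussian around 150,
# and a hypothetical feasibility criterion for the sized exchanger.
def sample(rng):
    return rng.gauss(150.0, 5.0)

def feasible(t_in):
    return t_in >= 140.0

conf = flexibility_confidence(feasible, sample)
```

With the criterion two standard deviations below the mean, the estimated confidence comes out near 0.98.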
Liu, W.; Wang, H.; Liu, D.; Miu, Y.
2018-05-01
Precise geometric parameters are essential to ensure positioning accuracy for space optical cameras. However, state-of-the-art on-orbit calibration methods inevitably suffer from long update cycles and poor timeliness. To this end, in this paper we exploit the optical auto-collimation principle and propose a real-time onboard calibration scheme for monitoring key geometric parameters. Specifically, in the proposed scheme, auto-collimation devices are first designed by installing collimated light sources, area-array CCDs, and prisms inside the satellite payload system. Through these devices, changes in the geometric parameters are converted into changes in the spot image positions. The variation of the geometric parameters can then be derived by extracting and processing the spot images. An experimental platform is set up to verify the feasibility and analyze the precision of the proposed scheme. The experimental results demonstrate that it is feasible to apply the optical auto-collimation principle for real-time onboard monitoring.
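The core auto-collimation geometry behind the spot-to-parameter conversion is simple: a mirror tilt θ deflects the returned beam by 2θ, so the focused spot moves by d = 2θf. The focal length and shift values below are illustrative, not from the paper's setup.

```python
def tilt_from_spot_shift(shift_m, focal_length_m):
    """Auto-collimation geometry: a reflector tilt theta deflects the return
    beam by 2*theta, so the focused spot moves by d = 2*theta*f.
    Inverting gives theta = d / (2 f), in radians."""
    return shift_m / (2.0 * focal_length_m)

# A 1 micrometre spot shift with a 0.5 m collimator focal length
# corresponds to a 1 microradian tilt of the monitored surface.
theta = tilt_from_spot_shift(1e-6, 0.5)
```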
An Engineering Design Reference Mission for a Future Large-Aperture UVOIR Space Observatory
Thronson, Harley A.; Bolcar, Matthew R.; Clampin, Mark; Crooke, Julie A.; Redding, David; Rioux, Norman; Stahl, H. Philip
2016-01-01
From the 2010 NRC Decadal Survey and the NASA Thirty-Year Roadmap, Enduring Quests, Daring Visions, to the recent AURA report, From Cosmic Birth to Living Earths, multiple community assessments have recommended development of a large-aperture UVOIR space observatory capable of achieving a broad range of compelling scientific goals. Of these priority science goals, the most technically challenging is the search for spectroscopic biomarkers in the atmospheres of exoplanets in the solar neighborhood. Here we present an engineering design reference mission (EDRM) for the Advanced Technology Large-Aperture Space Telescope (ATLAST), which was conceived from the start as capable of breakthrough science paired with an emphasis on cost control and cost effectiveness. An EDRM allows the engineering design trade space to be explored in depth to determine what are the most demanding requirements and where there are opportunities for margin against requirements. Our joint NASA GSFC/JPL/MSFC/STScI study team has used community-provided science goals to derive mission needs, requirements, and candidate mission architectures for a future large-aperture, non-cryogenic UVOIR space observatory. The ATLAST observatory is designed to operate at a Sun-Earth L2 orbit, which provides a stable thermal environment and excellent field of regard. Our reference designs have emphasized a serviceable 36-segment 9.2 m aperture telescope that stows within a five-meter diameter launch vehicle fairing. As part of our cost-management effort, this particular reference mission builds upon the engineering design for JWST. Moreover, it is scalable to a variety of launch vehicle fairings. Performance needs developed under the study are traceable to a variety of additional reference designs, including options for a monolithic primary mirror.
Heating of large format filters in sub-mm and fir space optics
Baccichet, N.; Savini, G.
2017-11-01
Most FIR and sub-mm space-borne observatories use polymer-based quasi-optical elements like filters and lenses, due to their high transparency and low absorption in such wavelength ranges. Nevertheless, data from those missions have proven that thermal imbalances in the instrument (not caused by filters) can complicate the data analysis. Consequently, for future, higher-precision instrumentation, further investigation is required on any thermal imbalances embedded in such polymer-based filters. In particular, this paper studies the heating of polymers operating at cryogenic temperature in space. Such heating is an important aspect of their functioning, since the transient emission of unwanted thermal radiation may affect the scientific measurements. To assess this effect, a computer model was developed for polypropylene-based filters and PTFE-based coatings. Specifically, a theoretical model of their thermal properties was created and used in a multi-physics simulation that accounts for conductive and radiative heating effects of large optical elements, the geometry of which was suggested by the large-format array instruments designed for future space missions. It was found that in the simulated conditions, the filter temperature was characterized by a time-dependent behaviour, modulated by a small-scale fluctuation. Moreover, it was noticed that thermalization was reached only when a low power input was present.
Cosmological Parameter Estimation with Large Scale Structure Observations
Di Dio, Enea; Durrer, Ruth; Lesgourgues, Julien
2014-01-01
We estimate the sensitivity of future galaxy surveys to cosmological parameters, using the redshift-dependent angular power spectra of galaxy number counts, $C_\ell(z_1,z_2)$, calculated with all relativistic corrections at first order in perturbation theory. We pay special attention to the redshift dependence of the non-linearity scale and present Fisher matrix forecasts for Euclid-like and DES-like galaxy surveys. We compare the standard $P(k)$ analysis with the new $C_\ell(z_1,z_2)$ method. We show that for surveys with photometric redshifts the new analysis performs significantly better than the $P(k)$ analysis. For spectroscopic redshifts, however, the large number of redshift bins that would be needed to fully profit from the redshift information is severely limited by shot noise. We also identify surveys which can measure the lensing contribution, and we study the monopole, $C_0(z_1,z_2)$.
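A Fisher matrix forecast of the kind mentioned above amounts to $F_{ij} = \sum_k \partial_i \mu_k \, \partial_j \mu_k / \sigma_k^2$ with marginalised errors $\sqrt{(F^{-1})_{ii}}$. The sketch below applies this to a deliberately simple two-parameter toy spectrum (amplitude and spectral index); the model, wavenumbers, and noise level are all illustrative assumptions.

```python
import math

def fisher_2param(model, theta, ks, sigma, h=1e-5):
    """Gaussian Fisher matrix F_ij = sum_k dmu/dtheta_i * dmu/dtheta_j / sigma_k^2
    for a 2-parameter model, using central finite differences."""
    def deriv(i, k):
        tp, tm = list(theta), list(theta)
        tp[i] += h
        tm[i] -= h
        return (model(tp, k) - model(tm, k)) / (2 * h)
    return [[sum(deriv(i, k) * deriv(j, k) / sigma(k) ** 2 for k in ks)
             for j in range(2)] for i in range(2)]

def forecast_errors(F):
    """1-sigma marginalised errors from the diagonal of the 2x2 inverse."""
    det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
    return math.sqrt(F[1][1] / det), math.sqrt(F[0][0] / det)

# Toy "power spectrum": amplitude A and spectral index n, P(k) = A * k**n.
model = lambda th, k: th[0] * k ** th[1]
ks = [0.02 * i for i in range(1, 20)]
F = fisher_2param(model, [1.0, 0.96], ks, sigma=lambda k: 0.05)
sA, sn = forecast_errors(F)
```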
The MSSM Parameter Space with Non-Universal Higgs Masses
Ellis, Jonathan Richard; Santoso, Y; Ellis, John; Olive, Keith A.; Santoso, Yudi
2002-01-01
Without assuming that Higgs masses have the same values as other scalar masses at the input GUT scale, we combine constraints on the minimal supersymmetric extension of the Standard Model (MSSM) coming from the cold dark matter density with the limits from direct searches at accelerators such as LEP, indirect measurements such as b to s gamma decay and the anomalous magnetic moment of the muon. The requirement that Higgs masses-squared be positive at the GUT scale imposes important restrictions on the MSSM parameter space, as does the requirement that the LSP be neutral. We analyze the interplay of these constraints in the (mu, m_A), (mu, m_{1/2}), (m_{1/2}, m_0) and (m_A, tan beta) planes. These exhibit new features not seen in the corresponding planes in the constrained MSSM in which universality is extended to Higgs masses.
Application of separable parameter space techniques to multi-tracer PET compartment modeling
International Nuclear Information System (INIS)
Zhang, Jeff L; Michael Morey, A; Kadrmas, Dan J
2016-01-01
Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg–Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models. (paper)
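The separable (variable-projection) idea above can be shown on a one-tracer toy model: for $y(t) = a\,e^{-kt}$, the linear amplitude $a$ has a closed-form solution for each candidate $k$, so the nonlinear fit collapses to a 1-D search. The model and grid below are a minimal illustration, not the paper's multi-tracer compartment equations.

```python
import math

def separable_fit(ts, ys, k_grid):
    """Separable least squares for y(t) ~ a * exp(-k t): for each candidate
    nonlinear parameter k, the linear amplitude a is solved in closed form,
    leaving only a 1-D search over k."""
    best = None
    for k in k_grid:
        basis = [math.exp(-k * t) for t in ts]
        a = sum(b * y for b, y in zip(basis, ys)) / sum(b * b for b in basis)
        sse = sum((y - a * b) ** 2 for b, y in zip(basis, ys))
        if best is None or sse < best[2]:
            best = (k, a, sse)
    return best  # (k, a, sse)

# Noise-free synthetic time-activity curve with known a = 2.0, k = 0.3.
ts = [0.5 * i for i in range(20)]
ys = [2.0 * math.exp(-0.3 * t) for t in ts]
k, a, sse = separable_fit(ts, ys, [0.01 * i for i in range(1, 101)])
```

An exhaustive grid over the (here one-dimensional) nonlinear subspace recovers the global minimum, mirroring the exhaustive-search guarantee discussed in the abstract.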
Using the Talbot_Lau_interferometer_parameters Spreadsheet
Energy Technology Data Exchange (ETDEWEB)
Kallman, Jeffrey S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2015-06-04
Talbot-Lau interferometers allow incoherent X-ray sources to be used for phase contrast imaging. A spreadsheet for exploring the parameter space of Talbot and Talbot-Lau interferometers has been assembled. This spreadsheet allows the user to examine the consequences of choosing phase grating pitch, source energy, and source location on the overall geometry of a Talbot or Talbot-Lau X-ray interferometer. For the X-ray energies required to penetrate scanned luggage the spacing between gratings is large enough that the mechanical tolerances for amplitude grating positioning are unlikely to be met.
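The geometric trade-off described above can be reproduced with the standard fractional-Talbot relations; the formula and example values below are textbook approximations for a π-shift phase grating, assumed for illustration, not the spreadsheet's actual cells.

```python
H_C_KEV_NM = 1.23984  # hc in keV*nm

def wavelength_nm(energy_kev):
    """X-ray wavelength from photon energy: lambda = hc / E."""
    return H_C_KEV_NM / energy_kev

def fractional_talbot_distance_m(pitch_m, energy_kev, order=1):
    """First fractional Talbot distance for a pi-shift phase grating:
    d_n = n * p**2 / (8 * lambda). Grows linearly with energy at fixed pitch."""
    lam_m = wavelength_nm(energy_kev) * 1e-9
    return order * pitch_m ** 2 / (8.0 * lam_m)

# A 4.8 um pitch grating at 28 keV already needs ~6.5 cm between gratings;
# at the higher energies needed to penetrate luggage, the distance grows further.
d = fractional_talbot_distance_m(4.8e-6, 28.0)
```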
Cosmological parameters from large scale structure - geometric versus shape information
Hamann, Jan; Lesgourgues, Julien; Rampf, Cornelius; Wong, Yvonne Y Y
2010-01-01
The matter power spectrum as derived from large scale structure (LSS) surveys contains two important and distinct pieces of information: an overall smooth shape and the imprint of baryon acoustic oscillations (BAO). We investigate the separate impact of these two types of information on cosmological parameter estimation, and show that for the simplest cosmological models, the broad-band shape information currently contained in the SDSS DR7 halo power spectrum (HPS) is by far superseded by geometric information derived from the baryonic features. An immediate corollary is that, contrary to popular belief, the upper limit on the neutrino mass m_\
Large-scale investigation of the parameters in response to Eimeria maxima challenge in broilers.
Hamzic, E; Bed'Hom, B; Juin, H; Hawken, R; Abrahamsen, M S; Elsen, J M; Servin, B; Pinard-van der Laan, M H; Demeure, O
2015-04-01
Coccidiosis, a parasitic disease of the intestinal tract caused by members of the genera Eimeria and Isospora, is one of the most common and costly diseases in chickens. The aims of this study were to assess the effect of the challenge and the level of variability of measured parameters in chickens during challenge with Eimeria maxima. Furthermore, this study aimed to investigate which parameters are the most relevant indicators of health status. Finally, the study also aimed to estimate the accuracy of prediction for traits that cannot be measured on a large scale (such as intestinal lesion score and fecal oocyst count) using parameters that can easily be measured on all animals. The study was performed in 2 parts: a pilot challenge on 240 animals followed by a large-scale challenge on 2,024 animals. In both experiments, animals were challenged with 50,000 Eimeria maxima oocysts at 16 d of age. In the pilot challenge, all animals were measured for BW gain, plasma coloration, hematocrit, and rectal temperature and, in addition, a subset of 48 animals was measured for oocyst count and intestinal lesion score. All animals from the second challenge were measured for BW gain, plasma coloration, and hematocrit, whereas a subset of 184 animals was measured for intestinal lesion score, fecal oocyst count, blood parameters, and plasma protein content and composition. Most of the parameters measured were significantly affected by the challenge. Lesion scores for duodenum and jejunum (P Eimeria maxima. Prediction of intestinal lesion score and fecal oocyst count using the other parameters measured was not very precise (R2 Eimeria maxima has a strong genetic determinism, which may be improved by genetic selection.
Plasma parameter estimations for the Large Helical Device based on the gyro-reduced Bohm scaling
International Nuclear Information System (INIS)
Okamoto, Masao; Nakajima, Noriyoshi; Sugama, Hideo.
1991-10-01
A model of gyro-reduced Bohm scaling law is incorporated into a one-dimensional transport code to predict plasma parameters for the Large Helical Device (LHD). The transport code calculations reproduce well the LHD empirical scaling law and basic parameters and profiles of the LHD plasma are calculated. The amounts of toroidal currents (bootstrap current and beam-driven current) are also estimated. (author)
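Up to numerical factors, the gyro-reduced Bohm diffusivity used in such transport estimates is the Bohm-like value reduced by the normalized gyroradius, χ_gB ≈ (ρ_i/a)·T/(eB). The sketch below encodes this scaling; the constants and the omission of order-unity coefficients are illustrative assumptions, not the LHD transport code itself.

```python
import math

E_CHARGE = 1.602e-19  # elementary charge, C
M_PROTON = 1.673e-27  # proton mass, kg

def gyro_bohm_chi(T_eV, B_T, a_m, mass_kg=M_PROTON):
    """Gyro-reduced Bohm diffusivity sketch (order-unity factors dropped):
    chi_gB = rho_* * chi_Bohm, with rho_* = rho_i / a,
    chi_Bohm ~ T / (e B), and rho_i = sqrt(m T) / (e B)."""
    T_J = T_eV * E_CHARGE
    rho_i = math.sqrt(mass_kg * T_J) / (E_CHARGE * B_T)
    chi_bohm = T_J / (E_CHARGE * B_T)
    return (rho_i / a_m) * chi_bohm

# At fixed temperature and minor radius, chi_gB scales as 1/B^2.
chi_3T = gyro_bohm_chi(1000.0, 3.0, 0.6)
chi_6T = gyro_bohm_chi(1000.0, 6.0, 0.6)
```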
An optimum organizational structure for a large earth-orbiting multidisciplinary Space Base
Ragusa, J. M.
1973-01-01
The purpose of this exploratory study was to identify an optimum hypothetical organizational structure for a large earth-orbiting multidisciplinary research and applications (R&A) Space Base manned by a mixed crew of technologists. Since such a facility does not presently exist, in situ empirical testing was not possible. Study activity was, therefore, concerned with the identification of a desired organizational structural model rather than the empirical testing of it. The essential finding of this research was that a four-level project type 'total matrix' model will optimize the efficiency and effectiveness of Space Base technologists.
Directory of Open Access Journals (Sweden)
I. N. Esau
2006-01-01
We consider the resistance law for the planetary boundary layer (PBL) from the point of view of similarity theory. In other words, we select the set of PBL governing parameters and search for an optimal way to express through these parameters the geostrophic drag coefficient Cg=u*/Ug and the cross-isobaric angle α (where u* is the friction velocity and Ug is the geostrophic wind speed). By this example, we demonstrate how to determine the 'parameter space' in the most convenient way: to make the dimensionless numbers representing co-ordinates in the parameter space independent, and to avoid (or at least minimise) artificial self-correlations caused by the appearance of the same factors (such as u*) in the examined dimensionless combinations (e.g. in Cg=u*/Ug) and in dimensionless numbers composed of the governing parameters. We also discuss the 'completeness' of the parameter space from the point of view of a large-eddy simulation (LES) modeller creating a database for a specific physical problem. As recognised recently, the very large scatter of data in prior empirical dependencies of Cg and α on the surface Rossby number Ro=Ug |f z0|^-1 (where z0 is the roughness length) and on the stratification characterised by µ was to a large extent caused by incompleteness of the set of governing parameters. The most important parameter overlooked in the traditional approach is the typical value of the Brunt-Väisälä frequency N in the free atmosphere (immediately above the PBL), which involves, besides Ro and µ, one more dimensionless number: µN=N/|f|. Accordingly, we consider Cg and α as dependent on the three (rather than two) basic dimensionless numbers (including µN), using the LES database DATABASE64. By these means we determine the form of the dependencies under consideration in the part of the parameter space representing typical atmospheric PBLs, and provide analytical expressions for Cg and α.
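The artificial self-correlation warned about above is easy to demonstrate: even when u*, Ug and z0 are drawn independently, Cg = u*/Ug and Ro = Ug/(|f| z0) correlate simply because both contain Ug. The parameter ranges below are loosely atmospheric but chosen only for illustration.

```python
import random

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

rng = random.Random(42)
f = 1e-4  # Coriolis parameter, 1/s
# Draw u*, Ug, z0 independently, so any Cg-Ro correlation is purely artificial.
ustar = [rng.uniform(0.2, 0.6) for _ in range(500)]
Ug = [rng.uniform(5.0, 20.0) for _ in range(500)]
z0 = [rng.uniform(0.05, 0.2) for _ in range(500)]
Cg = [u / g for u, g in zip(ustar, Ug)]
Ro = [g / (f * z) for g, z in zip(Ug, z0)]
r = pearson(Cg, Ro)  # clearly negative: both combinations share the factor Ug
```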
Multiplicity distributions in impact parameter space
International Nuclear Information System (INIS)
Wakano, Masami
1976-01-01
A definition for the average multiplicity of pions as a function of momentum transfer and total energy in the high energy proton-proton collisions is proposed by using the n-pion production differential cross section with the given momentum transfer from a proton to other final products and the given energy of the latter. Contributions from nondiffractive and diffractive processes are formulated in a multi-Regge model. We define a relationship between impact parameter and momentum transfer in the sense of classical theory for inelastic processes and we obtain the average multiplicity of pions as a function of impact parameter and total energy from the corresponding quantity afore-mentioned. By comparing this quantity with the square root of the opaqueness at given impact parameter, we conclude that the overlap of localized constituents is important in determining the opaqueness at given impact parameter in a collision of two hadrons. (auth.)
Marcus, Hani J; Seneci, Carlo A; Hughes-Hallett, Archie; Cundy, Thomas P; Nandi, Dipankar; Yang, Guang-Zhong; Darzi, Ara
2016-04-01
Surgical approaches such as transanal endoscopic microsurgery, which utilize small operative working spaces, and are necessarily single-port, are particularly demanding with standard instruments and have not been widely adopted. The aim of this study was to compare simultaneously surgical performance in single-port versus multiport approaches, and small versus large working spaces. Ten novice, 4 intermediate, and 1 expert surgeons were recruited from a university hospital. A preclinical randomized crossover study design was implemented, comparing performance under the following conditions: (1) multiport approach and large working space, (2) multiport approach and intermediate working space, (3) single-port approach and large working space, (4) single-port approach and intermediate working space, and (5) single-port approach and small working space. In each case, participants performed a peg transfer and pattern cutting tasks, and each task repetition was scored. Intermediate and expert surgeons performed significantly better than novices in all conditions (P Performance in single-port surgery was significantly worse than multiport surgery (P performance in the intermediate versus large working space. In single-port surgery, there was a converse trend; performances in the intermediate and small working spaces were significantly better than in the large working space. Single-port approaches were significantly more technically challenging than multiport approaches, possibly reflecting loss of instrument triangulation. Surprisingly, in single-port approaches, in which triangulation was no longer a factor, performance in large working spaces was worse than in intermediate and small working spaces. © The Author(s) 2015.
Exploring the triplet parameters space to optimise the final focus of the FCC-hh
AUTHOR|(CDS)2141109; Abelleira, Jose; Seryi, Andrei; Cruz Alaniz, Emilia
2017-01-01
One of the main challenges when designing final focus systems of particle accelerators is maximising the beam stay clear in the strong quadrupole magnets of the inner triplet. Moreover, it is desirable to keep the quadrupoles in the triplet as short as possible, for space and cost reasons but also to reduce chromaticity and simplify correction schemes. An algorithm that explores the triplet parameter space to optimise both these aspects was written. It uses thin lenses as a first approximation and MADX for more precise calculations. In cooperation with radiation studies, this algorithm was then applied to design an alternative triplet for the final focus of the Future Circular Collider (FCC-hh).
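The thin-lens first approximation mentioned above can be sketched with 2x2 transfer matrices: track a test ray through a drift plus an F/D/F triplet, record the largest transverse excursion, and scan the focal lengths. All lengths, focal strengths and the single-ray stay-clear proxy are illustrative assumptions, not the FCC-hh optics.

```python
def drift(L):
    return ((1.0, L), (0.0, 1.0))

def thin_lens(f):
    return ((1.0, 0.0), (-1.0 / f, 1.0))

def max_excursion(f1, f2, L_star=10.0, L_q=2.0, x0=1e-3, xp0=1e-4):
    """Track one test ray through the final drift and a thin-lens F/D/F
    triplet; return the largest offset seen after each element, used here
    as a crude stand-in for the beam stay clear at fixed aperture."""
    elems = [drift(L_star), thin_lens(f1), drift(L_q),
             thin_lens(-f2), drift(L_q), thin_lens(f1)]
    x, xp = x0, xp0
    worst = abs(x)
    for M in elems:
        x, xp = M[0][0] * x + M[0][1] * xp, M[1][0] * x + M[1][1] * xp
        worst = max(worst, abs(x))
    return worst

# Coarse scan of the (f1, f2) parameter space; a finer search (and MADX
# verification) would follow in a real workflow.
grid = [(f1, f2) for f1 in (4.0, 6.0, 8.0) for f2 in (3.0, 5.0, 7.0)]
best = min(grid, key=lambda p: max_excursion(*p))
```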
Adaptive Large Neighbourhood Search
DEFF Research Database (Denmark)
Røpke, Stefan
Large neighborhood search is a metaheuristic that has gained popularity in recent years. The heuristic repeatedly moves from solution to solution by first partially destroying the solution and then repairing it. The best solution observed during this search is presented as the final solution. This tutorial introduces the large neighborhood search metaheuristic and the variant adaptive large neighborhood search, which dynamically tunes parameters of the heuristic while it is running. Both heuristics belong to a broader class of heuristics that search a solution space using very large neighborhoods. The tutorial also presents applications of the adaptive large neighborhood search, mostly related to vehicle routing problems, for which the heuristic has been extremely successful. We discuss how the heuristic can be parallelized and thereby take advantage of modern desktop computers.
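The destroy-repair loop with adaptive operator weights can be sketched on a tiny travelling-salesman instance. The operators, reward scheme and instance below are a minimal illustration under assumed parameters, not the tutorial's reference implementation.

```python
import random

def tour_length(tour, pts):
    """Length of the closed tour visiting pts in the given order."""
    return sum(((pts[tour[i]][0] - pts[tour[i - 1]][0]) ** 2 +
                (pts[tour[i]][1] - pts[tour[i - 1]][1]) ** 2) ** 0.5
               for i in range(len(tour)))

def greedy_insert(tour, removed, pts):
    # Repair operator: re-insert each removed city at its cheapest position.
    for c in removed:
        best_pos = min(range(len(tour) + 1),
                       key=lambda i: tour_length(tour[:i] + [c] + tour[i:], pts))
        tour.insert(best_pos, c)
    return tour

def alns_tsp(pts, iters=300, seed=3):
    """Adaptive large neighborhood search sketch: two destroy operators
    whose selection weights grow whenever they yield an improvement."""
    rng = random.Random(seed)
    n = len(pts)
    best = list(range(n))
    weights = [1.0, 1.0]          # [random removal, segment removal]
    for _ in range(iters):
        op = rng.choices([0, 1], weights=weights)[0]
        cur = best[:]
        k = max(2, n // 5)
        if op == 0:
            removed = rng.sample(cur, k)          # destroy: random cities
        else:
            start = rng.randrange(n)              # destroy: consecutive segment
            removed = [cur[(start + i) % n] for i in range(k)]
        for c in removed:
            cur.remove(c)
        cur = greedy_insert(cur, removed, pts)
        if tour_length(cur, pts) < tour_length(best, pts):
            best = cur
            weights[op] += 0.5    # reward the successful destroy operator
    return best

rng0 = random.Random(7)
pts = [(rng0.random(), rng0.random()) for _ in range(12)]
tour = alns_tsp(pts)
```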
Dorninger, P.; Koma, Z.; Székely, B.
2012-04-01
In recent years, laser scanning, also referred to as LiDAR, has proved to be an important tool for topographic data acquisition. Basically, laser scanning acquires a more or less homogeneously distributed point cloud. These points represent all natural objects like terrain and vegetation as well as man-made objects such as buildings, streets, powerlines, or other constructions. Due to the enormous amount of data provided by current scanning systems, which capture up to several hundred thousand points per second, the immediate application of such point clouds for large-scale interpretation and analysis is often prohibitive due to restrictions of the hardware and software infrastructure. To overcome this, numerous methods exist for the determination of derived products. Commonly, Digital Terrain Models (DTM) or Digital Surface Models (DSM) are derived to represent the topography on a regular grid. The obvious advantages are a significant reduction of the amount of data and the introduction of an implicit neighborhood topology enabling the application of efficient post-processing methods. The major disadvantages are the loss of 3D information (i.e. overhangs) as well as the loss of information due to the interpolation approach used. We introduced a segmentation approach enabling the determination of planar structures within a given point cloud. It was originally developed for the purpose of building modeling but has proven to be well suited for large-scale geomorphological analysis as well. The result is an assignment of the original points to a set of planes. Each plane is represented by its plane parameters. Additionally, numerous quality and quantity parameters are determined (e.g. aspect, slope, local roughness, etc.). In this contribution, we investigate the influence of the control parameters required for the plane segmentation on the geomorphological interpretation of the derived product. The respective control parameters may be determined
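The per-plane parameters mentioned (slope, aspect, local roughness) follow directly from a least-squares plane fit; a self-contained sketch using the 3x3 normal equations is shown below. The aspect convention is illustrative, and this is not the authors' segmentation algorithm, only the plane-parameter step.

```python
import math

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c via the 3x3 normal equations,
    solved with Gaussian elimination; returns the coefficients together
    with slope, an illustrative aspect angle, and RMS residual (roughness)."""
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points)
    sxx = sum(p[0] * p[0] for p in points); syy = sum(p[1] * p[1] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sxz = sum(p[0] * p[2] for p in points); syz = sum(p[1] * p[2] for p in points)
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, float(n)]]
    rhs = [sxz, syz, sz]
    # Gaussian elimination with partial pivoting.
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        rhs[i], rhs[piv] = rhs[piv], rhs[i]
        for r in range(i + 1, 3):
            m = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= m * A[i][c]
            rhs[r] -= m * rhs[i]
    sol = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        sol[i] = (rhs[i] - sum(A[i][j] * sol[j] for j in range(i + 1, 3))) / A[i][i]
    a, b, c = sol
    rms = math.sqrt(sum((p[2] - (a * p[0] + b * p[1] + c)) ** 2
                        for p in points) / n)
    slope = math.degrees(math.atan(math.hypot(a, b)))   # steepest gradient
    aspect = math.degrees(math.atan2(b, a))             # gradient direction (illustrative)
    return (a, b, c), slope, aspect, rms

# Synthetic patch lying exactly on the plane z = 0.5x + 0.2y + 3.
plane, slope_deg, aspect_deg, rms = fit_plane(
    [(x, y, 0.5 * x + 0.2 * y + 3.0) for x in range(4) for y in range(4)])
```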
Advanced UVOIR Mirror Technology Development (AMTD) for Very Large Space Telescopes
Stahl, H. Philip; Smith, W. Scott; Mosier, Gary; Abplanalp, Laura; Arnold, William
2014-01-01
ASTRO2010 Decadal stated that an advanced large-aperture ultraviolet, optical, near-infrared (UVOIR) telescope is required to enable the next generation of compelling astrophysics and exoplanet science, and that present technology is not mature enough to affordably build and launch any potential UVOIR mission concept. AMTD builds on the state of the art (SOA) defined by over 30 years of monolithic and segmented ground- and space-telescope mirror technology to mature six key technologies. AMTD is deliberately pursuing multiple design paths to provide the science community with options to enable either large-aperture monolithic or segmented mirrors, with clear engineering metrics traceable to science requirements.
Visual exploration of parameter influence on phylogenetic trees.
Hess, Martin; Bremm, Sebastian; Weissgraeber, Stephanie; Hamacher, Kay; Goesele, Michael; Wiemeyer, Josef; von Landesberger, Tatiana
2014-01-01
Evolutionary relationships between organisms are frequently derived as phylogenetic trees inferred from multiple sequence alignments (MSAs). The MSA parameter space is exponentially large, so tens of thousands of potential trees can emerge for each dataset. A proposed visual-analytics approach can reveal the parameters' impact on the trees. Given input trees created with different parameter settings, it hierarchically clusters the trees according to their structural similarity. The most important clusters of similar trees are shown together with their parameters. This view offers interactive parameter exploration and automatic identification of relevant parameters. Biologists applied this approach to real data of 16S ribosomal RNA and protein sequences of ion channels. It revealed which parameters affected the tree structures. This led to a more reliable selection of the best trees.
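The structural similarity used to cluster trees in such approaches is commonly some variant of the Robinson-Foulds distance; a minimal sketch over nested-tuple trees is shown below. The tuple representation and the specific distance are assumptions for illustration, not necessarily the measure used by the paper.

```python
def clades(tree):
    """Collect the leaf set of every internal node of a nested-tuple tree."""
    out = set()
    def walk(node):
        if isinstance(node, tuple):
            leaves = frozenset().union(*(walk(ch) for ch in node))
            out.add(leaves)
            return leaves
        return frozenset([node])
    walk(tree)
    return out

def rf_distance(t1, t2):
    """Robinson-Foulds distance: the number of clades present in exactly one
    of the two trees, a standard structural dissimilarity for clustering."""
    return len(clades(t1) ^ clades(t2))

# Two trees over the same four taxa that differ in one internal grouping.
a = ((("A", "B"), "C"), "D")
b = ((("A", "C"), "B"), "D")
```

Pairwise distances like this feed directly into hierarchical clustering of the thousands of candidate trees.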
Longitudinal Phase Space Tomography with Space Charge
Hancock, S; Lindroos, M
2000-01-01
Tomography is now a very broad topic with a wealth of algorithms for the reconstruction of both qualitative and quantitative images. In an extension in the domain of particle accelerators, one of the simplest algorithms has been modified to take into account the non-linearity of large-amplitude synchrotron motion. This permits the accurate reconstruction of longitudinal phase space density from one-dimensional bunch profile data. The method is a hybrid one which incorporates particle tracking. Hitherto, a very simple tracking algorithm has been employed because only a brief span of measured profile data is required to build a snapshot of phase space. This is one of the strengths of the method, as tracking for relatively few turns relaxes the precision to which input machine parameters need to be known. The recent addition of longitudinal space charge considerations as an optional refinement of the code is described. Simplicity suggested an approach based on the derivative of bunch shape with the properties of...
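The simple tracking embedded in such reconstructions can be sketched as a discrete one-turn map for synchrotron motion, nonlinear in the phase as the abstract emphasises. The tune and amplitudes below are illustrative, and no space-charge term is included.

```python
import math

def track(phi0, delta0, turns=2000, tune=0.01):
    """Discrete one-turn map for synchrotron motion (pendulum-like,
    area-preserving symplectic-Euler update): an rf kick that is nonlinear
    in phase, followed by a phase slip proportional to the energy error.
    This is the kind of lightweight tracking used to build phase-space
    snapshots from short spans of profile data."""
    k = (2 * math.pi * tune) ** 2
    phis, deltas = [phi0], [delta0]
    phi, delta = phi0, delta0
    for _ in range(turns):
        delta -= k * math.sin(phi)   # nonlinear rf kick
        phi += delta                 # phase slip
        phis.append(phi)
        deltas.append(delta)
    return phis, deltas

# Small-amplitude motion stays bounded (stable libration about phi = 0).
phis, deltas = track(0.5, 0.0)
```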
Model Experiments for the Determination of Airflow in Large Spaces
DEFF Research Database (Denmark)
Nielsen, Peter V.
Model experiments are one of the methods used for the determination of airflow in large spaces. This paper discusses the formation of the governing dimensionless numbers. It is shown that experiments at reduced scale often require a fully developed turbulence level of the flow. Details of the flow from supply openings are very important for the determination of room air distribution. It is in some cases possible to make a simplified supply opening for the model experiment.
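The governing dimensionless numbers can be made concrete with a small sketch (illustrative values only: the air viscosity, geometry and velocities below are assumptions, not from the paper). It shows why enforcing Archimedes-number similarity in a reduced-scale model drives the Reynolds number down, so the model flow must remain fully turbulent:

```python
def reynolds(u, L, nu=15.1e-6):
    """Reynolds number Re = u L / nu (kinematic viscosity of air at ~20 C)."""
    return u * L / nu

def archimedes(dT, L, u, T0=293.15, g=9.81):
    """Archimedes number Ar = g (dT/T0) L / u^2, the buoyancy-to-inertia
    ratio that must match between model and full scale for buoyant flows."""
    return g * (dT / T0) * L / u ** 2

# Full scale: 3 m supply height, 0.5 m/s supply velocity, 10 K difference.
Ar_full = archimedes(10.0, 3.0, 0.5)

# 1:5 scale model: keeping Ar equal fixes the model velocity ...
u_model = 0.5 * (1 / 5) ** 0.5
Ar_model = archimedes(10.0, 3.0 / 5, u_model)

# ... but the Reynolds number then drops by a factor (1/5)**1.5, which is
# why the model flow must still be fully turbulent for results to scale up.
Re_full = reynolds(0.5, 3.0)
Re_model = reynolds(u_model, 3.0 / 5)
```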
Biased Tracers in Redshift Space in the EFT of Large-Scale Structure
Energy Technology Data Exchange (ETDEWEB)
Perko, Ashley [Stanford U., Phys. Dept.; Senatore, Leonardo [KIPAC, Menlo Park; Jennings, Elise [Chicago U., KICP; Wechsler, Risa H. [Stanford U., Phys. Dept.
2016-10-28
The Effective Field Theory of Large-Scale Structure (EFTofLSS) provides a novel formalism that is able to accurately predict the clustering of large-scale structure (LSS) in the mildly non-linear regime. Here we provide the first computation of the power spectrum of biased tracers in redshift space at one loop order, and we make the associated code publicly available. We compare the multipoles $\ell=0,2$ of the redshift-space halo power spectrum, together with the real-space matter and halo power spectra, with data from numerical simulations at $z=0.67$. For the samples we compare to, which have a number density of $\bar n=3.8 \cdot 10^{-2}(h \ {\rm Mpc}^{-1})^3$ and $\bar n=3.9 \cdot 10^{-4}(h \ {\rm Mpc}^{-1})^3$, we find that the calculation at one-loop order matches numerical measurements to within a few percent up to $k\simeq 0.43 \ h \ {\rm Mpc}^{-1}$, a significant improvement with respect to former techniques. By performing the so-called IR-resummation, we find that the Baryon Acoustic Oscillation peak is accurately reproduced. Based on the results presented here, long-wavelength statistics that are routinely observed in LSS surveys can finally be computed in the EFTofLSS. The formalism is thus ready to be compared directly with observational data.
Large Scale Gaussian Processes for Atmospheric Parameter Retrieval and Cloud Screening
Camps-Valls, G.; Gomez-Chova, L.; Mateo, G.; Laparra, V.; Perez-Suay, A.; Munoz-Mari, J.
2017-12-01
Current Earth-observation (EO) applications for image classification have to deal with an unprecedented amount of heterogeneous and complex data sources. Spatio-temporally explicit classification methods are a requirement in a variety of Earth system data processing applications. Upcoming missions such as the super-spectral Copernicus Sentinels, EnMAP, and FLEX will soon provide unprecedented data streams. Very high resolution (VHR) sensors like WorldView-3 also pose big challenges to data processing. The challenge is not only attached to optical sensors but also to infrared sounders and radar images, which have increased in spectral, spatial and temporal resolution. Besides, we should not forget the availability of the extremely large remote sensing data archives already collected by several past missions, such as ENVISAT, COSMO-SkyMed, Landsat, SPOT, or SEVIRI/MSG. These large-scale data problems require enhanced processing techniques that should be accurate, robust and fast. Standard parameter retrieval and classification algorithms cannot cope with this new scenario efficiently. In this work, we review the field of large-scale kernel methods for both atmospheric parameter retrieval and cloud detection using infrared sounding IASI data and optical SEVIRI/MSG imagery. We propose novel Gaussian Processes (GPs) to train problems with millions of instances and a high number of input features. The algorithms can cope with non-linearities efficiently, accommodate multi-output problems, and provide confidence intervals for the predictions. Several strategies to speed up the algorithms are devised: random Fourier features and variational approaches for cloud classification using IASI data and SEVIRI/MSG, and engineered randomized kernel functions and emulation in temperature, moisture and ozone atmospheric profile retrieval from IASI as a proxy to the upcoming MTG-IRS sensor. An excellent compromise between accuracy and scalability is obtained in all applications.
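The random Fourier feature idea mentioned above can be sketched in numpy (a generic illustration of the Rahimi-Recht construction, not the authors' code): a few thousand random features turn kernel evaluations into plain inner products, which is what lets GP-style models scale to millions of instances.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(X, Y, gamma):
    """Exact RBF kernel matrix exp(-gamma * ||x - y||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def rff_features(X, gamma, n_features):
    """Random Fourier features: z(x) @ z(y) approximates the RBF kernel.

    Frequencies are drawn from the kernel's spectral density,
    w ~ N(0, 2*gamma*I) for exp(-gamma * d^2)."""
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = rng.normal(size=(50, 3))
Z = rff_features(X, gamma=0.5, n_features=2000)
# The explicit 50x50 kernel is only built here to check the approximation.
err = np.abs(rbf_kernel(X, X, 0.5) - Z @ Z.T).max()
```

A linear model fitted on `Z` then approximates the corresponding kernel machine at a cost linear in the number of samples.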
International Nuclear Information System (INIS)
Misra, Aalok; Shukla, Pramod
2010-01-01
We consider type IIB large volume compactifications involving orientifolds of the Swiss Cheese Calabi-Yau WCP^4[1,1,1,6,9] with a single mobile space-time filling D3-brane and stacks of D7-branes wrapping the 'big' divisor Σ_B (as opposed to the 'small' divisor usually done in the literature thus far) as well as supporting D7-brane fluxes. After reviewing our proposal of (Misra and Shukla, 2010) for resolving a long-standing tension between large volume cosmology and phenomenology pertaining to obtaining a 10^12 GeV gravitino in the inflationary era and a TeV gravitino in the present era, and summarizing our results of (Misra and Shukla, 2010) on soft supersymmetry breaking terms and open-string moduli masses, we discuss the one-loop RG running of the squark and slepton masses in mSUGRA-like models (using the running of the gaugino masses) to the EW scale in the large volume limit. Phenomenological constraints and some of the calculated soft SUSY parameters identify the D7-brane Wilson line moduli as the first two generations/families of squarks and sleptons and the D3-brane (restricted to the big divisor) position moduli as the two Higgses for MSSM-like models at TeV scale. We also discuss how the obtained open-string/matter moduli make it easier to impose FCNC constraints, as well as RG flow of off-diagonal squark mass(-squared) matrix elements.
International Nuclear Information System (INIS)
Ma Huanfei; Lin Wei
2009-01-01
The existing adaptive synchronization technique, based on the stability theory and invariance principle of dynamical systems, though theoretically proved to be valid for parameter identification in specific models, often shows a slow convergence rate and can even fail in practice when the number of parameters becomes large. Here, a novel nonlinear adaptive rule for the parameter updates is proposed to accelerate the rate. Its feasibility is validated by analytical arguments as well as by specific parameter identification in the Lotka-Volterra model with multiple species. Two adjustable factors in this rule influence the identification accuracy, which means that a proper choice of these factors leads to an optimal performance of this rule. In addition, a feasible method for avoiding the occurrence of approximate linear dependence among terms with parameters on the synchronized manifold is also proposed.
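A minimal sketch of the underlying adaptive-synchronization idea (a scalar plant rather than the paper's multi-species Lotka-Volterra system, and a standard gradient update rather than the paper's accelerated nonlinear rule; all gains and values are assumptions):

```python
import math

# Scalar plant dx/dt = a*x + u with unknown parameter a; the input
# u(t) = sin(t) is persistently exciting, which is what allows the
# parameter estimate to converge, not just the synchronization error.
a_true, k, gamma, dt = -1.0, 2.0, 5.0, 1e-3
x, x_hat, a_hat = 1.0, 0.0, 0.0

for i in range(int(200.0 / dt)):
    u = math.sin(i * dt)
    e = x - x_hat                       # synchronization error
    # Observer and gradient-style parameter update (explicit Euler):
    x_hat += dt * (a_hat * x + u + k * e)
    a_hat += dt * (gamma * e * x)       # d(a_hat)/dt = gamma * e * x
    x += dt * (a_true * x + u)
```

With many parameters, updates of this plain gradient type slow down dramatically, which is the convergence problem the paper's nonlinear rule addresses.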
Salama, Farid; Tan, Xiaofeng; Cami, Jan; Biennier, Ludovic; Remy, Jerome
2006-01-01
Polycyclic Aromatic Hydrocarbons (PAHs) are an important and ubiquitous component of carbon-bearing materials in space. A long-standing and major challenge for laboratory astrophysics has been to measure the spectra of large carbon molecules in laboratory environments that mimic (in a realistic way) the physical conditions that are associated with the interstellar emission and absorption regions [1]. This objective has been identified as one of the critical Laboratory Astrophysics objectives to optimize the data return from space missions [2]. An extensive laboratory program has been developed to assess the properties of PAHs in such environments and to describe how they influence the radiation and energy balance in space. We present and discuss the gas-phase electronic absorption spectra of neutral and ionized PAHs measured in the UV-Visible-NIR range in astrophysically relevant environments and discuss the implications for astrophysics [1]. The harsh physical conditions of the interstellar medium, characterized by low temperature, an absence of collisions, and strong VUV radiation fields, have been simulated in the laboratory by associating a pulsed cavity ringdown spectrometer (CRDS) with a supersonic slit jet seeded with PAHs and an ionizing, Penning-type electronic discharge. We have measured for the first time the spectra of a series of neutral [3,4] and ionized [5,6] interstellar PAH analogs in the laboratory. An effort has also been attempted to quantify the mechanisms of ion and carbon nanoparticle production in the free jet expansion and to model our simulation of the diffuse interstellar medium in the laboratory [7]. These experiments provide unique information on the spectra of free, large carbon-containing molecules and ions in the gas phase. We are now, for the first time, in the position to directly compare laboratory spectral data on free, cold PAH ions and nano-sized carbon particles with astronomical observations in the
A future large-aperture UVOIR space observatory: reference designs
Rioux, Norman; Thronson, Harley; Feinberg, Lee; Stahl, H. Philip; Redding, Dave; Jones, Andrew; Sturm, James; Collins, Christine; Liu, Alice
2015-09-01
Our joint NASA GSFC/JPL/MSFC/STScI study team has used community-provided science goals to derive mission needs, requirements, and candidate mission architectures for a future large-aperture, non-cryogenic UVOIR space observatory. We describe the feasibility assessment of system thermal and dynamic stability for supporting coronagraphy. The observatory is in a Sun-Earth L2 orbit providing a stable thermal environment and excellent field of regard. Reference designs include a 36-segment 9.2 m aperture telescope that stows within a five meter diameter launch vehicle fairing. Performance needs developed under the study are traceable to a variety of reference designs including options for a monolithic primary mirror.
Ibrahim, Mohamed
2017-08-28
Molecular dynamics (MD) simulations are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry. Excessive super-sampling can alleviate this problem, but is prohibitively expensive. This paper presents a novel visualization method for large-scale particle data that addresses aliasing while enabling interactive high-quality rendering. We introduce the novel concept of screen-space normal distribution functions (S-NDFs) for particle data. S-NDFs represent the distribution of surface normals that map to a given pixel in screen space, which enables high-quality re-lighting without re-rendering particles. In order to facilitate interactive zooming, we cache S-NDFs in a screen-space mipmap (S-MIP). Together, these two concepts enable interactive, scale-consistent re-lighting and shading changes, as well as zooming, without having to re-sample the particle data. We show how our method facilitates the interactive exploration of real-world large-scale MD simulation data in different scenarios.
Major technological innovations introduced in the large antennas of the Deep Space Network
Imbriale, W. A.
2002-01-01
The NASA Deep Space Network (DSN) is the largest and most sensitive scientific, telecommunications and radio navigation network in the world. Its principal responsibilities are to provide communications, tracking, and science services to most of the world's spacecraft that travel beyond low Earth orbit. The network consists of three Deep Space Communications Complexes. Each of the three complexes consists of multiple large antennas equipped with ultra sensitive receiving systems. A centralized Signal Processing Center (SPC) remotely controls the antennas, generates and transmits spacecraft commands, and receives and processes the spacecraft telemetry.
Space-Time Fractional Diffusion-Advection Equation with Caputo Derivative
Directory of Open Access Journals (Sweden)
José Francisco Gómez Aguilar
2014-01-01
An alternative construction for the space-time fractional diffusion-advection equation for sedimentation phenomena is presented. The order of the derivative is considered as 0 < β, γ ≤ 1 for the space and time domain, respectively. The fractional derivative of Caputo type is considered. In the spatial case we obtain the fractional solution for the underdamped, undamped, and overdamped cases. In the temporal case we show that the concentration has an amplitude that exhibits an algebraic decay at asymptotically large times, and we also show numerical simulations where both derivatives are taken in simultaneous form. In order that the equation preserves the physical units of the system, two auxiliary parameters σ_x and σ_t are introduced, characterizing the existence of the fractional space and time components, respectively. A physical relation between these parameters is reported, and the solutions in space-time are given in terms of the Mittag-Leffler function depending on the parameters β and γ. The generalization of the fractional diffusion-advection equation in space-time exhibits anomalous behavior.
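The Mittag-Leffler function in which such solutions are expressed can be evaluated directly from its power series for moderate arguments; a minimal sketch (the truncation length is an arbitrary choice):

```python
from math import gamma

def mittag_leffler(beta, z, terms=80):
    """One-parameter Mittag-Leffler function E_beta(z) from its power
    series sum_k z^k / Gamma(beta*k + 1); adequate for moderate |z|.
    For beta = 1 it reduces to exp(z)."""
    return sum(z ** k / gamma(beta * k + 1) for k in range(terms))
```

For β < 1 and large negative arguments E_β decays algebraically rather than exponentially, which is the anomalous behavior referred to above (that regime needs an asymptotic expansion rather than this series).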
Just-in-time connectivity for large spiking networks.
Lytton, William W; Omurtag, Ahmet; Neymotin, Samuel A; Hines, Michael L
2008-11-01
The scale of large neuronal network simulations is memory limited due to the need to store connectivity information: connectivity storage grows as the square of neuron number up to anatomically relevant limits. Using the NEURON simulator as a discrete-event simulator (no integration), we explored the consequences of avoiding the space costs of connectivity through regenerating connectivity parameters when needed: just in time after a presynaptic cell fires. We explored various strategies for automated generation of one or more of the basic static connectivity parameters: delays, postsynaptic cell identities, and weights, as well as run-time connectivity state: the event queue. Comparison of the JitCon implementation to NEURON's standard NetCon connectivity method showed substantial space savings, with associated run-time penalty. Although JitCon saved space by eliminating connectivity parameters, larger simulations were still memory limited due to growth of the synaptic event queue. We therefore designed a JitEvent algorithm that added items to the queue only when required: instead of alerting multiple postsynaptic cells, a spiking presynaptic cell posted a callback event at the shortest synaptic delay time. At the time of the callback, this same presynaptic cell directly notified the first postsynaptic cell and generated another self-callback for the next delay time. The JitEvent implementation yielded substantial additional time and space savings. We conclude that just-in-time strategies are necessary for very large network simulations but that a variety of alternative strategies should be considered whose optimality will depend on the characteristics of the simulation to be run.
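The core just-in-time idea, regenerating a cell's outgoing connectivity deterministically from a per-cell seed instead of storing it, can be sketched as follows (network size, out-degree and parameter distributions are hypothetical, and this is plain Python rather than NEURON):

```python
import random

N = 1_000_000   # cells; storing N * K explicit synapses would dominate memory
K = 100         # out-degree per cell

def synapses(pre_id):
    """Regenerate cell pre_id's outgoing synapses on demand.

    Seeding a private generator with the presynaptic cell's id makes the
    draws reproducible, so no per-synapse state is stored: targets, delays
    and weights are recreated whenever the presynaptic cell fires."""
    rng = random.Random(pre_id)
    targets = rng.sample(range(N), K)
    delays = [rng.uniform(1.0, 5.0) for _ in range(K)]     # ms
    weights = [rng.gauss(0.5, 0.1) for _ in range(K)]
    return list(zip(targets, delays, weights))
```

As the abstract notes, this trades memory for the run-time cost of regeneration, and the event queue then becomes the next memory bottleneck.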
Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.
2012-01-01
An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.
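The Gauss-Newton update at the heart of PEST-style parameter estimation can be sketched in a few lines (a hypothetical exponential-decay forward model; PEST++ itself adds damping, regularization and parallel run management on top of this core):

```python
import numpy as np

def model(p, t):
    """Hypothetical forward model: exponential decay y = p0 * exp(-p1 * t)."""
    return p[0] * np.exp(-p[1] * t)

def gauss_newton(p0, t, obs, n_iter=20):
    """Gauss-Newton iteration with a finite-difference Jacobian, the basic
    update underlying PEST-style estimation codes (sketch only)."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = obs - model(p, t)                     # residuals
        J = np.empty((t.size, p.size))
        for j in range(p.size):                   # one model run per column
            dp = np.zeros_like(p)
            dp[j] = 1e-6 * max(abs(p[j]), 1.0)
            J[:, j] = (model(p + dp, t) - model(p, t)) / dp[j]
        p = p + np.linalg.lstsq(J, r, rcond=None)[0]
    return p

t = np.linspace(0.0, 5.0, 40)
p_true = np.array([2.0, 0.7])
p_est = gauss_newton([1.8, 0.8], t, model(p_true, t))
```

Each Jacobian column costs one forward-model run, which is why run managers and parallelization matter for highly parameterized problems.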
Electroweak baryogenesis, large Yukawas and dark matter
International Nuclear Information System (INIS)
Provenza, Alessio; Quiros, Mariano; Ullio, Piero
2005-01-01
It has recently been shown that the electroweak baryogenesis mechanism is feasible in Standard Model extensions containing extra fermions with large Yukawa couplings. We show here that the lightest of these fermionic fields can naturally be a good candidate for cold dark matter. We find regions in the parameter space where the thermal relic abundance of this particle is compatible with the dark matter density of the Universe as determined by the WMAP experiment. We study direct and indirect dark matter detection for this model and compare with current experimental limits and prospects for upcoming experiments. We find, contrary to the standard lore, that indirect detection searches are more promising than direct ones, and they already exclude part of the parameter space
Watson, Judith J.
1992-08-01
An astronaut monorail system (AMS) is presented as a vehicle to transport and position EVA astronauts along large space truss structures. The AMS is proposed specifically as an alternative to the crew and equipment transfer aid for Space Station Freedom. Design considerations for the AMS were discussed and a reference configuration was selected for the study. Equations were developed to characterize the stiffness and frequency behavior of the AMS positioning arm. Experimental data showed that these equations gave a fairly accurate representation of the stiffness and frequency behavior of the arm. A study was presented to show trends for the arm behavior based on varying parameters of the stiffness and frequency equations. An ergonomics study was conducted to provide boundary conditions for tolerable frequency and deflection to be used in developing a design concept for the positioning arm. The feasibility of the AMS positioning arm was examined using equations and working curves developed in this study. It was found that a positioning arm of a length to reach all interior points of the space station truss structure could not be designed to satisfy frequency and deflection constraints. By relaxing the design requirements and the ergonomic boundaries, an arm could be designed which would provide a stable work platform for the EVA astronaut and give him access to over 75 percent of the truss interior.
A morphing technique for signal modelling in a multidimensional space of coupling parameters
The ATLAS collaboration
2015-01-01
This note describes a morphing method that produces signal models for fits to data in which both the affected event yields and kinematic distributions are simultaneously taken into account. The signal model is morphed in a continuous manner through the available multi-dimensional parameter space. Searches for deviations from Standard Model predictions for Higgs boson properties have so far used information either from event yields or kinematic distributions. The combined approach described here is expected to substantially enhance the sensitivity to beyond the Standard Model contributions.
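The morphing idea can be illustrated with a toy model (bin contents, benchmark coupling values and the quadratic dependence are all assumptions for illustration): if the signal yield in each bin is polynomial in a coupling g, as for an amplitude linear in the coupling, a small set of benchmark templates fixes the model continuously across the parameter space.

```python
import numpy as np

# Toy binned "templates" generated at benchmark values of a coupling g.
benchmarks = {
    0.0: np.array([10.0, 20.0, 5.0]),
    1.0: np.array([14.0, 18.0, 9.0]),
    2.0: np.array([22.0, 14.0, 17.0]),
}

g_bench = np.array(sorted(benchmarks))
T = np.vstack([benchmarks[g] for g in g_bench])   # (3, n_bins)
A = np.vander(g_bench, 3, increasing=True)        # rows [1, g, g^2]
coeff = np.linalg.solve(A, T)                     # per-bin polynomial coeffs

def morph(g):
    """Signal template at arbitrary g, continuous in the parameter and
    exactly reproducing the benchmark templates."""
    return np.array([1.0, g, g * g]) @ coeff
```

With several couplings the same construction runs over all monomials of the couplings, which is what makes the multi-dimensional morphing of the note possible.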
Directory of Open Access Journals (Sweden)
D. Sarsri
2016-03-01
This paper presents a methodological approach to compute the stochastic eigenmodes of large FE models with parameter uncertainties, based on coupling the second-order perturbation method with component mode synthesis methods. Various component mode synthesis methods are used to optimally reduce the size of the model. The statistical first two moments of the dynamic response of the reduced system are obtained by the second-order perturbation method. Numerical results illustrating the accuracy and efficiency of the proposed coupled methodological procedures for large FE models with uncertain parameters are presented.
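The perturbation ingredient can be illustrated on a toy structural model (a hypothetical 3-DOF spring-mass chain; only the first-order eigenvalue term is shown, whereas the paper applies the second-order expansion to a CMS-reduced model):

```python
import numpy as np

# Nominal stiffness matrix of a 3-DOF spring-mass chain with unit masses.
K0 = np.array([[ 2.0, -1.0,  0.0],
               [-1.0,  2.0, -1.0],
               [ 0.0, -1.0,  2.0]])
lam0, V = np.linalg.eigh(K0)        # nominal eigenvalues and modes

# Uncertainty in the first grounding spring: only K[0,0] is perturbed.
dK = np.zeros((3, 3))
dK[0, 0] = 0.1

# First-order perturbation of each eigenvalue: dlam_i = v_i^T dK v_i.
dlam = np.einsum("ji,jk,ki->i", V, dK, V)

# Reference: exact eigenvalues of the perturbed stiffness matrix.
lam_exact = np.linalg.eigvalsh(K0 + dK)
```

Propagating such per-parameter sensitivities through the statistics of the uncertain parameters yields the first two moments of the response without re-solving the full eigenproblem for every realization.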
Phase transitions in de Sitter space
Directory of Open Access Journals (Sweden)
Alexander Vilenkin
1983-10-01
Full Text Available An effective potential in de Sitter space is calculated for a model of two interacting scalar fields in one-loop approximation and in a self-consistent approximation which takes into account an infinite set of diagrams. Various approaches to renormalization in de Sitter space are discussed. The results are applied to analyze the phase transition in the Hawking-Moss version of the inflationary universe scenario. Requiring that inflation is sufficiently large, we derive constraints on the parameters of the model.
da Costa, Diogo Ricardo; Hansen, Matheus; Guarise, Gustavo; Medrano-T, Rene O.; Leonel, Edson D.
2016-04-01
We show that extreme orbits, trajectories that connect local maximum and minimum values of one-dimensional maps, play a major role in the parameter space of dissipative systems, dictating the organization of the windows of periodicity and hence producing sets of shrimp-like structures. Here we solve three fundamental problems regarding the distribution of these sets and give: (i) their precise localization in the parameter space, even for sets of very high periods; (ii) their local and global distributions along cascades; and (iii) the association of these cascades to complicated sets of periodicity. The extreme orbits prove to be a powerful indicator for investigating the organization of windows of periodicity in parameter planes. As applications of the theory, we obtain results for the circle map and the perturbed logistic map. The formalism presented here can be extended to many other nonlinear and dissipative systems.
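For the logistic map, an extreme orbit passes through the map's maximum at x = 1/2; the parameter at which that point returns to itself after n iterations (a superstable orbit, at the centre of a window of periodicity) can be located by bisection. A minimal one-parameter sketch (the bracketing intervals are chosen by hand):

```python
def logistic(r, x):
    return r * x * (1.0 - x)

def extreme_return(r, n):
    """Iterate the extremum x = 1/2 (the map's local maximum) n times
    and return the signed distance back to 1/2."""
    x = 0.5
    for _ in range(n):
        x = logistic(r, x)
    return x - 0.5

def superstable_r(n, lo, hi, tol=1e-12):
    """Bisect for the parameter where the extremum is periodic with
    period n, i.e. the centre of the corresponding periodicity window."""
    flo = extreme_return(lo, n)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * extreme_return(mid, n) <= 0:
            hi = mid
        else:
            lo, flo = mid, extreme_return(mid, n)
    return 0.5 * (lo + hi)
```

For period 2 this recovers the known value r = 1 + sqrt(5). In two-parameter maps the same extreme-orbit condition traces out the spines of the shrimp-like structures.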
Parameter space of general gauge mediation
International Nuclear Information System (INIS)
Rajaraman, Arvind; Shirman, Yuri; Smidt, Joseph; Yu, Felix
2009-01-01
We study a subspace of General Gauge Mediation (GGM) models which generalize models of gauge mediation. We find superpartner spectra that are markedly different from those of typical gauge and gaugino mediation scenarios. While typical gauge mediation predictions of either a neutralino or stau next-to-lightest supersymmetric particle (NLSP) are easily reproducible with the GGM parameters, chargino and sneutrino NLSPs are generic for many reasonable choices of GGM parameters.
Exploiting large-scale correlations to detect continuous gravitational waves.
Pletsch, Holger J; Allen, Bruce
2009-10-30
Fully coherent searches (over realistic ranges of parameter space and year-long observation times) for unknown sources of continuous gravitational waves are computationally prohibitive. Less expensive hierarchical searches divide the data into shorter segments which are analyzed coherently, then detection statistics from different segments are combined incoherently. The novel method presented here solves the long-standing problem of how best to do the incoherent combination. The optimal solution exploits large-scale parameter-space correlations in the coherent detection statistic. Application to simulated data shows dramatic sensitivity improvements compared with previously available (ad hoc) methods, increasing the spatial volume probed by more than 2 orders of magnitude at lower computational cost.
Impact parameter dynamics in quantum theory in large angle scattering
International Nuclear Information System (INIS)
Andriyanov, A.A.
1975-01-01
The high-energy behaviour of the free-particle Green's function is studied for the construction of the scattering amplitude. The main part of the Green's function is determined by eikonal scattering along the mean momentum and by the total scattering along the transferred momentum. This 'impact' approximation may be included as a first approximation in an iteration scheme for the scattering amplitude along the mean momentum, i.e. the 'impact' perturbation theory. With the help of the 'impact' approximation, an expansion of the scattering amplitude in the impact parameter depending on the interaction is obtained. These expansions are more accurate than the eikonal expansions at large-angle scattering. The results are illustrated graphically for the exponential and the Yukawa potentials.
Advanced Mirror Technology Development for Very Large Space Telescopes
Stahl, H. P.
2014-01-01
Advanced Mirror Technology Development (AMTD) is a NASA Strategic Astrophysics Technology project to mature to TRL-6 the critical technologies needed to produce 4-m or larger flight-qualified UVOIR mirrors by 2018 so that a viable mission can be considered by the 2020 Decadal Review. The developed mirror technology must enable missions capable of both general astrophysics and ultra-high contrast observations of exoplanets. Just as JWST's architecture was driven by the launch vehicle, a future UVOIR mission's architecture (monolithic, segmented or interferometric) will depend on the capacities of future launch vehicles (and budget). Since we cannot predict the future, we must prepare for all potential futures. Therefore, to provide the science community with options, we are pursuing multiple technology paths. AMTD uses a science-driven systems engineering approach, guided by a Science Advisory Team and a Systems Engineering Team. We derived engineering specifications for potential future monolithic or segmented space telescopes based on science needs and implementation constraints, and we are maturing six inter-linked critical technologies to enable a potential future large-aperture UVOIR space telescope: 1) Large-Aperture, Low Areal Density, High Stiffness Mirrors; 2) Support Systems; 3) Mid/High Spatial Frequency Figure Error; 4) Segment Edges; 5) Segment-to-Segment Gap Phasing; and 6) Integrated Model Validation. We are maturing all six technologies simultaneously because all are required to make a primary mirror assembly (PMA); and it is the PMA's on-orbit performance that determines the science return. PMA stiffness depends on substrate and support stiffness. The ability to cost-effectively eliminate mid/high spatial figure errors and polishing edges depends on substrate stiffness. On-orbit thermal and mechanical performance depends on substrate stiffness, the coefficient of thermal expansion (CTE) and thermal mass. And segment-to-segment phasing depends on substrate and structure stiffness.
Bayesian parameter inference from continuously monitored quantum systems
DEFF Research Database (Denmark)
Gammelmark, Søren; Mølmer, Klaus
2013-01-01
We review the introduction of likelihood functions and Fisher information in classical estimation theory, and we show how they can be defined in a very similar manner within quantum measurement theory. We show that the stochastic master equations describing the dynamics of a quantum system subject to a definite set of measurements provide likelihood functions for unknown parameters in the system dynamics, and we show that the estimation error, given by the Fisher information, can be identified by stochastic master equation simulations. For large parameter spaces we describe and illustrate the efficient...
Schaefer, Andreas; Wenzel, Friedemann
2017-04-01
Subduction zones are generally the sources of the earthquakes with the highest magnitudes. Not only in Japan or Chile, but also in Pakistan, the Solomon Islands or the Lesser Antilles, subduction zones pose a significant hazard for the people. To understand the behavior of subduction zones, especially to identify their capability to produce maximum-magnitude earthquakes, various physical models have been developed, leading to a large number of datasets, e.g. from geodesy, geomagnetics, structural geology, etc. There have been various studies utilizing this data for the compilation of a subduction zone parameter database, but mostly concentrating on only the major zones. Here, we compile the largest dataset of subduction zone parameters both in parameter diversity and in the number of considered subduction zones. In total, more than 70 individual sources have been assessed; the parametric data have been combined with seismological data and many further sources, leading to more than 60 individual parameters. Not all parameters have been resolved for each zone, since the data completeness depends on the data availability and quality for each source. In addition, the 3D down-dip geometry of a majority of the subduction zones has been resolved using historical earthquake hypocenter data and centroid moment tensors where available, and additionally compared and verified with results from previous studies. With such a database, a statistical study has been undertaken to identify not only correlations between those parameters, to estimate in a parameter-driven way the potential for maximum possible magnitudes, but also to identify similarities between the sources themselves. This identification of similarities leads to a classification system for subduction zones. Here, it could be expected that if two sources share enough common characteristics, other characteristics of interest may be similar as well. This concept
pypet: A Python Toolkit for Data Management of Parameter Explorations.
Meyer, Robert; Obermayer, Klaus
2016-01-01
pypet (Python parameter exploration toolkit) is a new multi-platform Python toolkit for managing numerical simulations. Sampling the space of model parameters is a key aspect of simulations and numerical experiments. pypet is designed to allow easy and arbitrary sampling of trajectories through a parameter space beyond simple grid searches. pypet collects and stores both simulation parameters and results in a single HDF5 file. This collective storage allows fast and convenient loading of data for further analyses. pypet provides various additional features such as multiprocessing and parallelization of simulations, dynamic loading of data, integration of git version control, and supervision of experiments via the electronic lab notebook Sumatra. pypet supports a rich set of data formats, including native Python types, Numpy and Scipy data, Pandas DataFrames, and BRIAN(2) quantities. Besides these formats, users can easily extend the toolkit to allow customized data types. pypet is a flexible tool suited for both short Python scripts and large scale projects. pypet's various features, especially the tight link between parameters and results, promote reproducible research in computational neuroscience and simulation-based disciplines.
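The abstract's central idea, storing each run's parameters and results together so that every run is fully reconstructable, can be illustrated without pypet itself. The pure-Python sketch below is only an illustration of that concept; the function and parameter names are invented here, and pypet's real API (an `Environment` running jobs over a `Trajectory`, persisted to HDF5) looks different:

```python
from itertools import product

def explore(run, param_grid, extra_points=()):
    """Explore a parameter space: full Cartesian grid plus arbitrary
    extra points (a 'trajectory' beyond simple grid search).

    Parameters and results are stored together, keyed by run index,
    mimicking pypet's tight parameter/result linking (not its real API).
    """
    names = sorted(param_grid)
    trajectory = [dict(zip(names, values))
                  for values in product(*(param_grid[n] for n in names))]
    trajectory.extend(dict(p) for p in extra_points)  # beyond the grid
    store = {}
    for idx, params in enumerate(trajectory):
        store[idx] = {"parameters": params, "result": run(**params)}
    return store

# Toy "simulation": a response evaluated at two hypothetical parameters.
store = explore(lambda gain, decay: gain * (1 - decay),
                {"gain": [1.0, 2.0], "decay": [0.1, 0.5]},
                extra_points=[{"gain": 1.5, "decay": 0.25}])
```

Because every stored record carries its own parameter dictionary, any single run can be re-executed or audited later without consulting a separate log, which is the reproducibility property the abstract emphasizes.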
A phase transition between small- and large-field models of inflation
International Nuclear Information System (INIS)
Itzhaki, Nissan; Kovetz, Ely D
2009-01-01
We show that models of inflection point inflation exhibit a phase transition from a region in parameter space where they are of large-field type to a region where they are of small-field type. The phase transition is between a universal behavior, with respect to the initial condition, at the large-field region and non-universal behavior at the small-field region. The order parameter is the number of e-foldings. We find integer critical exponents at the transition between the two phases.
A Local Scalable Distributed EM Algorithm for Large P2P Networks
National Aeronautics and Space Administration — This paper describes a local and distributed expectation maximization algorithm for learning parameters of Gaussian mixture models (GMM) in large peer-to-peer (P2P)...
Aerogels Materials as Space Debris Collectors
Directory of Open Access Journals (Sweden)
Thierry Woignier
2013-01-01
Full Text Available Material degradation due to the specific space environment becomes a key parameter for space missions. The use of large surfaces of brittle materials on satellites can produce, if impacted by hypervelocity particles, ejected volumes of matter 100 times larger than the impacting one. The presented work is devoted to the use of silica aerogels as passive detectors. Aerogels have been exposed to the low Earth orbit on the ISS for 18 months. The study describes the aerogel process and the choice of synthesis parameters made to obtain the expected features in terms of porosity, mechanical properties, internal stresses, and transparency. Low-density aerogels (0.09 g·cm−3) have been prepared. The transparency necessary to see and identify the collected particles and fragments is obtained using base catalysis during gel synthesis. After return to Earth, the aerogel samples have been observed using optical microscopy to detect and quantify craters on the exposed surface. First results obtained on a small part of the aerogels indicate a large number of debris collected in the materials.
A change of coordinates on the large phase space of quantum cohomology
International Nuclear Information System (INIS)
Kabanov, A.
2001-01-01
The Gromov-Witten invariants of a smooth, projective variety V, when twisted by the tautological classes on the moduli space of stable maps, give rise to a family of cohomological field theories and endow the base of the family with coordinates. We prove that the potential functions associated to the tautological ψ classes (the large phase space) and the κ classes are related by a change of coordinates which generalizes a change of basis on the ring of symmetric functions. Our result is a generalization of the work of Manin-Zograf who studied the case where V is a point. We utilize this change of variables to derive the topological recursion relations associated to the κ classes from those associated to the ψ classes. (orig.)
A research on the excavation, support, and environment control of large scale underground space
Energy Technology Data Exchange (ETDEWEB)
Kang, Pil Chong; Kwon, Kwang Soo; Jeong, So Keul [Korea Institute of Geology Mining and Materials, Taejon (Korea, Republic of)
1995-12-01
With the growing necessity of underground space due to the deficiency of above-ground space, the size and shape of underground structures tend to be complex and diverse. This complexity and variety force the development of new techniques for rock mass classification, excavation and support of underground space, and monitoring and control of the underground environment. All these techniques should be applied together to make the underground space comfortable. To achieve this, efforts have been made in 5 different areas: research on underground space design and stability analysis, research on techniques for rock excavation by controlled blasting, research on the development of a monitoring system to forecast the rock behaviour of underground space, research on an environment inspection system for closed spaces, and research on dynamic analysis of airflow and environmental control in large geo-spaces. The 5 main achievements are improvement of the existing structure analysis program (EXCRACK) to consider the deformation and failure characteristics of rock joints, development of a new blasting design (SK-cut), prediction of ground vibration through the newly proposed wave propagation equation, development and in-situ application of a rock mass deformation monitoring system and data acquisition software, and trial manufacture of the environment inspection system for closed spaces. Should these techniques be applied to the development of underground space, prevention of industrial disasters, reduction of construction cost, domestication of monitoring systems, improvement of tunnel stability, curtailment of royalties, and upgrades of domestic technologies will be brought forth. (Abstract Truncated)
Space Situational Awareness of Large Numbers of Payloads From a Single Deployment
Segerman, A.; Byers, J.; Emmert, J.; Nicholas, A.
2014-09-01
The nearly simultaneous deployment of a large number of payloads from a single vehicle presents a new challenge for space object catalog maintenance and space situational awareness (SSA). Following two cubesat deployments last November, it took five weeks to catalog the resulting 64 orbits. The upcoming Kicksat mission will present an even greater SSA challenge, with its deployment of 128 chip-sized picosats. Although all of these deployments are in short-lived orbits, future deployments will inevitably occur at higher altitudes, with a longer term threat of collision with active spacecraft. With such deployments, individual scientific payload operators require rapid precise knowledge of their satellites' locations. Following the first November launch, the cataloguing did not initially associate a payload with each orbit, leaving this to the satellite operators. For short duration missions, the time required to identify an experiment's specific orbit may easily be a large fraction of the spacecraft's lifetime. For a Kicksat-type deployment, present tracking cannot collect enough observations to catalog each small object. The current approach is to treat the chip cloud as a single catalog object. However, the cloud dissipates into multiple subclouds and, ultimately, tiny groups of untrackable chips. One response to this challenge may be to mandate installation of a transponder on each spacecraft. Directional transponder transmission detections could be used as angle observations for orbit cataloguing. Of course, such an approach would only be employable with cooperative spacecraft. In other cases, a probabilistic association approach may be useful, with the goal being to establish the probability of an element being at a given point in space. This would permit more reliable assessment of the probability of collision of active spacecraft with any cloud element. This paper surveys the cataloguing challenges presented by large scale deployments of small spacecraft
LARGE-SCALE STRUCTURE OF THE UNIVERSE AS A COSMIC STANDARD RULER
International Nuclear Information System (INIS)
Park, Changbom; Kim, Young-Rae
2010-01-01
We propose to use the large-scale structure (LSS) of the universe as a cosmic standard ruler. This is possible because the pattern of large-scale distribution of matter is scale-dependent and does not change in comoving space during the linear-regime evolution of structure. By examining the pattern of LSS in several redshift intervals it is possible to reconstruct the expansion history of the universe, and thus to measure the cosmological parameters governing the expansion of the universe. The features of the large-scale matter distribution that can be used as standard rulers include the topology of LSS and the overall shapes of the power spectrum and correlation function. The genus, being an intrinsic topology measure, is insensitive to systematic effects such as the nonlinear gravitational evolution, galaxy biasing, and redshift-space distortion, and thus is an ideal cosmic ruler when galaxies in redshift space are used to trace the initial matter distribution. The genus remains unchanged as far as the rank order of density is conserved, which is true for linear and weakly nonlinear gravitational evolution, monotonic galaxy biasing, and mild redshift-space distortions. The expansion history of the universe can be constrained by comparing the theoretically predicted genus corresponding to an adopted set of cosmological parameters with the observed genus measured by using the redshift-comoving distance relation of the same cosmological model.
Yang, Eui-Hyeok; Shcheglov, Kirill
2002-01-01
Future concepts of ultra large space telescopes include segmented silicon mirrors and inflatable polymer mirrors. Primary mirrors for these systems cannot meet optical surface figure requirements and are likely to generate over several microns of wavefront errors. In order to correct for these large wavefront errors, high-stroke optical quality deformable mirrors are required. JPL has recently developed a new technology for transferring an entire wafer-level mirror membrane from one substrate to another. A thin membrane, 100 mm in diameter, has been successfully transferred without using adhesives or polymers. The measured peak-to-valley surface error of a transferred and patterned membrane (1 mm x 1 mm x 0.016 mm) is only 9 nm. The mirror element actuation principle is based on a piezoelectric unimorph. A voltage applied to the piezoelectric layer induces stress in the longitudinal direction, causing the film to deform and pull on the mirror connected to it. The advantage of this approach is that the small longitudinal strains obtainable from a piezoelectric material at modest voltages are thus translated into large vertical displacements. Modeling is performed for a unimorph membrane consisting of a clamped rectangular membrane with a PZT layer of variable dimensions. The membrane transfer technology is combined with the piezoelectric unimorph actuator concept to constitute a compact deformable mirror device with large-stroke actuation of a continuous mirror membrane, resulting in a compact AO system for use in ultra large space telescopes.
Large-Scale Demonstration of Liquid Hydrogen Storage with Zero Boiloff for In-Space Applications
Hastings, L. J.; Bryant, C. B.; Flachbart, R. H.; Holt, K. A.; Johnson, E.; Hedayat, A.; Hipp, B.; Plachta, D. W.
2010-01-01
Cryocooler and passive insulation technology advances have substantially improved prospects for zero-boiloff cryogenic storage. Therefore, a cooperative effort by NASA's Ames Research Center, Glenn Research Center, and Marshall Space Flight Center (MSFC) was implemented to develop zero-boiloff concepts for in-space cryogenic storage. Described herein is one program element - a large-scale, zero-boiloff demonstration using the MSFC multipurpose hydrogen test bed (MHTB). A commercial cryocooler was interfaced with an existing MHTB spray bar mixer and insulation system in a manner that enabled a balance between incoming and extracted thermal energy.
Wanders, N.; Bierkens, M. F. P.; de Jong, S. M.; de Roo, A.; Karssenberg, D.
2014-08-01
Large-scale hydrological models are nowadays mostly calibrated using observed discharge. As a result, a large part of the hydrological system, in particular the unsaturated zone, remains uncalibrated. Soil moisture observations from satellites have the potential to fill this gap. Here we evaluate the added value of remotely sensed soil moisture in calibration of large-scale hydrological models by addressing two research questions: (1) Which parameters of hydrological models can be identified by calibration with remotely sensed soil moisture? (2) Does calibration with remotely sensed soil moisture lead to an improved calibration of hydrological models compared to calibration based only on discharge observations, such that this leads to improved simulations of soil moisture content and discharge? A dual state and parameter Ensemble Kalman Filter is used to calibrate the hydrological model LISFLOOD for the Upper Danube. Calibration is done using discharge and remotely sensed soil moisture acquired by AMSR-E, SMOS, and ASCAT. Calibration with discharge data improves the estimation of groundwater and routing parameters. Calibration with only remotely sensed soil moisture results in an accurate identification of parameters related to land-surface processes. For the Upper Danube upstream area up to 40,000 km2, calibration on both discharge and soil moisture results in a reduction by 10-30% in the RMSE for discharge simulations, compared to calibration on discharge alone. The conclusion is that remotely sensed soil moisture holds potential for calibration of hydrological models, leading to a better simulation of soil moisture content throughout the catchment and a better simulation of discharge in upstream areas. This article was corrected on 15 SEP 2014. See the end of the full text for details.
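The dual state and parameter Ensemble Kalman Filter used in this study is considerably more involved than can be shown briefly, but its core analysis step, nudging an ensemble toward an observation in proportion to the ensemble spread, can be sketched for a single scalar state. All numbers below are illustrative, not taken from the LISFLOOD study:

```python
import random

def enkf_update(ensemble, obs, obs_var):
    """One perturbed-observation EnKF analysis step for a scalar state
    observed directly (observation operator H = 1)."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var / (var + obs_var)                 # scalar Kalman gain
    # Each member is pulled toward its own perturbed copy of the observation.
    return [x + gain * (obs + random.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]

random.seed(0)
prior = [random.gauss(0.2, 0.3) for _ in range(500)]    # modeled soil moisture
posterior = enkf_update(prior, obs=0.35, obs_var=0.01)  # satellite retrieval
```

The gain weighs model spread against observation error: a tight, confident ensemble is barely moved by a noisy retrieval, while a dispersed one is pulled strongly toward it, which is how satellite soil moisture can constrain otherwise uncalibrated land-surface states.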
Imprint of non-linear effects on HI intensity mapping on large scales
Energy Technology Data Exchange (ETDEWEB)
Umeh, Obinna, E-mail: umeobinna@gmail.com [Department of Physics and Astronomy, University of the Western Cape, Cape Town 7535 (South Africa)
2017-06-01
Intensity mapping of the HI brightness temperature provides a unique way of tracing large-scale structures of the Universe up to the largest possible scales. This is achieved by using low angular resolution radio telescopes to detect the emission line from cosmic neutral hydrogen in the post-reionization Universe. We use general relativistic perturbation theory techniques to derive for the first time the full expression for the HI brightness temperature up to third order in perturbation theory without making any plane-parallel approximation. We use this result and the renormalization prescription for biased tracers to study the impact of nonlinear effects on the power spectrum of HI brightness temperature both in real and redshift space. We show how mode coupling at nonlinear order due to nonlinear bias parameters and redshift space distortion terms modulates the power spectrum on large scales. The large scale modulation may be understood to be due to the effective bias parameter and effective shot noise.
A Local Scalable Distributed Expectation Maximization Algorithm for Large Peer-to-Peer Networks
National Aeronautics and Space Administration — This paper describes a local and distributed expectation maximization algorithm for learning parameters of Gaussian mixture models (GMM) in large peer-to-peer (P2P)...
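The paper's algorithm is local and distributed over a P2P network; as a point of reference, the centralized version it builds on, EM for a Gaussian mixture, can be sketched for a two-component 1-D mixture. This is the textbook algorithm, not the paper's distributed variant:

```python
import math, random

def em_gmm_1d(data, iters=50):
    """EM for a two-component 1-D Gaussian mixture (centralized version)."""
    xs = sorted(data)
    mu = [xs[len(xs) // 4], xs[3 * len(xs) // 4]]  # quantile initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in data:
            w = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = sum(w)
            resp.append([wk / s for wk in w])
        # M-step: re-estimate weights, means, and variances.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk
            pi[k] = nk / len(data)
    return mu, var, pi

random.seed(0)
data = ([random.gauss(0.0, 1.0) for _ in range(200)]
        + [random.gauss(5.0, 1.0) for _ in range(200)])
mu, var, pi = em_gmm_1d(data)
```

The distributed challenge the paper addresses is that the M-step sums above are global; in a P2P setting each peer holds only part of `data`, so those sums must be approximated by local aggregation among neighbors.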
Kijvikai, Kittinut; Laguna, M. Pilar; de la Rosette, Jean
2006-01-01
We describe our technique for large renal vein control in the limited dissected space during laparoscopic nephrectomy. This technique is a simple, inexpensive, and reliable method, especially for large and short renal vein ligation.
Tool Support for Parametric Analysis of Large Software Simulation Systems
Schumann, Johann; Gundy-Burlet, Karen; Pasareanu, Corina; Menzies, Tim; Barrett, Tony
2008-01-01
The analysis of large and complex parameterized software systems, e.g., systems simulation in aerospace, is very complicated and time-consuming due to the large parameter space, and the complex, highly coupled nonlinear nature of the different system components. Thus, such systems are generally validated only in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. We have addressed the factors deterring such an analysis with a tool to support envelope assessment: we utilize a combination of advanced Monte Carlo generation with n-factor combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. Additional test-cases, automatically generated from models (e.g., UML, Simulink, Stateflow) improve the coverage. The distributed test runs of the software system produce vast amounts of data, making manual analysis impossible. Our tool automatically analyzes the generated data through a combination of unsupervised Bayesian clustering techniques (AutoBayes) and supervised learning of critical parameter ranges using the treatment learner TAR3. The tool has been developed around the Trick simulation environment, which is widely used within NASA. We will present this tool with a GN&C (Guidance, Navigation and Control) simulation of a small satellite system.
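The n-factor combinatorial idea, covering all interactions among every n parameters with far fewer runs than a full grid, can be sketched for n = 2 with a simple greedy heuristic. The parameter names below are invented for illustration; the actual tool combines such variation with Monte Carlo generation and model-derived test cases:

```python
from itertools import combinations, product

def pairwise_cases(params):
    """Greedily build a 2-factor (pairwise) covering set of test cases:
    every value pair of every two parameters occurs in at least one case."""
    names = sorted(params)
    uncovered = {frozenset([(a, va), (b, vb)])
                 for a, b in combinations(names, 2)
                 for va in params[a] for vb in params[b]}
    all_cases = [dict(zip(names, vs))
                 for vs in product(*(params[n] for n in names))]
    chosen = []
    while uncovered:
        # Pick the candidate covering the most still-uncovered pairs.
        best = max(all_cases, key=lambda c: sum(
            frozenset([(a, c[a]), (b, c[b])]) in uncovered
            for a, b in combinations(names, 2)))
        chosen.append(best)
        for a, b in combinations(names, 2):
            uncovered.discard(frozenset([(a, best[a]), (b, best[b])]))
    return chosen

# Hypothetical simulation parameters, far smaller than a real GN&C model.
params = {"mode": [0, 1], "gain": [0.1, 0.9], "sensor": ["imu", "star", "gps"]}
cases = pairwise_cases(params)
```

For realistic parameter counts the saving is dramatic: pairwise coverage grows roughly with the product of the two largest value sets, while the full grid grows with the product of all of them.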
Drummond, Alexei J; Nicholls, Geoff K; Rodrigo, Allen G; Solomon, Wiremu
2002-07-01
Molecular sequences obtained at different sampling times from populations of rapidly evolving pathogens and from ancient subfossil and fossil sources are increasingly available with modern sequencing technology. Here, we present a Bayesian statistical inference approach to the joint estimation of mutation rate and population size that incorporates the uncertainty in the genealogy of such temporally spaced sequences by using Markov chain Monte Carlo (MCMC) integration. The Kingman coalescent model is used to describe the time structure of the ancestral tree. We recover information about the unknown true ancestral coalescent tree, population size, and the overall mutation rate from temporally spaced data, that is, from nucleotide sequences gathered at different times, from different individuals, in an evolving haploid population. We briefly discuss the methodological implications and show what can be inferred, in various practically relevant states of prior knowledge. We develop extensions for exponentially growing population size and joint estimation of substitution model parameters. We illustrate some of the important features of this approach on a genealogy of HIV-1 envelope (env) partial sequences.
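The paper's MCMC integrates over genealogies under the coalescent; that machinery aside, its basic Metropolis ingredient, a random-walk sampler for a single rate parameter, can be sketched on simulated exponential waiting times. This is a toy stand-in with an invented setup, not the coalescent likelihood:

```python
import math, random

def metropolis_rate(data, steps=4000, step_size=0.3, seed=0):
    """Random-walk Metropolis sampler for the rate of exponentially
    distributed waiting times (flat prior on the log-rate)."""
    rng = random.Random(seed)
    total, n = sum(data), len(data)

    def log_post(log_r):          # log-posterior in terms of log(rate)
        return n * log_r - math.exp(log_r) * total

    log_r = 0.0                   # start at rate = 1
    samples = []
    for _ in range(steps):
        prop = log_r + rng.gauss(0.0, step_size)
        accept_prob = math.exp(min(0.0, log_post(prop) - log_post(log_r)))
        if rng.random() < accept_prob:
            log_r = prop          # accept the proposal
        samples.append(math.exp(log_r))
    return samples[steps // 2:]   # discard burn-in

rng = random.Random(42)
data = [rng.expovariate(2.0) for _ in range(300)]   # true rate = 2.0
posterior = metropolis_rate(data)
```

The retained samples approximate the posterior of the rate, so credible intervals come directly from their quantiles; the paper's sampler applies the same accept/reject logic jointly to mutation rate, population size, and tree topology.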
Agnew, Donald L.; Vinkey, Victor F.; Runge, Fritz C.
1989-01-01
A study was conducted to determine how the Large Deployable Reflector (LDR) might benefit from the use of the space station for assembly, checkout, deployment, servicing, refurbishment, and technology development. Requirements that must be met by the space station to supply benefits for a selected scenario are summarized. Quantitative and qualitative data are supplied. Space station requirements for LDR which may be utilized by other missions are identified. A technology development mission for LDR is outlined and requirements summarized. A preliminary experiment plan is included. Space Station Data Base SAA 0020 and TDM 2411 are updated.
Agnew, Donald L.; Vinkey, Victor F.; Runge, Fritz C.
1989-04-01
A study was conducted to determine how the Large Deployable Reflector (LDR) might benefit from the use of the space station for assembly, checkout, deployment, servicing, refurbishment, and technology development. Requirements that must be met by the space station to supply benefits for a selected scenario are summarized. Quantitative and qualitative data are supplied. Space station requirements for LDR which may be utilized by other missions are identified. A technology development mission for LDR is outlined and requirements summarized. A preliminary experiment plan is included. Space Station Data Base SAA 0020 and TDM 2411 are updated.
pypet: A Python Toolkit for Data Management of Parameter Explorations
Directory of Open Access Journals (Sweden)
Robert Meyer
2016-08-01
Full Text Available pypet (Python parameter exploration toolkit) is a new multi-platform Python toolkit for managing numerical simulations. Sampling the space of model parameters is a key aspect of simulations and numerical experiments. pypet is designed to allow easy and arbitrary sampling of trajectories through a parameter space beyond simple grid searches. pypet collects and stores both simulation parameters and results in a single HDF5 file. This collective storage allows fast and convenient loading of data for further analyses. pypet provides various additional features such as multiprocessing and parallelization of simulations, dynamic loading of data, integration of git version control, and supervision of experiments via the electronic lab notebook Sumatra. pypet supports a rich set of data formats, including native Python types, Numpy and Scipy data, Pandas DataFrames, and BRIAN(2) quantities. Besides these formats, users can easily extend the toolkit to allow customized data types. pypet is a flexible tool suited for both short Python scripts and large scale projects. pypet's various features, especially the tight link between parameters and results, promote reproducible research in computational neuroscience and simulation-based disciplines.
Hopkins, Randall C.; Capizzo, Peter; Fincher, Sharon; Hornsby, Linda S.; Jones, David
2010-01-01
The Advanced Concepts Office at Marshall Space Flight Center completed a brief spacecraft design study for the 8-meter monolithic Advanced Technology Large Aperture Space Telescope (ATLAST-8m). This spacecraft concept provides all power, communication, telemetry, avionics, guidance and control, and thermal control for the observatory, and inserts the observatory into a halo orbit about the second Sun-Earth Lagrange point. The multidisciplinary design team created a simple spacecraft design that enables component and science instrument servicing, employs articulating solar panels to help with momentum management, and provides precise pointing control while also enabling fast slewing of the observatory.
Probing the parameter space of HD 49933: A comparison between global and local methods
Energy Technology Data Exchange (ETDEWEB)
Creevey, O L [Instituto de Astrofisica de Canarias (IAC), E-38200 La Laguna, Tenerife (Spain); Bazot, M, E-mail: orlagh@iac.es, E-mail: bazot@astro.up.pt [Centro de Astrofisica da Universidade do Porto, Rua das Estrelas, 4150-762 Porto (Portugal)
2011-01-01
We present two independent methods for studying the global stellar parameter space (mass M, age, chemical composition X{sub 0}, Z{sub 0}) of HD 49933 with seismic data. Using a local minimization and an MCMC algorithm, we obtain consistent results for the determination of the stellar properties: M {approx} 1.1-1.2 M{sub sun}, Age {approx} 3.0 Gyr, Z{sub 0} {approx} 0.008. A description of the error ellipses can be defined using Singular Value Decomposition techniques, and this is validated by comparing the errors with those from the MCMC method.
Large-signal analysis of DC motor drive system using state-space averaging technique
International Nuclear Information System (INIS)
Bekir Yildiz, Ali
2008-01-01
The analysis of a separately excited DC motor driven by a DC-DC converter is realized by using the state-space averaging technique. Firstly, a general and unified large-signal averaged circuit model for DC-DC converters is given. The method converts power electronic systems, which are periodically time-variant because of their switching operation, into unified, time-independent systems. Using the averaged circuit model enables us to combine the different topologies of converters. Thus, all analysis and design processes for the DC motor can be easily realized by using the unified averaged model, which is valid over the whole switching period. Some large-signal variations such as the speed and current of the DC motor, steady-state analysis, and large-signal and small-signal transfer functions are easily obtained by using the averaged circuit model.
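The state-space averaging step, replacing the switched topology by a duty-cycle-weighted average of the topologies' state equations, can be illustrated on an ideal buck converter, a simpler plant than the abstract's full motor drive. Component values below are arbitrary illustrations:

```python
def averaged_buck(duty, v_in, L=1e-3, C=1e-4, R=10.0, dt=1e-6, t_end=0.05):
    """Integrate the state-space averaged model of an ideal buck converter:
        dI/dt = (d*Vin - V) / L,    dV/dt = (I - V/R) / C
    The periodic switching is replaced by its duty-cycle-weighted average,
    giving a time-independent model valid over the whole switching period."""
    i_l, v_c = 0.0, 0.0            # inductor current, capacitor voltage
    for _ in range(int(t_end / dt)):
        di = (duty * v_in - v_c) / L
        dv = (i_l - v_c / R) / C
        i_l += di * dt             # forward Euler step
        v_c += dv * dt
    return v_c

v_out = averaged_buck(duty=0.5, v_in=12.0)   # ideal CCM: settles near d*Vin
```

Because the averaged model is time-independent, steady-state values and small-signal transfer functions follow from ordinary linear analysis instead of cycle-by-cycle switching simulation, which is the convenience the abstract describes.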
Osculating Spaces of Varieties and Linear Network Codes
DEFF Research Database (Denmark)
Hansen, Johan P.
2013-01-01
We present a general theory to obtain good linear network codes utilizing the osculating nature of algebraic varieties. In particular, we obtain from the osculating spaces of Veronese varieties explicit families of equidimensional vector spaces, in which any pair of distinct vector spaces...... intersects in the same dimension. Linear network coding transmits information in terms of a basis of a vector space and the information is received as a basis of a possibly altered vector space. Ralf Koetter and Frank R. Kschischang introduced a metric on the set of vector spaces and showed that a minimal...... distance decoder for this metric achieves correct decoding if the dimension of the intersection of the transmitted and received vector space is sufficiently large. The obtained osculating spaces of Veronese varieties are equidistant in the above metric. The parameters of the resulting linear network codes......
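The Koetter-Kschischang metric mentioned above, d(U, V) = dim U + dim V - 2 dim(U ∩ V), is computable from ranks alone, since dim(U ∩ V) = dim U + dim V - dim(U + V). The sketch below works over GF(2) with subspace generators encoded as integer bit masks; it only illustrates the metric, not the Veronese construction of the paper:

```python
def rank_gf2(rows):
    """Rank over GF(2) of a matrix whose rows are bitmask integers."""
    basis = {}  # leading bit position -> reduced row
    for r in rows:
        while r:
            lead = r.bit_length() - 1
            if lead in basis:
                r ^= basis[lead]      # eliminate the leading bit
            else:
                basis[lead] = r
                break
    return len(basis)

def subspace_distance(u_rows, v_rows):
    """Koetter-Kschischang metric d(U, V) = dim U + dim V - 2 dim(U ∩ V),
    computed via dim(U ∩ V) = dim U + dim V - dim(U + V)."""
    du, dv = rank_gf2(u_rows), rank_gf2(v_rows)
    d_sum = rank_gf2(list(u_rows) + list(v_rows))  # generators of U + V
    return 2 * d_sum - du - dv

# e.g. span{1000, 0100} vs span{1000, 0010}: one shared dimension, distance 2.
d = subspace_distance([0b1000, 0b0100], [0b1000, 0b0010])
```

A minimum-distance decoder in this metric succeeds exactly when the transmitted and received spaces overlap in enough dimensions, which is the condition quoted in the abstract.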
Osculating Spaces of Varieties and Linear Network Codes
DEFF Research Database (Denmark)
Hansen, Johan P.
We present a general theory to obtain good linear network codes utilizing the osculating nature of algebraic varieties. In particular, we obtain from the osculating spaces of Veronese varieties explicit families of equidimensional vector spaces, in which any pair of distinct vector spaces...... intersects in the same dimension. Linear network coding transmits information in terms of a basis of a vector space and the information is received as a basis of a possibly altered vector space. Ralf Koetter and Frank R. Kschischang introduced a metric on the set of vector spaces and showed that a minimal...... distance decoder for this metric achieves correct decoding if the dimension of the intersection of the transmitted and received vector space is sufficiently large. The obtained osculating spaces of Veronese varieties are equidistant in the above metric. The parameters of the resulting linear network codes......
Space use of African wild dogs in relation to other large carnivores.
Directory of Open Access Journals (Sweden)
Angela M Darnell
Full Text Available Interaction among species through competition is a principal process structuring ecological communities, affecting behavior, distribution, and ultimately the population dynamics of species. High competition among large African carnivores, associated with extensive diet overlap, manifests in interactions between subordinate African wild dogs (Lycaon pictus and dominant lions (Panthera leo and spotted hyenas (Crocuta crocuta. Using locations of large carnivores in Hluhluwe-iMfolozi Park, South Africa, we found different responses from wild dogs to their two main competitors. Wild dogs avoided lions, particularly during denning, through a combination of spatial and temporal avoidance. However, wild dogs did not exhibit spatial or temporal avoidance of spotted hyenas, likely because wild dog pack sizes were large enough to adequately defend their kills. Understanding that larger carnivores affect the movements and space use of other carnivores is important for managing current small and fragmented carnivore populations, especially as reintroductions and translocations are essential tools used for the survival of endangered species, as with African wild dogs.
Patrick, Brian; Moore, James; Hackenberger, Wesley; Jiang, Xiaoning
2013-01-01
A lightweight, cryogenically capable, scalable, deformable mirror has been developed for space telescopes. This innovation makes use of polymer-based membrane mirror technology to enable large-aperture mirrors that can be easily launched and deployed. The key component of this innovation is a lightweight, large-stroke, cryogenic actuator array that combines the high degree of mirror figure control needed with a large actuator influence function. The latter aspect of the innovation allows membrane mirror figure correction with a relatively low actuator density, preserving the lightweight attributes of the system. The principal components of this technology are lightweight, low-profile, high-stroke, cryogenic-capable piezoelectric actuators based on PMN-PT (piezoelectric lead magnesium niobate-lead titanate) single-crystal configured in a flextensional actuator format; high-quality, low-thermal-expansion polymer membrane mirror materials developed by NeXolve; and electrostatic coupling between the membrane mirror and the piezoelectric actuator assembly to minimize problems such as actuator print-through.
Laboratory simulation of space plasma phenomena*
Amatucci, B.; Tejero, E. M.; Ganguli, G.; Blackwell, D.; Enloe, C. L.; Gillman, E.; Walker, D.; Gatling, G.
2017-12-01
Laboratory devices, such as the Naval Research Laboratory's Space Physics Simulation Chamber, are large-scale experiments dedicated to the creation of large-volume plasmas with parameters realistically scaled to those found in various regions of the near-Earth space plasma environment. Such devices make valuable contributions to the understanding of space plasmas by investigating phenomena under carefully controlled, reproducible conditions, allowing for the validation of theoretical models being applied to space data. By working in collaboration with in situ experimentalists to create realistic conditions scaled to those found during the observations of interest, the microphysics responsible for the observed events can be investigated in detail not possible in space. To date, numerous investigations of phenomena such as plasma waves, wave-particle interactions, and particle energization have been successfully performed in the laboratory. In addition to investigations such as plasma wave and instability studies, the laboratory devices can also make valuable contributions to the development and testing of space plasma diagnostics. One example is the plasma impedance probe developed at NRL. Originally developed as a laboratory diagnostic, the sensor has now been flown on a sounding rocket, is included on a CubeSat experiment, and will be included on the DoD Space Test Program's STP-H6 experiment on the International Space Station. In this presentation, we will describe several examples of the laboratory investigation of space plasma waves and instabilities and diagnostic development. *This work supported by the NRL Base Program.
Directory of Open Access Journals (Sweden)
Shujing Su
2015-01-01
Full Text Available For applications such as large factories and storehouses, in which the measured parameters are widely dispersed, a distributed parameter measurement system based on a ring network is designed. The structure of the system and the circuit design of the master and slave nodes are described briefly. The basic protocol architecture for transmission communication is introduced, and then two kinds of distributed transmission control methods are proposed. Finally, the reliability, extensibility, and control characteristics of these two methods are tested through a series of experiments. Moreover, the measurement results are compared and discussed.
Evaluation of linear DC motor actuators for control of large space structures
Ide, Eric Nelson
1988-01-01
This thesis examines the use of a linear DC motor as a proof mass actuator for the control of large space structures. A model for the actuator, including the current and force compensation used, is derived. Because of the force compensation, the actuator is unstable when placed on a structure. Relative position feedback is used for actuator stabilization. This method of compensation couples the actuator to the mast in a feedback configuration. Three compensator designs are prop...
Alberts, Samantha J.
The investigation of microgravity fluid dynamics emerged out of necessity with the advent of space exploration. In particular, capillary research took a leap forward in the 1960s with regard to liquid settling and interfacial dynamics. Due to inherent temperature variations in large spacecraft liquid systems, such as fuel tanks, forces develop on gas-liquid interfaces which induce thermocapillary flows. To date, thermocapillary flows have been studied in small, idealized research geometries, usually under terrestrial conditions. The 1 to 3 m lengths of current and future large tanks and hardware are designed based on hardware rather than research, which leaves spaceflight systems designers without the technological tools to create safe and efficient designs. This thesis focused on the design and feasibility of a large length-scale thermocapillary flow experiment, which utilizes temperature variations to drive a flow. The design of a helical channel geometry ranging from 1 to 2.5 m in length permits a large length-scale thermocapillary flow experiment to fit in a seemingly small International Space Station (ISS) facility such as the Fluids Integrated Rack (FIR). An initial investigation determined that the proposed experiment produces measurable data while adhering to the FIR facility limitations. The computational portion of this thesis focused on the investigation of functional geometries of fuel tanks and depots using Surface Evolver. This work outlines the design of a large length-scale thermocapillary flow experiment for the ISS FIR. The results from this work improve the understanding of thermocapillary flows and thus the technological tools for predicting heat and mass transfer in large length-scale thermocapillary flows. Without the tools to understand thermocapillary flows in these systems, engineers are forced to design larger, heavier vehicles to assure safety and mission success.
Moduli space of Calabi-Yau manifolds
International Nuclear Information System (INIS)
Candelas, P.; De la Ossa, X.C.
1991-01-01
We present an accessible account of the local geometry of the parameter space of Calabi-Yau manifolds. It is shown that the parameter space decomposes, at least locally, into a product with the space of parameters of the complex structure as one factor and a complex extension of the parameter space of the Kaehler class as the other. It is also shown that each of these spaces is itself a Kaehler manifold and is moreover a Kaehler manifold of restricted type. There is a remarkable symmetry in the intrinsic structures of the two parameter spaces and the relevance of this to the conjectured existence of mirror manifolds is discussed. The two parameter spaces behave differently with respect to modular transformations and it is argued that the role of quantum corrections is to restore the symmetry between the two types of parameters so as to enforce modular invariance. (orig.)
Water quality modeling requires across-scale support of combined digital soil elements and simulation parameters. This paper presents the unprecedented development of a large spatial scale (1:250,000) ArcGIS geodatabase coverage designed as a functional repository of soil-parameters for modeling an...
Structural-electromagnetic bidirectional coupling analysis of space large film reflector antennas
Zhang, Xinghua; Zhang, Shuxin; Cheng, ZhengAi; Duan, Baoyan; Yang, Chen; Li, Meng; Hou, Xinbin; Li, Xun
2017-10-01
As used for energy transmission, a space large film reflector antenna (SLFRA) is characterized by large size and enduring high power density. The structural flexibility and the microwave radiation pressure (MRP) lead to the phenomenon of structural-electromagnetic bidirectional coupling (SEBC). In this paper, the SEBC model of the SLFRA is presented, then the deformation induced by the MRP and the corresponding far-field pattern deterioration are simulated. Results show that the direction of the MRP is along the normal of the reflector surface, and its magnitude is proportional to the power density and to the square of the cosine of the incident angle. For a typical cosine-distributed electric field, the MRP has a cosine-squared distribution across the diameter. The maximum deflections of the SLFRA increase linearly with the microwave power density and with the square of the reflector diameter, and vary inversely with the film thickness. When the reflector diameter reaches 100 m and the microwave power density exceeds 10² W/cm², the gain loss of the 6.3 μm-thick reflector exceeds 0.75 dB. When the MRP-induced deflection degrades the reflector performance, the SEBC should be taken into account.
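The stated scaling (pressure along the surface normal, proportional to the power density and to the cosine-squared incidence angle) is the standard radiation-pressure result for a perfectly reflecting surface, p = 2 S cos²θ / c. A minimal numerical check, with an illustrative power density:

```python
import math

def microwave_radiation_pressure(power_density_w_m2, incidence_rad):
    """Radiation pressure on a perfect reflector: p = 2 S cos^2(theta) / c."""
    c = 299792458.0  # speed of light, m/s
    return 2.0 * power_density_w_m2 * math.cos(incidence_rad) ** 2 / c

# 10^2 W/cm^2 = 10^6 W/m^2, at normal incidence:
p = microwave_radiation_pressure(1.0e6, 0.0)
print("%.2e Pa" % p)  # a few millipascals, small but significant on a thin film
```

Even at 10² W/cm² the pressure is only millipascals, which is why the coupling matters only for very thin, very large reflectors.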
Directory of Open Access Journals (Sweden)
Mika Tanda
2015-01-01
Full Text Available We compute alien derivatives of the WKB solutions of the Gauss hypergeometric differential equation with a large parameter and discuss the singularity structures of the Borel transforms of the WKB solution expressed in terms of its alien derivatives.
Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines
Kurç, Tahsin M.; Taveira, Luís F. R.; Melo, Alba C. M. A.; Gao, Yi; Kong, Jun; Saltz, Joel H.
2017-01-01
Motivation: Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. Results: The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Conclusions: Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Availability and Implementation: Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28062445
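The search strategy described (sampling a small fraction of a huge parameter space and scoring each candidate segmentation with the Dice metric) can be sketched on a toy thresholding pipeline; the data and the single `thr` parameter here are purely illustrative, not the paper's workflows:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

rng = np.random.default_rng(0)
image = rng.normal(0.0, 1.0, (64, 64))
image[16:48, 16:48] += 3.0           # bright square = "object"
truth = np.zeros((64, 64), bool)
truth[16:48, 16:48] = True

# Randomly sample ~100 points of the parameter space and keep the
# best-scoring setting, mirroring the auto-tuning idea above.
best = (-1.0, 0.0)
for _ in range(100):
    thr = rng.uniform(0.0, 3.0)
    score = dice(image > thr, truth)
    if score > best[0]:
        best = (score, thr)
print("best Dice %.3f at threshold %.2f" % best)
```

In the real framework the score would come from running the full segmentation workflow, and non-influential parameters would be pruned before sampling.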
Determining global parameters of the oscillations of solar-like stars
DEFF Research Database (Denmark)
Mathur, S.; García, R. A.; Régulo, C.
2010-01-01
Context. Helioseismology has enabled us to better understand the solar interior, while also allowing us to better constrain solar models. But now is a tremendous epoch for asteroseismology, as space missions dedicated to studying stellar oscillations have been launched in recent years (MOST....... Aims. The goal of this research work is to estimate the global parameters of any solar-like oscillating target in an automatic manner. We want to determine the global parameters of the acoustic modes (large separation, range of excited pressure modes, maximum amplitude, and its corresponding frequency...
Concept for a power system controller for large space electrical power systems
Lollar, L. F.; Lanier, J. R., Jr.; Graves, J. R.
1981-01-01
The development of technology for a fail-operational power system controller (PSC) utilizing microprocessor technology for managing the distribution and power processor subsystems of a large multi-kW space electrical power system is discussed. The specific functions which must be performed by the PSC, the best microprocessor available to do the job, and the feasibility, cost savings, and applications of a PSC were determined. A limited-function breadboard version of a PSC was developed to demonstrate the concept and potential cost savings.
The Design Space of the Embryonic Cell Cycle Oscillator.
Mattingly, Henry H; Sheintuch, Moshe; Shvartsman, Stanislav Y
2017-08-08
One of the main tasks in the analysis of models of biomolecular networks is to characterize the domain of the parameter space that corresponds to a specific behavior. Given the large number of parameters in most models, this is no trivial task. We use a model of the embryonic cell cycle to illustrate the approaches that can be used to characterize the domain of parameter space corresponding to limit cycle oscillations, a regime that coordinates periodic entry into and exit from mitosis. Our approach relies on geometric construction of bifurcation sets, numerical continuation, and random sampling of parameters. We delineate the multidimensional oscillatory domain and use it to quantify the robustness of periodic trajectories. Although some of our techniques explore the specific features of the chosen system, the general approach can be extended to other models of the cell cycle engine and other biomolecular networks. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
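The random-sampling approach to delineating a behavioral domain of parameter space can be illustrated on a deliberately simple toy system (not the cell cycle model itself): for x'' + b x' + k x = 0, the eigenvalues are complex, giving oscillatory solutions, exactly when b² < 4k, and random sampling recovers the measure of that region:

```python
import numpy as np

# Classify random parameter draws as oscillatory / non-oscillatory and
# estimate the fraction of the sampled box lying in the oscillatory domain.
rng = np.random.default_rng(1)
b = rng.uniform(0.0, 2.0, 5000)   # damping
k = rng.uniform(0.0, 2.0, 5000)   # stiffness
oscillatory = b**2 - 4.0 * k < 0.0   # complex eigenvalues of the linear system
frac = oscillatory.mean()
print("oscillatory fraction of sampled domain: %.2f" % frac)
```

For nonlinear cell-cycle models the per-sample classifier would instead be a numerical continuation or simulation test for limit cycles, but the sampling logic is the same.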
The topological susceptibility in the large-N limit of SU(N) Yang-Mills theory
Energy Technology Data Exchange (ETDEWEB)
Ce, Marco [Scuola Normale Superiore, Pisa (Italy); Istituto Nazionale di Fisica Nucleare, Pisa (Italy); Garcia Vera, Miguel [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Giusti, Leonardo [Milano-Bicocca Univ. (Italy); INFN, Milano (Italy); Schaefer, Stefan [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC
2016-07-15
We compute the topological susceptibility of the SU(N) Yang-Mills theory in the large-N limit with percent-level accuracy. This is achieved by measuring the gradient-flow definition of the susceptibility at three values of the lattice spacing for N=3,4,5,6. Thanks to this coverage of parameter space, we can extrapolate the results to the large-N and continuum limits with confidence. Open boundary conditions are instrumental in making simulations feasible on the finer lattices at larger N.
Can we close large prosthetic space with orthodontics?
Mesko, Mauro Elias; Skupien, Jovito Adiel; Valentini, Fernanda; Pereira-Cenci, Tatiana
2013-01-01
For years, the treatment for the replacement of a missing tooth was a fixed dental prosthesis. Currently, implants are indicated to replace missing teeth due to high clinical success, and with the advantage of not requiring preparation of the adjacent teeth. Another option for space closure is orthodontics combined with miniscrews for anchorage, allowing better control of the orthodontic biomechanics and, in particular, making it possible to close larger prosthetic spaces. Thus, this article describes two cases with indications and discussion of the advantages and disadvantages of using orthodontics for prosthetic space closure. The cases presented here show that it is possible to close a space when there are available teeth in the adjacent area. It can be concluded that when a malocclusion is present, there is a strong case for space closure by orthodontic movement, as it preserves natural teeth and seems a more physiological approach.
Marinescu, O.; Bociort, F.; Braat, J.
2004-01-01
When Extreme Ultraviolet mirror systems having several high-order aspheric surfaces are optimized, the configurations often enter highly unstable regions of the parameter space. Small changes of system parameters then lead to large changes in ray paths, and therefore optimization algorithms
Exploitation of ISAR Imagery in Euler Parameter Space
National Research Council Canada - National Science Library
Baird, Christopher; Kersey, W. T; Giles, R; Nixon, W. E
2005-01-01
.... The Euler parameters have potential value in target classification but have historically met with limited success due to ambiguities that arise in decomposition as well as the parameters' sensitivity...
Braak, ter C.J.F.
2006-01-01
Differential Evolution (DE) is a simple genetic algorithm for numerical optimization in real parameter spaces. In a statistical context one would not just want the optimum but also its uncertainty. The uncertainty distribution can be obtained by a Bayesian analysis (after specifying prior and
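The abstract above builds on standard Differential Evolution; a minimal sketch of the classic DE/rand/1/bin optimizer (not the Bayesian DE-MC sampler the paper develops, which is only partially quoted here) minimizing a 2-D quadratic might look like:

```python
import numpy as np

def de_minimize(f, bounds, pop=20, gens=100, F=0.8, CR=0.9, seed=2):
    """Classic DE/rand/1/bin: mutate with scaled difference vectors,
    binomial crossover, greedy selection."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, float).T
    x = lo + rng.random((pop, dim)) * (hi - lo)
    fx = np.array([f(v) for v in x])
    for _ in range(gens):
        for i in range(pop):
            others = [j for j in range(pop) if j != i]
            a, b, c = x[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True   # at least one coord from mutant
            trial = np.where(cross, mutant, x[i])
            ft = f(trial)
            if ft <= fx[i]:                   # greedy selection
                x[i], fx[i] = trial, ft
    best = np.argmin(fx)
    return x[best], fx[best]

xbest, fbest = de_minimize(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2,
                           [(-5, 5), (-5, 5)])
print(xbest, fbest)
```

In the statistical setting the greedy selection is replaced by a Metropolis acceptance rule, which turns the population update into an MCMC sampler of the posterior rather than a pure optimizer.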
Design and analysis of throttle orifice applying to small space with large pressure drop
International Nuclear Information System (INIS)
Li Yan; Lu Daogang; Zeng Xiaokang
2013-01-01
Throttle orifices are widely used in various pipe systems of nuclear power plants. Improper placement of orifices can aggravate pipe vibration and noise, damaging the structure of the pipe and the integrity of the system. In this paper, the effects of orifice diameter, thickness, eccentricity, and chamfering on the throttling are analyzed with CFD software. Based on these results, we propose multi-stage eccentric orifices for applications requiring a large pressure drop in a small space. The results show that multi-stage eccentric orifices can effectively suppress cavitation and flashing while generating a large pressure drop. (authors)
Litvinenko, Alexander
2017-09-26
The main goal of this article is to introduce the parallel hierarchical matrix library HLIBpro to the statistical community. We describe the HLIBCov package, which is an extension of the HLIBpro library for approximating large covariance matrices and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size $2M \times 2M$ can be computed on a modern multi-core desktop in a few minutes. Further, HLIBCov is used for estimating the unknown parameters such as the covariance length, variance and smoothness parameter of a Matérn covariance function by maximizing the joint Gaussian log-likelihood function. The computational bottleneck here is expensive linear algebra operations due to large and dense covariance matrices. Therefore covariance matrices are approximated in the hierarchical ($\mathcal{H}$-) matrix format with computational cost $\mathcal{O}(k^2 n \log^2 n / p)$ and storage $\mathcal{O}(k n \log n)$, where the rank $k$ is a small integer (typically $k<25$), $p$ the number of cores and $n$ the number of locations on a fairly general mesh. We demonstrate the approach on a synthetic example where the true parameter values are known. For reproducibility we provide the C++ code, the documentation, and the synthetic data.
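A small-scale, dense analogue of the likelihood evaluation that HLIBCov accelerates with hierarchical matrices can be sketched as follows (this uses a closed-form Matérn ν = 3/2 covariance and a plain Cholesky factorization, not the HLIBpro API):

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import cho_factor, cho_solve

def matern_32(dists, length, variance):
    """Matérn covariance with smoothness nu = 3/2 (closed form)."""
    r = np.sqrt(3.0) * dists / length
    return variance * (1.0 + r) * np.exp(-r)

def gaussian_loglik(params, pts, z):
    """Joint Gaussian log-likelihood via dense Cholesky -- the step that
    H-matrix approximation makes feasible for large n."""
    length, variance = params
    C = matern_32(cdist(pts, pts), length, variance)
    C[np.diag_indices_from(C)] += 1e-8          # jitter for stability
    cf = cho_factor(C, lower=True)
    logdet = 2.0 * np.log(np.diag(cf[0])).sum()
    quad = z @ cho_solve(cf, z)
    return -0.5 * (logdet + quad + len(z) * np.log(2.0 * np.pi))

rng = np.random.default_rng(3)
pts = rng.random((200, 2))
L_true = np.linalg.cholesky(matern_32(cdist(pts, pts), 0.3, 1.0)
                            + 1e-8 * np.eye(200))
z = L_true @ rng.standard_normal(200)  # synthetic field with known parameters
# The likelihood should prefer the true length scale over a wrong one:
print(gaussian_loglik((0.3, 1.0), pts, z) > gaussian_loglik((0.05, 1.0), pts, z))
```

Maximizing this function over (length, variance) recovers the parameters; the dense O(n³) Cholesky is exactly what the $\mathcal{H}$-matrix format reduces to near-linear cost.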
Jiao, C. F.; Engel, J.; Holt, J. D.
2017-11-01
We use the generator-coordinate method (GCM) with realistic shell-model interactions to closely approximate full shell-model calculations of the matrix elements for the neutrinoless double-β decay of 48Ca, 76Ge, and 82Se. We work in one major shell for the first isotope, in the f5/2 p g9/2 space for the second and third, and finally in two major shells for all three. Our coordinates include not only the usual axial deformation parameter β, but also the triaxiality angle γ and neutron-proton pairing amplitudes. In the smaller model spaces our matrix elements agree well with those of full shell-model diagonalization, suggesting that our Hamiltonian-based GCM captures most of the important valence-space correlations. In two major shells, where exact diagonalization is not currently possible, our matrix elements are only slightly different from those in a single shell.
Bolcar, Matthew R.; Balasubramanian, Kunjithapatham; Clampin, Mark; Crooke, Julie; Feinberg, Lee; Postman, Marc; Quijada, Manuel; Rauscher, Bernard; Redding, David; Rioux, Norman; Shaklan, Stuart; Stahl, H. Philip; Stahle, Carl; Thronson, Harley
2015-09-01
The Advanced Technology Large Aperture Space Telescope (ATLAST) team has identified five key technologies to enable candidate architectures for the future large-aperture ultraviolet/optical/infrared (LUVOIR) space observatory envisioned by the NASA Astrophysics 30-year roadmap, Enduring Quests, Daring Visions. The science goals of ATLAST address a broad range of astrophysical questions from early galaxy and star formation to the processes that contributed to the formation of life on Earth, combining general astrophysics with direct-imaging and spectroscopy of habitable exoplanets. The key technologies are: internal coronagraphs, starshades (or external occulters), ultra-stable large-aperture telescopes, detectors, and mirror coatings. Selected technology performance goals include: 1×10⁻¹⁰ raw contrast at an inner working angle of 35 milli-arcseconds, wavefront error stability on the order of 10 pm RMS per wavefront control step, autonomous on-board sensing and control, and zero-read-noise single-photon detectors spanning the exoplanet science bandpass between 400 nm and 1.8 μm. Development of these technologies will provide significant advances over current and planned observatories in terms of sensitivity, angular resolution, stability, and high-contrast imaging. The science goals of ATLAST are presented and flowed down to top-level telescope and instrument performance requirements in the context of a reference architecture: a 10-meter-class, segmented aperture telescope operating at room temperature (~290 K) at the Sun-Earth Lagrange-2 point. For each technology area, we define best estimates of required capabilities, current state-of-the-art performance, and current Technology Readiness Level (TRL) - thus identifying the current technology gap. We report on current, planned, or recommended efforts to develop each technology to TRL 5.
QCD corrections to squark production in e+ e- annihilation in the MSSM with complex parameters
International Nuclear Information System (INIS)
Nguyen Thi Thu Huong; Ha Huy Bang; Nguyen Chinh Cuong; Dao Thi Le Thuy
2004-11-01
We discuss the pair production of scalar quarks in e+e− annihilation within the MSSM with complex parameters. We calculate the SUSY-QCD corrections to the cross section for e+e− → q̃_i (anti-q̃)_j (i, j = 1, 2) and show that the effect of the CP phases of these complex parameters on the cross section can be quite strong in a large region of the MSSM parameter space. This could have important implications for squark searches and the MSSM parameter determination in future collider experiments. (author)
Low rank approximation methods for MR fingerprinting with large scale dictionaries.
Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra
2018-04-01
This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low rank approximation of large MRF dictionaries. We further relax this requirement by exploiting the structure of MRF dictionaries in the randomized singular value decomposition space and fitting them with low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5 T and 3 T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to 1000-fold for the MRF fast imaging with steady-state precession sequence and more than 15-fold for the MRF balanced steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory efficient low rank approximation methods, which can benefit the use of MRF in clinical settings. They also have great potential in large scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
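The randomized-SVD compression idea can be sketched on a stand-in dictionary of exponential decays (illustrative only; real MRF dictionaries come from Bloch simulations of the pulse sequence):

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, seed=4):
    """Halko-style randomized SVD: sketch the range of A with a random
    projection, then do an exact SVD in the reduced space."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(A @ rng.standard_normal((A.shape[1], rank + oversample)))
    Uh, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Uh)[:, :rank], s[:rank], Vt[:rank]

# Stand-in dictionary: many signal evolutions ("fingerprints") that lie
# close to a low-dimensional subspace, as MRF dictionaries do.
rng = np.random.default_rng(5)
t = np.linspace(0, 1, 500)
dictionary = np.stack([np.exp(-t / tau) for tau in rng.uniform(0.05, 2.0, 2000)])

U, s, Vt = randomized_svd(dictionary, rank=10)
approx = U @ np.diag(s) @ Vt
rel_err = np.linalg.norm(dictionary - approx) / np.linalg.norm(dictionary)
print("relative error of rank-10 approximation: %.2e" % rel_err)
```

The memory win comes from never forming the full SVD of the dictionary: only the sketch and the small reduced factorization are held in memory.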
Reconciling Planck with the local value of H0 in extended parameter space
Directory of Open Access Journals (Sweden)
Eleonora Di Valentino
2016-10-01
Full Text Available The recent determination of the local value of the Hubble constant by Riess et al., 2016 (hereafter R16 is now 3.3 sigma higher than the value derived from the most recent CMB anisotropy data provided by the Planck satellite in a ΛCDM model. Here we perform a combined analysis of the Planck and R16 results in an extended parameter space, varying simultaneously 12 cosmological parameters instead of the usual 6. We find that a phantom-like dark energy component, with effective equation of state w = −1.29 (+0.15, −0.12) at 68% c.l. can solve the current tension between the Planck dataset and the R16 prior in an extended ΛCDM scenario. On the other hand, the neutrino effective number is fully compatible with standard expectations. This result is confirmed when including cosmic shear data from the CFHTLenS survey and CMB lensing constraints from Planck. However, when BAO measurements are included we find that some of the tension with R16 remains, as also is the case when we include the supernova type Ia luminosity distances from the JLA catalog.
On the testing fast response NPP's valves of large nominal bores and high parameters
International Nuclear Information System (INIS)
Majorov, A.P.; Ostretsov, I.N.
1990-01-01
An investigation technique is presented for valves of large nominal bore and high parameters. It is based on simulating operational and accident loadings during movement of the valve lock in bench tests, with medium flow rates 100-1000 times lower than in operation. The loadings are simulated using a lock-loading simulator. The results are essential for deciding whether serial production of such fittings is advisable without full-scale testing.
Optimization of Performance Parameters for Large Area Silicon Photomultipliers
Janzen, Kathryn
2008-10-01
The goal of the GlueX experiment is to search for exotic hybrid mesons as evidence of gluonic excitations in an effort to better understand confinement. A key component of the GlueX detector is the electromagnetic barrel calorimeter (BCAL) located immediately inside a superconducting solenoid of approximately 2.5T. Because of this arrangement, traditional vacuum photomultiplier tubes (PMTs) which are affected significantly by magnetic fields cannot be used on the BCAL. The use of Silicon photomultipliers (SiPMs) as front-end detectors has been proposed. While the largest SiPMs that have been previously employed by other experiments are 1x1 mm^2, GlueX proposes to use large area SiPMs each composed of 16 - 3x3 mm^2 cells in a 4x4 array. This puts the GlueX collaboration in the unique position of driving the technology for larger area sensors. In this talk I will discuss tests done in Regina regarding performance parameters of prototype SiPM arrays delivered by SensL, a photonics research and development company based in Ireland, as well as sample 1x1 mm^2 and 3x3 mm^2 SiPMs.
Space-charge calculations in synchrotrons
Energy Technology Data Exchange (ETDEWEB)
Machida, S.
1993-05-01
One obvious bottleneck to achieving high luminosity in hadron colliders, such as the Superconducting Super Collider (SSC), is beam emittance growth due to space-charge effects in low energy injector synchrotrons. Although space-charge effects have been recognized since the alternating-gradient synchrotron was invented, and the Laslett tune shift is usually calculated to quantify them, our understanding of the effects is limited, especially when the Laslett tune shift becomes a large fraction of an integer. Using the Simpsons tracking code, which we developed to study emittance preservation issues in proton synchrotrons, we investigated space-charge effects in the SSC Low Energy Booster (LEB). We observed detailed dependence on parameters such as beam intensity, initial emittance, injection energy, lattice function, and longitudinal motion. A summary of these findings, as well as the tracking technique developed for the study, is presented.
Moran, Xose Anxelu G.; Scharek, Renate
2015-01-09
Annual variability of photosynthetic parameters and primary production (PP), with a special focus on large (i.e. >2 μm) phytoplankton, was assessed by monthly photosynthesis-irradiance experiments at two depths of the southern Bay of Biscay continental shelf in 2003. Integrated chl a (22-198 mg m⁻²) was moderately dominated by large cells on an annual basis. The March through May dominance of diatoms was replaced by similar shares of dinoflagellates and other flagellates during the rest of the year. Variability of photosynthetic parameters was similar for total and large phytoplankton, but stratification affected the initial slope αB [0.004-0.049 mg C (mg chl a)⁻¹ h⁻¹ (μmol photons m⁻² s⁻¹)⁻¹] and the maximum photosynthetic rate PmB (0.1-10.7 mg C (mg chl a)⁻¹ h⁻¹) differently. PmB correlated positively with αB only for the large fraction. PmB tended to respond faster to ambient irradiance than αB, which was negatively correlated with diatom abundance in the >2 μm fraction. Integrated PP rates were relatively low, averaging 387 (132-892) mg C m⁻² d⁻¹ for the total and 207 (86-629) mg C m⁻² d⁻¹ for the large fraction, probably the result of inorganic nutrient limitation. Although similar mean annual contributions of large phytoplankton to total values were found for biomass and PP (~58%), water-column production-to-biomass ratios (2-26 mg C (mg chl)⁻¹ d⁻¹) and the light utilization efficiency of the >2 μm fraction (0.09-0.84 g C (g chl)⁻¹ (mol photons)⁻¹ m²) were minimum during the spring bloom. Our results indicate that PP peaks in the area are not necessarily associated with maximum standing stocks.
International Nuclear Information System (INIS)
Gazoya, E.D.K.; Prempeh, E.; Banini, G.K.
2015-01-01
The relationship between the spin transformations of the special linear group of order 2, SL(2, C), and the aggregate SO(3) of the three-dimensional pure rotations, when considered as a group in itself (and not as a subgroup of the Lorentz group), is investigated. It is shown, by the spinor map X → AXA† (with A† the conjugate transpose), which is an action of SL(2, C) on the space of Hermitian matrices, that the one-parameter subgroups of rotations generated are precisely those of angles which are multiples of 2π. (au)
Derivation of Delaware Bay tidal parameters from space shuttle photography
International Nuclear Information System (INIS)
Zheng, Quanan; Yan, Xiaohai; Klemas, V.
1993-01-01
The tide-related parameters of the Delaware Bay are derived from space shuttle time-series photographs. The water areas in the bay are measured from interpretation maps of the photographs with a CALCOMP 9100 digitizer and ERDAS Image Processing System. The corresponding tidal levels are calculated using the exposure time annotated on the photographs. From these data, an approximate function relating the water area to the tidal level at a reference point is determined. Based on the function, the water areas of the Delaware Bay at mean high water (MHW) and mean low water (MLW), below 0 m, and for the tidal zone are inferred. With MHW and MLW areas and the mean tidal range, the authors calculate the tidal influx of the Delaware Bay, which is 2.76 × 10⁹ m³. Furthermore, the velocity of flood tide at the bay mouth is determined using the tidal flux and an integral of the velocity distribution function at the cross section between Cape Henlopen and Cape May. The result is 132 cm/s, which compares well with the data on tidal current charts
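The influx figure quoted above is a tidal-prism estimate: roughly the mean of the MHW and MLW surface areas multiplied by the mean tidal range. A sketch with placeholder values (round numbers chosen for illustration, not the study's measured areas):

```python
# Tidal-prism estimate: influx ~ mean(high-water area, low-water area) * range.
# All three inputs below are hypothetical round numbers, not measured values.
a_mhw = 2.0e9    # surface area at mean high water, m^2 (hypothetical)
a_mlw = 1.6e9    # surface area at mean low water, m^2 (hypothetical)
tidal_range = 1.5  # mean tidal range, m (hypothetical)

tidal_influx = 0.5 * (a_mhw + a_mlw) * tidal_range
print("tidal influx ~ %.2e m^3" % tidal_influx)
```

With areas and range of this order of magnitude, the estimate lands in the 10⁹ m³ range reported for the bay.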
A BRDF statistical model applying to space target materials modeling
Liu, Chenghao; Li, Zhi; Xu, Can; Tian, Qichen
2017-10-01
To address the poor performance of the five-parameter semi-empirical model in fitting densely sampled BRDF measurements, a refined statistical BRDF model suitable for modeling multiple classes of space target materials is proposed. The refined model improves the Torrance-Sparrow model while retaining the modeling advantages of the five-parameter model. Compared with the existing empirical model, it contains six simple parameters, which can approximate the roughness distribution of the material surface, the intensity of the Fresnel reflectance phenomenon, and the attenuation of the reflected brightness as the azimuth angle changes. The model achieves parameter inversion quickly with no extra loss of accuracy. A genetic algorithm was used to invert the parameters of 11 samples of materials commonly used on space targets, and the fitting errors for all materials were below 6%, much lower than those of the five-parameter model. The refined model is further verified by comparing the fitting results of three samples at different incident zenith angles at 0° azimuth. Finally, three-dimensional visualizations of these samples over the upper hemisphere are given, clearly showing the optical scattering strength of the different materials and demonstrating the refined model's descriptive power for material characterization.
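Parameter inversion of a reflectance model can be sketched with a deliberately simplified diffuse-plus-lobe function fitted by least squares (a stand-in, not the paper's six-parameter model or its genetic-algorithm inversion; all data here are synthetic):

```python
import numpy as np
from scipy.optimize import curve_fit

def brdf(theta, kd, ks, m):
    """Toy reflectance model: diffuse level kd plus a specular lobe of
    strength ks and angular width m (radians)."""
    return kd + ks * np.exp(-(theta / m) ** 2)

rng = np.random.default_rng(6)
theta = np.linspace(0.0, 1.2, 60)          # scattering angle, rad
true = (0.2, 1.5, 0.15)
data = brdf(theta, *true) + rng.normal(0.0, 0.01, theta.size)  # noisy "measurements"

# Least-squares inversion of the three parameters from the samples.
popt, _ = curve_fit(brdf, theta, data, p0=(0.1, 1.0, 0.3))
fit_err = np.abs(brdf(theta, *popt) - data).mean() / data.mean()
print("parameters:", popt, "mean relative fit error: %.3f" % fit_err)
```

A genetic algorithm becomes attractive when, as in the six-parameter model, the error surface has multiple local minima that defeat gradient-based least squares.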
Large Space Structures Fielding Plan
1991-01-01
…support/safety measures in space will interface. Although these features can be developed to some degree as stated objectives, many must be designed from… continuity 7. Check system for mechanical continuity 8. Verify LSS assembly continuity B. Productivity Measurements 1. Note duration of assembly activities
Gambicorti, Lisa; D'Amato, Francesco; Vettore, Christian; Duò, Fabrizio; Guercia, Alessio; Patauner, Christian; Biasi, Roberto; Lisi, Franco; Riccardi, Armando; Gallieni, Daniele; Lazzarini, Paolo; Tintori, Matteo; Zuccaro Marchi, Alessandro; Pereira do Carmo, Joao
2017-11-01
The aim of this work is to describe the latest results of new technological concepts for Large Aperture Telescopes Technology (LATT) using thin deployable lightweight active mirrors. This technology is developed under the European Space Agency (ESA) Technology Research Program and can be exploited in all the applications based on the use of primary mirrors of space telescopes with large aperture, segmented lightweight telescopes with wide Field of View (FOV) and low f/#, and LIDAR telescopes. The reference mission application is a potential future ESA mission, related to a space-borne DIAL (Differential Absorption Lidar) instrument operating around 935.5 nm with the goal of measuring water vapor profiles in the atmosphere. An Optical BreadBoard (OBB) for LATT has been designed for investigating and testing two critical aspects of the technology: 1) control accuracy in shaping the mirror surface, and 2) mirror survivability at launch. The aim is to evaluate the effective performance of the long-stroke smart actuators used for the mirror control and to demonstrate the effectiveness and reliability of the electrostatic locking (EL) system to restrain the thin shell on the mirror backup structure during launch. The paper presents a comprehensive vision of the breadboard, focusing on how the requirements have driven the design of the whole system and of the various subsystems. The manufacturing process of the thin shell is also presented.
International Nuclear Information System (INIS)
Barnes, G.D.
1982-01-01
The feasibility of a polygeneration plant at Kennedy Space Center was studied. Liquid hydrogen and gaseous nitrogen are the two principal products in consideration. Environmental parameters (air quality, water quality, biological diversity and hazardous waste disposal) necessary for the feasibility study were investigated. A National Environmental Policy Act (NEPA) project flow sheet was to be formulated for the environmental impact statement. Water quality criteria for Florida waters were to be established.
Directory of Open Access Journals (Sweden)
Zhang Peiguo
2011-01-01
Full Text Available Abstract By obtaining intervals of the parameter λ, this article investigates the existence of a positive solution for a class of nonlinear boundary value problems of second-order differential equations with integral boundary conditions in abstract spaces. The arguments are based upon a specially constructed cone and the fixed point theory in cone for a strict set contraction operator. MSC: 34B15; 34B16.
Sepehry-Fard, F.; Coulthard, Maurice H.
1995-01-01
The objective of this publication is to introduce enhanced methods for assessing the overall reliability and maintainability of the International Space Station. It is essential that the process used to predict the values of the maintenance time-dependent variable parameters, such as mean time between failure (MTBF) over time, does not itself generate uncontrolled deviations in the results of the ILS analysis, such as life cycle costs, spares calculations, etc. Furthermore, the very acute problems of micrometeorites, cosmic rays, flares, atomic oxygen, ionization effects, orbital plumes and all the other factors that differentiate maintainable space operations from non-maintainable space operations and/or ground operations must be accounted for. Therefore, these parameters need to be subjected to a special and complex process. Since reliability and maintainability strongly depend on the operating conditions that are encountered during the entire life of the International Space Station, it is important that such conditions are accurately identified at the beginning of the logistics support requirements process. Environmental conditions which exert a strong influence on the International Space Station will be discussed in this report. Concurrent (combined) space environments may be more detrimental to the reliability and maintainability of the International Space Station than the effects of a single environment. In characterizing the logistics support requirements process, the developed design/test criteria must consider both single and/or combined environments in anticipation of providing hardware capability to withstand the hazards of the International Space Station profile. The effects of the combined environments (typical) in a matrix relationship on the International Space Station will be shown. The combinations of the environments where the total effect is more damaging than the cumulative effects of the environments acting singly, may include a
Kalanov, Temur Z.
2003-04-01
A new theory of space is suggested. It represents a new point of view which has arisen from the critical analysis of the foundations of physics (in particular the theory of relativity and quantum mechanics), mathematics, cosmology and philosophy. The main idea following from the analysis is that the concept of movement represents a key to understanding the essence of space. The starting point of the theory is represented by the following philosophical (dialectical materialistic) principles. (a) The principle of the materiality (of the objective reality) of Nature: Nature (the Universe) is a system (a set) of material objects (particles, bodies, fields); each object has properties and features, and the properties and features are inseparable characteristics of a material object and belong only to a material object. (b) The principle of the existence of a material object: an object exists as the objective reality, and movement is a form of existence of the object. (c) The principle (definition) of movement of an object: movement is change (i.e. transition of some states into others) in general; the movement determines a direction, and direction characterizes the movement. (d) The principle of the existence of time: time exists as the parameter of the system of reference. These principles lead to the following statements expressing the essence of space. (1) There is no space in general; space exists only as a form of existence of the properties and features of the object. It means that the space is a set of the measures of the object (the measure is the philosophical category meaning unity of the qualitative and quantitative determinacy of the object). In other words, the space of the object is a set of the states of the object. (2) The states of the object are manifested only in a system of reference. The main informational property of the unitary system "researched physical object + system of reference" is that the system of reference determines (measures
International Nuclear Information System (INIS)
Guerrero, M; Li, X Allen
2003-01-01
Numerous studies of early-stage breast cancer treated with breast conserving surgery (BCS) and radiotherapy (RT) have been published in recent years. Both external beam radiotherapy (EBRT) and/or brachytherapy (BT) with different fractionation schemes are currently used. The present RT practice is largely based on empirical experience and it lacks a reliable modelling tool to compare different RT modalities or to design new treatment strategies. The purpose of this work is to derive a plausible set of radiobiological parameters that can be used for RT treatment planning. The derivation is based on existing clinical data and is consistent with the analysis of a large number of published clinical studies on early-stage breast cancer. A large number of published clinical studies on the treatment of early breast cancer with BCS plus RT (including whole breast EBRT with or without a boost to the tumour bed, whole breast EBRT alone, brachytherapy alone) and RT alone are compiled and analysed. The linear quadratic (LQ) model is used in the analysis. Three of these clinical studies are selected to derive a plausible set of LQ parameters. The potential doubling time is set a priori in the derivation according to in vitro measurements from the literature. The impact of considering lower or higher T pot is investigated. The effects of inhomogeneous dose distributions are considered using clinically representative dose volume histograms. The derived LQ parameters are used to compare a large number of clinical studies using different regimes (e.g., RT modality and/or different fractionation schemes with different prescribed dose) in order to validate their applicability. The values of the equivalent uniform dose (EUD) and biologically effective dose (BED) are used as a common metric to compare the biological effectiveness of each treatment regime. We have obtained a plausible set of radiobiological parameters for breast cancer. This set of parameters is consistent with in vitro
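The comparisons above rely on LQ-model quantities such as the biologically effective dose; a minimal sketch of BED with illustrative schedules (the repopulation term a full analysis would include is omitted here):

```python
# Biologically effective dose in the linear-quadratic model:
# BED = n * d * (1 + d / (alpha/beta)); repopulation corrections
# (e.g. a -ln(2) * T / (alpha * Tpot) term) are omitted here.
def bed(n_fractions, dose_per_fraction, alpha_beta):
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

# Compare two illustrative schedules at alpha/beta = 10 Gy:
conventional = bed(25, 2.0, 10.0)        # 50 Gy in 2-Gy fractions -> BED = 60 Gy
hypofractionated = bed(16, 2.66, 10.0)
```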
Parameter estimation in stochastic differential equations
Bishwal, Jaya P N
2008-01-01
Parameter estimation in stochastic differential equations and stochastic partial differential equations is the science, art and technology of modelling complex phenomena and making beautiful decisions. The subject has attracted researchers from several areas of mathematics and other related fields like economics and finance. This volume presents the estimation of the unknown parameters in the corresponding continuous models based on continuous and discrete observations and examines extensively maximum likelihood, minimum contrast and Bayesian methods. Useful because of the current availability of high frequency data is the study of refined asymptotic properties of several estimators when the observation time length is large and the observation time interval is small. Also space time white noise driven models, useful for spatial data, and more sophisticated non-Markovian and non-semimartingale models like fractional diffusions that model the long memory phenomena are examined in this volume.
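One of the simplest settings covered by such methods can be sketched: drift estimation for an Ornstein-Uhlenbeck process dX = -theta*X dt + sigma dW from discrete observations via the Euler pseudo-likelihood; the process parameters below are illustrative, not the book's examples:

```python
import math
import random

# Ornstein-Uhlenbeck process dX = -theta*X dt + sigma dW, simulated
# with the Euler-Maruyama scheme.
def simulate_ou(theta, sigma, dt, n, seed=0):
    rng = random.Random(seed)
    x, path = 1.0, [1.0]
    for _ in range(n):
        x += -theta * x * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
        path.append(x)
    return path

# Euler pseudo-likelihood estimate of the drift parameter theta:
# least squares on the AR(1) approximation X_{t+1} ~ (1 - theta*dt) X_t.
def estimate_theta(path, dt):
    num = sum(x0 * x1 for x0, x1 in zip(path, path[1:]))
    den = sum(x0 * x0 for x0 in path[:-1])
    return (1 - num / den) / dt

path = simulate_ou(theta=2.0, sigma=0.3, dt=0.01, n=20000)
theta_hat = estimate_theta(path, dt=0.01)  # close to the true theta = 2.0
```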
Measurements of Dune Parameters on Titan Suggest Differences in Sand Availability
Stewart, Brigitte W.; Radebaugh, Jani
2014-11-01
The equatorial region of Saturn’s moon Titan has five large sand seas with dunes similar to large linear dunes on Earth. Cassini Radar SAR swaths have high enough resolution (300 m) to measure dune parameters such as width and spacing, which helps inform us about formation conditions and long-term evolution of the sand dunes. Previous measurements in locations scattered across Titan have revealed an average width of 1.3 km and spacing of 2.7 km, with variations by location. We have taken over 1200 new measurements of dune width and spacing in the T8 swath, a region on the leading hemisphere of Titan in the Belet Sand Sea, between -5 and -9 degrees latitude. We have also taken over 500 measurements in the T44 swath, located on the anti-Saturn hemisphere in the Shangri-La Sand Sea, between 0 and 20 degrees latitude. We correlated each group of 50 measurements with the average distance from the edge of the dune field to obtain an estimate of how position within a dune field affects dune parameters. We found that in general, the width and spacing of dunes decreases with distance from the edge of the dune field, consistent with similar measurements in sand seas on Earth. We suggest that this correlation is due to the lesser availability of sand at the edges of dune fields. These measurements and correlations could be helpful in determining differences in sand availability across different dune fields, and along the entire equatorial region of Titan.
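The width-versus-distance trend described can be quantified with a Pearson correlation over binned measurements; a minimal sketch with hypothetical bin averages (not the Cassini measurements):

```python
def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical bin averages: distance from the dune-field edge (km)
# versus mean dune width (km); a negative r echoes the reported trend.
distance = [5.0, 10.0, 20.0, 40.0, 80.0]
width = [1.6, 1.5, 1.3, 1.2, 1.0]
r = pearson_r(distance, width)
```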
Use of stochastic methods for robust parameter extraction from impedance spectra
International Nuclear Information System (INIS)
Bueschel, Paul; Troeltzsch, Uwe; Kanoun, Olfa
2011-01-01
The fitting of impedance models to measured data is an essential step in impedance spectroscopy (IS). Due to often complicated nonlinear models, large numbers of parameters, large search spaces and the presence of noise, automated determination of the unknown parameters is a challenging task. The stronger the nonlinear behavior of a model, the weaker the convergence of the corresponding regression, and the probability of becoming trapped in local minima during parameter extraction increases. For fast measurements or automatic measurement systems these problems become the limiting factors of use. We compared the usability of stochastic algorithms (evolutionary algorithms, simulated annealing and particle filters) with the widely used tool LEVM for parameter extraction in IS. The comparison is based on a reference model by J.R. Macdonald and a battery model used with noisy measurement data. The results show different performances of the algorithms for these two problems depending on the search space and the model used for optimization. The results obtained by the particle filter were the best for both models. This method delivers the most reliable result in both cases, even for the ill-posed battery model.
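A stochastic fit of the kind compared above can be sketched with simulated annealing on a toy series-resistor/parallel-RC impedance model; the circuit, noise-free data and annealing schedule are illustrative assumptions, not the study's battery model:

```python
import math
import random

# Toy impedance model: series resistance plus a parallel RC branch,
# Z(w) = Rs + R / (1 + j*w*R*C). A stand-in for the study's models.
def z_model(w, rs, r, c):
    return rs + r / (1 + 1j * w * r * c)

def sse(params, data):
    return sum(abs(z_model(w, *params) - z) ** 2 for w, z in data)

def anneal(data, start, steps=5000, t0=1.0, seed=0):
    rng = random.Random(seed)
    cur, cur_err = list(start), sse(start, data)
    best, best_err = list(cur), cur_err
    for k in range(steps):
        temp = t0 * (1 - k / steps) + 1e-9        # linear cooling
        # Relative (multiplicative) proposal keeps parameters positive.
        cand = [max(1e-12, p * (1 + rng.gauss(0, 0.05))) for p in cur]
        err = sse(cand, data)
        if err < cur_err or rng.random() < math.exp(-(err - cur_err) / temp):
            cur, cur_err = cand, err
        if cur_err < best_err:
            best, best_err = list(cur), cur_err
    return best, best_err

# Noise-free synthetic spectrum from Rs=0.1 ohm, R=1.0 ohm, C=1e-3 F.
ws = [10 ** (k / 4.0) for k in range(-8, 17)]     # ~0.01 to 1e4 rad/s
data = [(w, z_model(w, 0.1, 1.0, 1e-3)) for w in ws]
best, best_err = anneal(data, (0.2, 0.5, 5e-4))
```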
FORECASTING COSMOLOGICAL PARAMETER CONSTRAINTS FROM NEAR-FUTURE SPACE-BASED GALAXY SURVEYS
International Nuclear Information System (INIS)
Pavlov, Anatoly; Ratra, Bharat; Samushia, Lado
2012-01-01
The next generation of space-based galaxy surveys is expected to measure the growth rate of structure to a level of about one percent over a range of redshifts. The rate of growth of structure as a function of redshift depends on the behavior of dark energy and so can be used to constrain parameters of dark energy models. In this work, we investigate how well these future data will be able to constrain the time dependence of the dark energy density. We consider parameterizations of the dark energy equation of state, such as XCDM and ωCDM, as well as a consistent physical model of time-evolving scalar field dark energy, φCDM. We show that if the standard, spatially flat cosmological model is taken as the fiducial model of the universe, these near-future measurements of structure growth will be able to constrain the time dependence of scalar field dark energy density to a precision of about 10%, which is almost an order of magnitude better than what can be achieved from a compilation of currently available data sets.
Energy Technology Data Exchange (ETDEWEB)
Fatima, Zareen; Motosugi, Utaroh; Ishigame, Keiichi; Araki, Tsutomu [University of Yamanashi, Department of Radiology, Chuo-shi, Yamanashi (Japan); Waqar, Ahmed Bilal [University of Yamanashi, Department of Molecular Pathology, Interdisciplinary Graduate School of Medicine and Engineering, Chuo-shi, Yamanashi (Japan); Hori, Masaaki [Juntendo University, Department of Radiology, School of Medicine, Tokyo (Japan); Oishi, Naoki; Katoh, Ryohei [University of Yamanashi, Department of Pathology, Chuo-shi, Yamanashi (Japan); Onodera, Toshiyuki; Yagi, Kazuo [Tokyo Metropolitan University, Department of Radiological Sciences, Graduate School of Human Health Sciences, Tokyo (Japan)
2013-08-15
The purposes of this MR-based study were to calculate q-space imaging (QSI)-derived mean displacement (MDP) in meningiomas, to evaluate the correlation of MDP values with apparent diffusion coefficient (ADC) and to investigate the relationships among these diffusion parameters, tumour cell count (TCC) and MIB-1 labelling index (LI). MRI, including QSI and conventional diffusion-weighted imaging (DWI), was performed in 44 meningioma patients (52 lesions). ADC and MDP maps were acquired from post-processing of the data. Quantitative analyses of these maps were performed by applying regions of interest. Pearson correlation coefficients were calculated for ADC and MDP in all lesions and for ADC and TCC, MDP and TCC, ADC and MIB-1 LI, and MDP and MIB-1 LI in 17 patients who underwent subsequent surgery. ADC and MDP values were found to have a strong correlation: r = 0.78 (P < 0.0001). Both ADC and MDP values had a significant negative association with TCC: r = -0.53 (P = 0.02) and -0.48 (P = 0.04), respectively. MIB-1 LI was not, however, found to have a significant association with these diffusion parameters. In meningiomas, both ADC and MDP may be representative of cell density. (orig.)
International Nuclear Information System (INIS)
Haddad, K; Alopoor, H
2016-01-01
Purpose: Recently, multileaf collimators (MLC) have become an important part of LINAC collimation systems because they reduce treatment planning time and improve conformity. Important factors that affect MLC collimation performance are the leaves' material composition and thickness. In this study, we investigate the main dosimetric parameters of the 120-leaf Millennium MLC, including dose at the buildup point, physical penumbra, and average and end leaf leakages. Effects of the leaf geometry and density on these parameters are evaluated. Methods: From the EGSnrc Monte Carlo code, the BEAMnrc and DOSXYZnrc modules are used to evaluate the dosimetric parameters of a water phantom exposed to a Varian xi at 100 cm SSD. Using IAEA phase space data just above the MLC (Z=46cm) and BEAMnrc, a new phase space data set at Z=52cm is produced for the modified 120-leaf Millennium MLC. The MLC is modified both in leaf thickness and material composition. The EGSgui code generates the 521ICRU library for tungsten alloys. DOSXYZnrc with the new phase space evaluates the dose distribution in a water phantom of 60×60×20 cm3 with voxel size of 4×4×2 mm3. Using the DOSXYZnrc dose distributions for open and closed beams together with the leakage definitions, the end leakage, average leakage and physical penumbra are evaluated. Results: A new MLC with improved dosimetric parameters is proposed. The physical penumbra for the proposed MLC is 4.7 mm, compared to 5.16 mm for the Millennium. The average leakage in our design is reduced to 1.16% compared to 1.73% for the Millennium, and the end leaf leakage of the suggested design is reduced to 4.86% compared to 7.26% for the Millennium. Conclusion: The results show that the proposed MLC with enhanced dosimetric parameters could improve the conformity of treatment planning.
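The physical penumbra quoted above is conventionally the lateral distance between the 80% and 20% dose points of a beam profile; a minimal sketch of that measurement on an idealized sampled profile (hypothetical data, not the simulated Millennium profiles):

```python
def penumbra_width(positions_mm, doses, lo=0.2, hi=0.8):
    """Distance between the hi and lo fractional dose points of a
    monotonic profile edge, found by linear interpolation."""
    dmax = max(doses)
    norm = [d / dmax for d in doses]

    def crossing(level):
        for i in range(len(norm) - 1):
            d0, d1 = norm[i], norm[i + 1]
            if d0 != d1 and (d0 - level) * (d1 - level) <= 0:
                x0, x1 = positions_mm[i], positions_mm[i + 1]
                return x0 + (level - d0) * (x1 - x0) / (d1 - d0)
        raise ValueError("level not crossed by profile")

    return abs(crossing(hi) - crossing(lo))

positions = list(range(11))            # mm
doses = [x / 10 for x in positions]    # idealized linear edge
width = penumbra_width(positions, doses)  # 6.0 mm for this ramp
```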
International Nuclear Information System (INIS)
Murthy, S.N.
1990-01-01
The nature of hazardous effects from radio-frequency (RF), light, infrared, and nuclear radiation on humans and other biological species in the advent of large-scale space commercialization is considered. Attention is focused on RF/microwave radiation from earth antennas and domestic picture phone communication links, exposure to microwave radiation from space solar-power satellites, and the continuous transmission of information from spacecraft as well as laser radiation from space. Measures for preventing and/or reducing these effects are suggested, including the use of interlocks for cutting off radiation toward the ground, off-pointing microwave energy beams in cases of attitude failure, limiting the satellite off-axis gain data-rate product, the use of reflective materials on buildings and in personnel clothing to protect from space-borne lasers, and underwater colonies in cases of high-power lasers. For nuclear-power satellites, deposition in stable points in the solar system is proposed. 12 refs
Wainwright, Charlotte E.; Bonin, Timothy A.; Chilson, Phillip B.; Gibbs, Jeremy A.; Fedorovich, Evgeni; Palmer, Robert D.
2015-05-01
Small-scale turbulent fluctuations of temperature are known to affect the propagation of both electromagnetic and acoustic waves. Within the inertial-subrange scale, where the turbulence is locally homogeneous and isotropic, these temperature perturbations can be described, in a statistical sense, using the structure-function parameter for temperature, C_T^2. Here we investigate different methods of evaluating C_T^2, using data from a numerical large-eddy simulation together with atmospheric observations collected by an unmanned aerial system and a sodar. An example case using data from a late-afternoon unmanned aerial system flight on 24 April 2013 and corresponding large-eddy simulation data is presented and discussed.
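Under the inertial-subrange assumption, the second-order structure function scales as D_T(r) = C_T^2 * r^(2/3), so C_T^2 can be estimated from a 1-D temperature record; a minimal sketch with an artificial series (the estimator form is standard, the data illustrative):

```python
# Estimate C_T^2 from a 1-D temperature record via the second-order
# structure function D_T(r) = <(T(x+r) - T(x))^2> = C_T^2 * r**(2/3),
# valid in the inertial subrange. The series below is artificial.
def ct2_estimate(temps, dx, lag):
    r = lag * dx
    n = len(temps) - lag
    d_t = sum((temps[i + lag] - temps[i]) ** 2 for i in range(n)) / n
    return d_t / r ** (2.0 / 3.0)

ct2 = ct2_estimate([0.0, 1.0] * 8, dx=1.0, lag=1)  # 1.0 for this series
```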
Large Scale System Safety Integration for Human Rated Space Vehicles
Massie, Michael J.
2005-12-01
Since the 1960s man has searched for ways to establish a human presence in space. Unfortunately, the development and operation of human spaceflight vehicles carry significant safety risks that are not always well understood. As a result, the countries with human space programs have felt the pain of loss of lives in the attempt to develop human space travel systems. Integrated System Safety is a process developed through years of experience (since before Apollo and Soyuz) as a way to assess risks involved in space travel and prevent such losses. The intent of Integrated System Safety is to take a look at an entire program and put together all the pieces in such a way that the risks can be identified, understood and dispositioned by program management. This process has many inherent challenges, and they need to be explored, understood and addressed. In order to prepare a truly integrated analysis, safety professionals must gain a level of technical understanding of all of the project's pieces and how they interact. Next, they must find a way to present the analysis so the customer can understand the risks and make decisions about managing them. However, every organization in a large-scale project can have different ideas about what is or is not a hazard, what is or is not an appropriate hazard control, and what is or is not adequate hazard control verification. NASA provides some direction on these topics, but interpretations of those instructions can vary widely. Even more challenging is the fact that every individual/organization involved in a project has different levels of risk tolerance. When the discrete hazard controls of the contracts and agreements cannot be met, additional risk must be accepted. However, when one has left the arena of compliance with the known rules, there can no longer be specific ground rules on which to base a decision as to what is acceptable and what is not. The integrator must find common grounds between all parties to achieve
A hybrid method of estimating pulsating flow parameters in the space-time domain
Pałczyński, Tomasz
2017-05-01
This paper presents a method for estimating pulsating flow parameters in partially open pipes, such as pipelines, internal combustion engine inlets, exhaust pipes and piston compressors. The procedure is based on the method of characteristics, and employs a combination of measurements and simulations. An experimental test rig is described, which enables pressure, temperature and mass flow rate to be measured within a defined cross section. The second part of the paper discusses the main assumptions of a simulation algorithm elaborated in the Matlab/Simulink environment. The simulation results are shown as 3D plots in the space-time domain, and compared with proposed models of phenomena relating to wave propagation, boundary conditions, acoustics and fluid mechanics. The simulation results are finally compared with acoustic phenomena, with an emphasis on the identification of resonant frequencies.
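The method of characteristics underlying the simulation can be sketched in its simplest setting, linear advection u_t + a*u_x = 0, where the solution is constant along characteristics x - a*t = const (a far simpler system than the pipe-flow equations in the paper):

```python
# Method of characteristics for linear advection u_t + a*u_x = 0:
# u is constant along characteristics x - a*t = const, so u(x, t) is
# found by tracing the characteristic back to the initial profile.
def advect(u0, a, t, x):
    return u0(x - a * t)

def box(x):
    """Unit box initial profile on [0, 1]."""
    return 1.0 if 0.0 <= x <= 1.0 else 0.0
```

For a = 2 and t = 1, the box has simply translated two units to the right, which is the exact solution rather than a numerical approximation.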
Growth Chambers on the International Space Station for Large Plants
Massa, Gioia D.; Wheeler, Raymond M.; Morrow, Robert C.; Levine, Howard G.
2016-01-01
The International Space Station (ISS) now has platforms for conducting research on horticultural plant species under LED (Light Emitting Diode) lighting, and those capabilities continue to expand. The Veggie vegetable production system was deployed to the ISS as an applied research platform for food production in space. Veggie is capable of growing a wide array of horticultural crops. It was designed for low power usage, low launch mass and stowage volume, and minimal crew time requirements. The Veggie flight hardware consists of a light cap containing red (630 nm), blue (455 nm) and green (530 nm) LEDs. Interfacing with the light cap is an extendable bellows/baseplate for enclosing the plant canopy. A second large plant growth chamber, the Advanced Plant Habitat (APH), will fly to the ISS in 2017. APH will be a fully controllable environment for high-quality plant physiological research. APH will control light (quality, level, and timing), temperature, CO2, relative humidity, and irrigation, while scrubbing any cabin or plant-derived ethylene and other volatile organic compounds. Additional capabilities include sensing of leaf temperature and root zone moisture, root zone temperature, and oxygen concentration. The light cap will have red (630 nm), blue (450 nm), green (525 nm), far red (730 nm) and broad-spectrum white LEDs (4100 K). There will be several internal cameras (visible and IR) to monitor and record plant growth and operations. Veggie and APH are available for research proposals.
8 Meter Advanced Technology Large-Aperture Space Telescope (ATLAST-8m)
Stahl, H. Philip
2010-01-01
ATLAST-8m (Advanced Technology Large Aperture Space Telescope) is a proposed 8-meter monolithic UV/optical/NIR space observatory (wavelength range 110 to 2500 nm) to be placed in orbit at Sun-Earth L2 by NASA's planned Ares V heavy lift vehicle. Given its very high angular resolution (15 mas @ 500 nm), sensitivity and performance stability, ATLAST-8m is capable of achieving breakthroughs in a broad range of astrophysics including: Is there life elsewhere in the Galaxy? An 8-meter UVOIR observatory has the performance required to detect habitability (H2O, atmospheric column density) and biosignatures (O2, O3, CH4) in terrestrial exoplanet atmospheres, to reveal the underlying physics that drives star formation, and to trace the complex interactions between dark matter, galaxies, and intergalactic medium. The ATLAST Astrophysics Strategic Mission Concept Study developed a detailed point design for an 8-m monolithic observatory including optical design; structural design/analysis including primary mirror support structure, sun shade and secondary mirror support structure; thermal analysis; spacecraft including structure, propulsion, GN&C, avionics, power systems and reaction wheels; mass and power budgets; and system cost. The results of which were submitted by invitation to NRC's 2010 Astronomy & Astrophysics Decadal Survey.
DEFF Research Database (Denmark)
Damgaard, Birthe Marie; Studnitz, Merete; Jensen, Karin Hjelholt
2009-01-01
The consequences of an 'all in-all out' static group of uniform age vs. a continuously dynamic group with litter introduction and exit every third week were examined with respect to stress response and haematological parameters in large groups of 60 pigs. The experiment included a total of 480 pigs from weaning at the age of 4 weeks to the age of 18 weeks after weaning. Limited differences were found in stress and haematological parameters between pigs in dynamic and static groups. The cortisol response to the stress test was increasing with the duration of the stress test in pigs from the dynamic group while it was decreasing in the static group. The health condition and the growth performance were reduced in the dynamic groups compared with the static groups. In the dynamic groups the haematological parameters indicated an activation of the immune system characterised by an increased…
Fan, M.
2015-03-29
Parameter estimation is a challenging computational problem in the reverse engineering of biological systems. Because advances in biotechnology have facilitated wide availability of time-series gene expression data, systematic parameter estimation of gene circuit models from such time-series mRNA data has become an important method for quantitatively dissecting the regulation of gene expression. By focusing on the modeling of gene circuits, we examine here the performance of three types of state-of-the-art parameter estimation methods: population-based methods, online methods and model-decomposition-based methods. Our results show that certain population-based methods are able to generate high-quality parameter solutions. The performance of these methods, however, is heavily dependent on the size of the parameter search space, and their computational requirements substantially increase as the size of the search space increases. In comparison, online methods and model-decomposition-based methods are computationally faster alternatives and are less dependent on the size of the search space. Among other things, our results show that a hybrid approach that augments computationally fast methods with local search as a subsequent refinement procedure can substantially increase the quality of their parameter estimates to a level on par with the best solution obtained from the population-based methods while maintaining high computational speed. This suggests that such hybrid methods can be a promising alternative to the more commonly used population-based methods for parameter estimation of gene circuit models when limited prior knowledge about the underlying regulatory mechanisms makes the parameter search space vastly large. © The Author 2015. Published by Oxford University Press.
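The hybrid strategy described, a population-based search refined by local search, can be sketched with a minimal differential evolution followed by greedy coordinate refinement; the objective below is a toy stand-in for a gene-circuit fitting error:

```python
import random

# Toy objective standing in for a gene-circuit fitting error.
def sphere(p):
    return sum(x * x for x in p)

# Coarse global stage: a minimal differential evolution (DE/rand/1).
def de_search(f, dim, bounds, pop_size=20, gens=100, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [a[k] + 0.8 * (b[k] - c[k]) if rng.random() < 0.9
                     else pop[i][k] for k in range(dim)]
            if f(trial) < f(pop[i]):       # greedy selection
                pop[i] = trial
    return min(pop, key=f)

# Refinement stage: greedy coordinate search around the DE solution.
def local_refine(f, p, step=0.01, iters=200):
    cur = list(p)
    for _ in range(iters):
        for k in range(len(cur)):
            for delta in (step, -step):
                cand = list(cur)
                cand[k] += delta
                if f(cand) < f(cur):
                    cur = cand
    return cur

coarse = de_search(sphere, 3, (-5.0, 5.0))
refined = local_refine(sphere, coarse)
```

The refinement step can only improve on the coarse solution, which mirrors the paper's observation that local search lifts fast methods toward population-method quality.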
Gene flow analysis method, the D-statistic, is robust in a wide parameter space.
Zheng, Yichen; Janke, Axel
2018-01-08
We evaluated the sensitivity of the D-statistic, a parsimony-like method widely used to detect gene flow between closely related species. This method has been applied to a variety of taxa with a wide range of divergence times. However, its parameter space, and thus its applicability to a wide taxonomic range, has not been systematically studied. Divergence time, population size, time of gene flow, distance of outgroup and number of loci were examined in a sensitivity analysis. The sensitivity study shows that the primary determinant of the D-statistic is the relative population size, i.e. the population size scaled by the number of generations since divergence. This is consistent with the fact that the main confounding factor in gene flow detection is incomplete lineage sorting, which dilutes the signal. The sensitivity of the D-statistic is also affected by the direction of gene flow and the size and number of loci. In addition, we examined the ability of the f-statistics, [Formula: see text] and [Formula: see text], to estimate the fraction of a genome affected by gene flow; while these statistics are difficult to apply to practical questions in biology owing to a lack of knowledge of when the gene flow happened, they can be used to compare datasets with identical or similar demographic backgrounds. The D-statistic, as a method to detect gene flow, is robust against a wide range of genetic distances (divergence times), but it is sensitive to population size. The D-statistic should only be applied with critical reservation to taxa where population sizes are large relative to branch lengths in generations.
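The D-statistic described above is computed from counts of two discordant site patterns, conventionally called ABBA and BABA. As an illustrative sketch only (the function name and the toy alignment below are invented for this example, not taken from the paper), it can be written as:

```python
import numpy as np

def d_statistic(p1, p2, p3, out):
    """Patterson's D (ABBA-BABA) from aligned biallelic sites.

    Each argument is a 0/1 array of allelic states for one taxon in the
    topology (((P1, P2), P3), Outgroup); a site is "derived" where it
    differs from the outgroup `out`.
    """
    p1, p2, p3, out = map(np.asarray, (p1, p2, p3, out))
    derived = lambda x: x != out                      # derived relative to outgroup
    abba = np.sum(~derived(p1) & derived(p2) & derived(p3))
    baba = np.sum(derived(p1) & ~derived(p2) & derived(p3))
    if abba + baba == 0:
        return 0.0
    return (abba - baba) / (abba + baba)

# Toy alignment of five sites: an excess of ABBA sites (D > 0)
# suggests gene flow between P2 and P3.
p1  = [0, 0, 0, 0, 1]
p2  = [1, 1, 1, 0, 0]
p3  = [1, 1, 1, 0, 1]
out = [0, 0, 0, 0, 0]
print(d_statistic(p1, p2, p3, out))  # → 0.5 (3 ABBA, 1 BABA)
```

Under incomplete lineage sorting alone, ABBA and BABA are equally likely and D is expected to be zero, which is why, as the abstract notes, the relative population size (which controls the amount of lineage sorting) is the main confounder.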
A simple model for the initial phase of a water plasma cloud about a large structure in space
International Nuclear Information System (INIS)
Hastings, D.E.; Gatsonis, N.A.; Mogstad, T.
1988-01-01
Large structures in the ionosphere will outgas or eject neutral water and perturb the ambient neutral environment. This water can undergo charge exchange with the ambient oxygen ions and form a water plasma cloud. Additionally, water dumps or thruster firings can create a water plasma cloud. A simple model for the evolution of a water plasma cloud about a large space structure is obtained. It is shown that if the electron density around a large space structure is substantially enhanced above the ambient density, then the plasma cloud will move away from the structure. As the cloud moves away, it will become unstable and will eventually break up into filaments. A true steady state will exist only if the total electron density is unperturbed from the ambient density. When the water density is taken to be consistent with shuttle-based observations, the cloud is found to slowly drift away on a time scale of many tens of milliseconds. This time is consistent with the shuttle observations.
Creating unstable velocity-space distributions with barium injections
International Nuclear Information System (INIS)
Pongratz, M.B.
1983-01-01
Large Debye lengths relative to detector dimensions and the absence of confining walls make space an attractive laboratory for studying fundamental theories of plasma instabilities. However, natural space plasmas are rarely found displaced from equilibrium enough to permit isolation and diagnosis of the controlling parameters and driving conditions. Furthermore, any plasma or field response to the departure from equilibrium can be masked by noise in the natural system. Active experiments provide a technique for addressing this chicken-or-egg dilemma. Early thermite barium releases were generally conducted at low altitudes from sounding rockets to trace electric fields passively or to study configuration-space instabilities. One can also study velocity-space instabilities with barium releases. Neutral barium vapor releases, wherein a typical speed greatly exceeds the thermal speed, can be used to produce barium ion velocity-space distributions that should be subject to a number of microinstabilities. We examine the ion velocity-space distributions resulting from barium injections from orbiting spacecraft and shaped charges.
International Nuclear Information System (INIS)
Funk, J.G.; Sykes, G.F. Jr.
1989-04-01
The effects of simulated space environmental parameters on microdamage induced by the environment in a series of commercially available graphite-fiber-reinforced composite materials were determined. Composites with both thermoset and thermoplastic resin systems were studied. Low-Earth-orbit (LEO) exposures were simulated by thermal cycling; geosynchronous-orbit (GEO) exposures were simulated by electron irradiation plus thermal cycling. The thermal cycling temperature range was -250 F to either 200 F or 150 F. The upper limits of the thermal cycles were different to ensure that an individual composite material was not cycled above its glass transition temperature. Material response was characterized through assessment of the induced microcracking and its influence on mechanical property changes at both room temperature and -250 F. Microdamage was induced in both thermoset and thermoplastic advanced composite materials exposed to the simulated LEO environment. However, a 350 F cure single-phase toughened epoxy composite was not damaged during exposure to the LEO environment. The simulated GEO environment produced microdamage in all materials tested.
Least square regularized regression in sum space.
Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu
2013-04-01
This paper proposes a least square regularized regression algorithm in the sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency components of the target function with large- and small-scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For the sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters, we trade off the sample error and regularization error, and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
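The core computation in the abstract above, regularized least squares over a sum of kernels solved via a linear system, can be sketched in a few lines. This is an illustrative sketch only (the function names, kernel scales, and toy target are assumptions for the example, not the paper's actual experiments):

```python
import numpy as np

def gauss_kernel(X, Y, sigma):
    """Gaussian kernel matrix between 1-D sample arrays X and Y."""
    d2 = (X[:, None] - Y[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_sum_space(x, y, sigmas=(2.0, 0.2), lam=1e-3):
    """Regularized least-squares regression in a sum of Gaussian RKHSs:
    the sum kernel K = K_large + K_small lets a large-scale kernel fit
    the low-frequency trend and a small-scale kernel the detail.
    The coefficients come from one linear solve: (K + lam*I) alpha = y."""
    K = sum(gauss_kernel(x, x, s) for s in sigmas)
    alpha = np.linalg.solve(K + lam * np.eye(len(x)), y)
    return lambda xt: sum(gauss_kernel(np.asarray(xt), x, s) for s in sigmas) @ alpha

# Nonflat toy target with two frequency scales.
x = np.linspace(0.0, 10.0, 200)
y = np.sin(x) + 0.3 * np.sin(8.0 * x)
f = fit_sum_space(x, y)
err = np.max(np.abs(f(x) - y))
```

A single Gaussian kernel would force one bandwidth on both components; the sum kernel keeps the linear-system solution while mixing scales, which is the "nonflat" advantage the abstract describes.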
Fan, Ming; Kuwahara, Hiroyuki; Wang, Xiaolei; Wang, Suojin; Gao, Xin
2015-11-01
Parameter estimation is a challenging computational problem in the reverse engineering of biological systems. Because advances in biotechnology have facilitated wide availability of time-series gene expression data, systematic parameter estimation of gene circuit models from such time-series mRNA data has become an important method for quantitatively dissecting the regulation of gene expression. By focusing on the modeling of gene circuits, we examine here the performance of three types of state-of-the-art parameter estimation methods: population-based methods, online methods and model-decomposition-based methods. Our results show that certain population-based methods are able to generate high-quality parameter solutions. The performance of these methods, however, is heavily dependent on the size of the parameter search space, and their computational requirements substantially increase as the size of the search space increases. In comparison, online methods and model decomposition-based methods are computationally faster alternatives and are less dependent on the size of the search space. Among other things, our results show that a hybrid approach that augments computationally fast methods with local search as a subsequent refinement procedure can substantially increase the quality of their parameter estimates to the level on par with the best solution obtained from the population-based methods while maintaining high computational speed. These suggest that such hybrid methods can be a promising alternative to the more commonly used population-based methods for parameter estimation of gene circuit models when limited prior knowledge about the underlying regulatory mechanisms makes the size of the parameter search space vastly large. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
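The hybrid strategy described above, a population-style global scan followed by local refinement of the best candidate, can be sketched as follows. This is a minimal illustration under assumed names (the `hybrid_fit` helper and the toy one-gene decay model are invented for the example; the paper benchmarks far richer gene circuit models):

```python
import numpy as np
from scipy.optimize import minimize

def hybrid_fit(objective, bounds, n_pop=50, rng=None):
    """Hybrid parameter estimation sketch: a cheap random-population
    scan of the bounded search space, then local (Nelder-Mead)
    refinement of the best population member."""
    rng = np.random.default_rng(rng)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(n_pop, len(lo)))          # global scan
    best = pop[np.argmin([objective(p) for p in pop])]
    res = minimize(objective, best, method="Nelder-Mead")     # local refinement
    return res.x

# Toy "gene circuit": fit amplitude a and decay rate k of y = a*exp(-k*t)
# to noiseless time-series data generated with a=2.0, k=0.7.
t = np.linspace(0.0, 5.0, 30)
y_obs = 2.0 * np.exp(-0.7 * t)
sse = lambda p: np.sum((p[0] * np.exp(-p[1] * t) - y_obs) ** 2)
a, k = hybrid_fit(sse, bounds=[(0.1, 5.0), (0.01, 2.0)], rng=1)
```

The division of labor matches the abstract's finding: the population stage handles the size of the search space, while the local stage supplies the final solution quality at low cost.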
On variations of space-heating energy use in office buildings
International Nuclear Information System (INIS)
Lin, Hung-Wen; Hong, Tianzhen
2013-01-01
Highlights: • Space heating is the largest energy end use in the U.S. building sector. • Key design and operational parameters have the most influence on space heating. • Simulated results were benchmarked against actual results to analyze discrepancies. • Yearly weather changes have a significant impact on space-heating energy use. • The findings enable stakeholders to make better decisions on energy efficiency. - Abstract: Space heating is the largest energy end use, consuming more than seven quintillion joules of site energy annually in the U.S. building sector. A few recent studies showed discrepancies in simulated space-heating energy use among different building energy modeling programs, and the simulated results are suspected of underpredicting reality. While various uncertainties are associated with building simulations, especially when simulations are performed by different modelers using different simulation programs for buildings with different configurations, it is crucial to identify and evaluate the key driving factors of space-heating energy use in order to support the design and operation of low-energy buildings. In this study, 10 design and operation parameters for space-heating systems of two prototypical office buildings in each of three U.S. heating climates are identified and evaluated, using building simulations with EnergyPlus, to determine the most influential parameters and their impacts on variations of space-heating energy use. The influence of annual weather change on space-heating energy is also investigated using 30-year actual weather data. The simulated space-heating energy use is further benchmarked against that of similar actual office buildings in two U.S. commercial-building databases to better understand the discrepancies between simulated and actual energy use. In summary, variations of both the simulated and actual space-heating energy use of office buildings in all three heating climates can be very large. However
Directory of Open Access Journals (Sweden)
Li Ke
2014-12-01
Full Text Available A large-scale high altitude environment simulation test cabin was developed to accurately control the temperatures and pressures encountered at high altitudes. The system was developed to provide dynamic slope-tracking control of the two parameters, temperature and pressure, and to overcome the control difficulties inherent in a large-inertia lag link within a complex control system composed of a turbine refrigeration device, a vacuum device and a liquid nitrogen cooling device. The system includes multi-parameter decoupling of the cabin itself to avoid damage to the air refrigeration turbine caused by improper operation. Based on analysis of the dynamic characteristics and modeling of the variations in temperature, pressure and rotation speed, an intelligent controller was implemented that combines decoupling and fuzzy arithmetic with an expert PID controller to control the test parameters through a decoupling and slope-tracking control strategy. The control system employed centralized management in an open industrial Ethernet architecture with an industrial computer at the core. The simulation and field-debugging results show that this method can solve the problems of the poor anti-interference performance typical of a conventional PID controller and of overshooting that can readily damage equipment. The steady-state characteristics meet the system requirements.
Large non-Gaussianity from two-component hybrid inflation
International Nuclear Information System (INIS)
Byrnes, Christian T.; Choi, Ki-Young; Hall, Lisa M.H.
2009-01-01
We study the generation of non-Gaussianity in models of hybrid inflation with two inflaton fields (2-brid inflation). We analyse the region in parameter and initial-condition space where a large non-Gaussianity may be generated during slow-roll inflation, which is generally characterised by a large f_NL, τ_NL and a small g_NL. For certain parameter values we can satisfy τ_NL >> f_NL^2. The bispectrum is of the local type but may have a significant scale dependence. We show that the loop corrections to the power spectrum and bispectrum are suppressed during inflation, if one assumes that the fields follow a classical background trajectory. We also include the effect of the waterfall field, which can lead to a significant change in the observables after the waterfall field is destabilised, depending on the couplings between the waterfall and inflaton fields.
Martynenko, S.; Rozumenko, V.; Tyrnov, O.; Manson, A.; Meek, C.
The large V/m electric fields inherent in the mesosphere play an essential role in lower ionospheric electrodynamics. They must be the cause of large variations in the electron temperature and the electron collision frequency at D region altitudes, and consequently the ionospheric plasma in the lower part of the D region undergoes a transition into a nonisothermal state. This study is based on the databases of large mesospheric electric fields collected with the 2.2-MHz radar of the Institute of Space and Atmospheric Studies, University of Saskatchewan, Canada (52°N geographic latitude, 60.4°N geomagnetic latitude) and with the 2.3-MHz radar of the Kharkiv V. Karazin National University (49.6°N geographic latitude, 45.6°N geomagnetic latitude). The statistical analysis of these data is presented in Meek, C. E., A. H. Manson, S. I. Martynenko, V. T. Rozumenko, O. F. Tyrnov, Remote sensing of mesospheric electric fields using MF radars, Journal of Atmospheric and Solar-Terrestrial Physics, in press. The large mesospheric electric fields are experimentally established to follow a Rayleigh distribution in the interval 0
GLAST, the Gamma-ray Large Area Space Telescope
De Angelis, A
2001-01-01
GLAST, a detector for cosmic gamma rays in the range from 20 MeV to 300 GeV, will be launched into space in 2005. Breakthroughs are expected in particular in the study of particle acceleration mechanisms in space and of gamma ray bursts, and perhaps in the search for cold dark matter; but of course the most exciting discoveries could come from the unexpected.
Black hole dynamics at large D
CERN. Geneva
2016-01-01
We demonstrate that the classical dynamics of black holes can be reformulated as a dynamical problem of a codimension one membrane moving in flat space. This membrane - roughly the black hole event horizon - carries a conserved charge current and stress tensor which source radiation. This `membrane paradigm' may be viewed as a simplification of the equations of general relativity at large D, and suggests the possibility of using 1/D as a useful expansion parameter in the analysis of complicated four dimensional solutions of general relativity, for instance the collision between two black holes.
Directory of Open Access Journals (Sweden)
Zhang Guowei
2014-01-01
Full Text Available Based on a full-scale bookcase fire experiment, a fire development model is proposed for the whole process of localized fires in large-space buildings. We found that for localized fires in large-space buildings full of wooden combustible materials, the fire growth phase can be simplified into a t² fire with a 0.0346 kW/s² fire growth coefficient. FDS technology is applied to study the smoke temperature curve for a 2 MW to 25 MW fire occurring within a large space with a height of 6 m to 12 m and a building area of 1 500 m² to 10 000 m², based on the proposed fire development model. Through the analysis of smoke temperature in various fire scenarios, a new approach is proposed to predict the smoke temperature curve. Meanwhile, a modified model of steel temperature development in a localized fire is built. In the modified model, the localized fire source is treated as a point fire source to evaluate the net heat flux from the flame to the steel. The steel temperature curve over the whole process of a localized fire can thus be accurately predicted. These conclusions could provide a valuable reference for fire simulation, hazard assessment, and fire protection design.
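The t² design fire used above is a one-line formula, Q(t) = αt² with α the growth coefficient; a small sketch (the function names are invented for illustration) shows how the abstract's 0.0346 kW/s² coefficient translates into growth times:

```python
def t_squared_hrr(t, alpha=0.0346):
    """Heat release rate (kW) of a t-squared design fire, Q = alpha * t^2,
    with alpha in kW/s^2 (default: the bookcase-fire coefficient above)."""
    return alpha * t * t

def time_to_hrr(q_kw, alpha=0.0346):
    """Time (s) for the growing fire to reach a target heat release rate,
    by inverting Q = alpha * t^2."""
    return (q_kw / alpha) ** 0.5

# Reaching the abstract's smallest studied fire, 2 MW = 2000 kW:
print(round(time_to_hrr(2000)))  # → 240 (about four minutes of growth)
```

For comparison, α = 0.0346 kW/s² sits between the standard "medium" (0.0117) and "fast" (0.0469) t² growth categories used in fire engineering, which is plausible for shelving filled with wooden combustibles.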
A new approach to reduce uncertainties in space radiation cancer risk predictions.
Directory of Open Access Journals (Sweden)
Francis A Cucinotta
Full Text Available The prediction of space radiation induced cancer risk carries large uncertainties, with two of the largest being radiation quality and dose-rate effects. In risk models, the ratio of the quality factor (QF) to the dose and dose-rate reduction effectiveness factor (DDREF) parameter is used to scale organ doses for cosmic ray protons and high charge and energy (HZE) particles to a hazard rate for γ-rays derived from human epidemiology data. In previous work, particle track structure concepts were used to formulate a space radiation QF function that is dependent on particle charge number Z and kinetic energy per atomic mass unit, E. QF uncertainties were represented by subjective probability distribution functions (PDFs) for the three QF parameters that described its maximum value and shape parameters for the Z and E dependences. Here I report on an analysis of a maximum QF parameter and its uncertainty using mouse tumor induction data. Because experimental data for risks at low doses of γ-rays are highly uncertain, which impacts estimates of maximum values of relative biological effectiveness (RBEmax), I developed an alternate QF model, denoted QFγAcute, where QFs are defined relative to higher acute γ-ray doses (0.5 to 3 Gy). The alternate model reduces the dependence of risk projections on the DDREF; however, a DDREF is still needed for risk estimates for high-energy protons and other primary or secondary sparsely ionizing space radiation components. Risk projections (upper confidence levels (CLs)) for space missions show a reduction of about 40% (CL ∼50%) using the QFγAcute model compared to the QFs based on RBEmax, and of about 25% (CL ∼35%) compared to previous estimates. In addition, I discuss how a possible qualitative difference leading to increased tumor lethality for HZE particles compared to low-LET radiation and background tumors remains a large uncertainty in risk estimates.
Development of a large scale Chimera grid system for the Space Shuttle Launch Vehicle
Pearce, Daniel G.; Stanley, Scott A.; Martin, Fred W., Jr.; Gomez, Ray J.; Le Beau, Gerald J.; Buning, Pieter G.; Chan, William M.; Chiu, Ing-Tsau; Wulf, Armin; Akdag, Vedat
1993-01-01
The application of CFD techniques to large problems has dictated the need for large team efforts. This paper offers an opportunity to examine the motivations, goals, needs, and problems, as well as the methods, tools, and constraints, that defined NASA's development of a 111-grid, 16-million-point grid system model for the Space Shuttle Launch Vehicle. The Chimera approach used for domain decomposition encouraged separation of the complex geometry into several major components, each of which was modeled by an autonomous team. ICEM-CFD, a CAD-based grid generation package, simplified the geometry and grid topology definition by providing mature CAD tools and patch-independent meshing. The resulting grid system has, on average, a four inch resolution along the surface.
Yin, Lucy; Andrews, Jennifer; Heaton, Thomas
2018-05-01
Earthquake parameter estimation using nearest neighbor searching among a large database of observations can lead to reliable prediction results. However, in the real-time application of Earthquake Early Warning (EEW) systems, accurate prediction using a large database is penalized by a significant delay in processing time. We propose to use a multidimensional binary search tree (KD tree) data structure to organize large seismic databases to reduce the processing time of the nearest neighbor search for predictions. We evaluated the performance of the KD tree on the Gutenberg Algorithm, a database-searching algorithm for EEW. We constructed an offline test to predict peak ground motions using a database with feature sets of waveform filter-bank characteristics, and compared the results with the observed seismic parameters. We concluded that a large database provides more accurate predictions of ground motion information, such as peak ground acceleration, velocity, and displacement (PGA, PGV, PGD), than of source parameters, such as hypocenter distance. Application of the KD-tree search to organize the database reduced the average search time by 85% relative to the exhaustive method, making the method feasible for real-time implementation. The algorithm is straightforward, and the results will reduce the overall time of warning delivery for EEW.
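The speedup claimed above comes from replacing an exhaustive O(N) distance scan with a KD-tree query. A minimal sketch (the 9-dimensional random vectors below merely stand in for filter-bank feature sets; the dimensions and sizes are assumptions, not the study's actual data):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
db = rng.random((10000, 9))        # stand-in for filter-bank feature vectors
query = rng.random(9)              # features of an incoming waveform

# Exhaustive nearest neighbor: one distance per database entry, O(N).
brute = int(np.argmin(np.linalg.norm(db - query, axis=1)))

# KD tree: built once offline; each query then needs far fewer
# distance evaluations (typically O(log N) in low dimensions).
tree = cKDTree(db)
_, idx = tree.query(query)

print(brute == idx)  # → True: the tree returns the exact same neighbor
```

Both searches are exact, so accuracy is unchanged; only the per-query cost drops, which is what makes the database approach viable under EEW latency budgets.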
Design and Initial Tests of the Tracker-Converter of the Gamma-ray Large Area Space Telescope
Energy Technology Data Exchange (ETDEWEB)
Atwood, W.B.; Bagagli, R.; Baldini, L.; Bellazzini, R.; Barbiellini, G.; Belli, F.; Borden, T.; Brez, A.; Brigida, M.; Caliandro, G.A.; Cecchi, C.; Cohen-Tanugi, J.; De; Drell, P.; Favuzzi, C.; Fukazawa, Y.; Fusco, P.; Gargano, F.; Germani, S.; Giannitrapani, R.; Giglietto, N.; /UC, Santa Cruz /INFN, Pisa /Pisa U. /INFN, Trieste /INFN,
2007-04-16
The Tracker subsystem of the Large Area Telescope (LAT) science instrument of the Gamma-ray Large Area Space Telescope (GLAST) mission has been completed and tested. It is the central detector subsystem of the LAT and serves both to convert an incident gamma-ray into an electron-positron pair and to track the pair in order to measure the gamma-ray direction. It also provides the principal trigger for the LAT. The Tracker uses silicon strip detectors, read out by custom electronics, to detect charged particles. The detectors and electronics are packaged, along with tungsten converter foils, in 16 modular, high-precision carbon-composite structures. It is the largest silicon-strip detector system ever built for launch into space, and its aggressive design emphasizes very low power consumption, passive cooling, low noise, high efficiency, minimal dead area, and a structure that is highly transparent to charged particles. The test program has demonstrated that the system meets or surpasses all of its performance specifications as well as environmental requirements. It is now installed in the completed LAT, which is being prepared for launch in early 2008.
International Nuclear Information System (INIS)
Reid, Beth A.; Spergel, David N.; Bode, Paul
2009-01-01
The nontrivial relationship between observations of galaxy positions in redshift space and the underlying matter field complicates our ability to determine the linear theory power spectrum and extract cosmological information from galaxy surveys. The Sloan Digital Sky Survey (SDSS) luminous red galaxy (LRG) catalog has the potential to place powerful constraints on cosmological parameters. LRGs are bright, highly biased tracers of large-scale structure. However, because they are highly biased, the nonlinear contribution of satellite galaxies to the galaxy power spectrum is large and fingers-of-God (FOGs) are significant. The combination of these effects leads to a ∼10% correction in the underlying power spectrum at k = 0.1 h Mpc^-1 and a ∼40% correction at k = 0.2 h Mpc^-1 in the LRG P(k) analysis of Tegmark et al., thereby compromising the cosmological constraints when this potentially large correction is left as a free parameter. We propose an alternative approach to recovering the matter field from galaxy observations. Our approach is to use halos rather than galaxies to trace the underlying mass distribution. We identify FOGs and replace each FOG with a single halo object. This removes the nonlinear contribution of satellite galaxies, the one-halo term. We test our method on a large set of high-fidelity mock SDSS LRG catalogs and find that the power spectrum of the reconstructed halo density field deviates from the underlying matter power spectrum at the ≤1% level for k ≤ 0.1 h Mpc^-1 and ≤4% at k = 0.2 h Mpc^-1. The reconstructed halo density field also removes the bias in the measurement of the redshift space distortion parameter β induced by the FOG smearing of the linear redshift space distortions.
Assumptions of the primordial spectrum and cosmological parameter estimation
International Nuclear Information System (INIS)
Shafieloo, Arman; Souradeep, Tarun
2011-01-01
The observables of the perturbed universe, cosmic microwave background (CMB) anisotropy and large-scale structure, depend on a set of cosmological parameters, as well as on the assumed nature of primordial perturbations. In particular, the shape of the primordial power spectrum (PPS) is, at best, a well-motivated assumption. It is known that the assumed functional form of the PPS in cosmological parameter estimation can affect the best-fit parameters and their relative confidence limits. In this paper, we demonstrate that a specific assumed form actually drives the best-fit parameters into distinct basins of likelihood in the space of cosmological parameters, where the likelihood resists improvement via modifications to the PPS. The regions where considerably better likelihoods are obtained when allowing a free-form PPS lie outside these basins. In the absence of a preferred model of inflation, this raises the concern that current cosmological parameter estimates are strongly prejudiced by the assumed form of the PPS. Our results strongly motivate approaches toward simultaneous estimation of the cosmological parameters and the shape of the primordial spectrum from upcoming cosmological data. It is equally important for theorists to keep an open mind towards early universe scenarios that produce features in the PPS. (paper)
Concept for an International Standard related to Space Weather Effects on Space Systems
Tobiska, W. Kent; Tomky, Alyssa
There is great interest in developing an international standard related to space weather in order to specify the tools and parameters needed for space systems operations. In particular, a standard is important for satellite operators who may not be familiar with space weather. In addition, there are others who participate in space systems operations who would also benefit from such a document, for example, the developers of software systems that provide LEO satellite orbit determination, radio communication availability for scintillation events (GEO-to-ground L and UHF bands), GPS uncertainties, and the radiation environment from ground to space for commercial space tourism. These groups require recent historical data, current-epoch specification, and forecasts of space weather events in their automated or manual systems. Other examples are national government agencies that rely on space weather data provided by their organizations, such as those represented in the International Space Environment Service (ISES) group of 14 national agencies. Designers, manufacturers, and launchers of space systems require real-time, operational space weather parameters that can be measured, monitored, or built into automated systems. Thus, a broad scope for the document will provide a useful international standard product to a variety of engineering and science domains. The structure of the document should contain a well-defined scope, consensus space weather terms and definitions, and internationally accepted descriptions of the main elements of space weather, its sources, and its effects upon space systems. Appendices will be useful for describing expanded material, such as guidelines on how to use the standard, how to obtain specific space weather parameters, and short but detailed descriptions such as when best to use some parameters and not others; appendices provide a path for easily updating the standard, since the domain of space weather is rapidly changing with new advances.
Tsutagawa, Michael H.; Michael, Sherif
2009-01-01
This paper presents the design parameters for a triple junction InGaP/GaAs/Ge space solar cell with a simulated maximum efficiency of 36.28% using Silvaco ATLAS Virtual Wafer Fabrication tool. Design parameters include the layer material, doping concentration, and thicknesses.
Cryogenic techniques for large superconducting magnets in space
Green, M. A.
1989-01-01
A large superconducting magnet is proposed for use in a particle astrophysics experiment, ASTROMAG, which is to be mounted on the United States Space Station. This experiment will have a two-coil superconducting magnet with coils 1.3 to 1.7 meters in diameter. The two-coil magnet will have zero net magnetic dipole moment. The field 15 meters from the magnet will approach the earth's field in low earth orbit. The issue of high-Tc superconductors is discussed in the paper, and the reasons for using conventional niobium-titanium superconductor cooled with superfluid helium are presented. Since the purpose of the magnet is to do particle astrophysics, the superconducting coils must be located close to the charged particle detectors. The trade-off between the particle physics possible and the cryogenic insulation around the coils is discussed. As a result, the ASTROMAG magnet coils will be operated outside of the superfluid helium storage tank. The fountain-effect pumping system which will be used to cool the coil is described. Two methods for extending the operating life of the superfluid helium dewar are discussed: operation with a third shield cooled to 90 K by a Stirling-cycle cryocooler, and a hybrid cryogenic system with three hydrogen-cooled shields and cryostat support heat-intercept points.
Examining a Thermodynamic Order Parameter of Protein Folding.
Chong, Song-Ho; Ham, Sihyun
2018-05-08
Dimensionality reduction with a suitable choice of order parameters or reaction coordinates is commonly used for analyzing high-dimensional time-series data generated by atomistic biomolecular simulations. So far, geometric order parameters, such as the root mean square deviation, the fraction of native amino acid contacts, and collective coordinates that best characterize rare or large conformational transitions, have prevailed in protein folding studies. Here, we show that the solvent-averaged effective energy, which is a thermodynamic quantity but is unambiguously defined for individual protein conformations, serves as a good order parameter of protein folding. This is illustrated through application to the folding-unfolding simulation trajectory of the villin headpiece subdomain. We rationalize the suitability of the effective energy as an order parameter by the funneledness of the underlying protein free energy landscape. We also demonstrate that an improved conformational space discretization is achieved by incorporating the effective energy. The most distinctive feature of this thermodynamic order parameter is that it points to near-native folded structures even when knowledge of the native structure is lacking, and the use of the effective energy will also find applications in combination with methods of protein structure prediction.
On line surveillance of large systems: applications to nuclear and chemical plant
International Nuclear Information System (INIS)
Zwingelstein, G.
1978-01-01
An on-line surveillance method for large-scale and distributed-parameter systems is achieved by comparing the internal physical parameter values to reference values in real time. It is shown that the following steps are necessary: modeling, model validation using dynamic testing, and on-line estimation of parameters. For large-scale systems where only a few outputs are measurable, an estimation algorithm was developed that selects the measurable output giving the minimum variance of the physical parameters. This estimation scheme uses a quasilinearization technique associated with the sensitivity equations, together with recursive least squares. For large-scale systems of order greater than 100, two versions of the estimation scheme are proposed to decrease the computation time. An application to a nuclear reactor core (a state-variable model of order 29) is presented and uses real data. For distributed systems, the estimation scheme was developed with measurements either at fixed times or at fixed locations. The estimation algorithm selects the set of measurements that gives the minimum variance of the estimates. An application to a liquid-liquid extraction column, modeled by a set of four coupled partial differential equations, demonstrates the efficiency of the method.
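The recursive least squares technique mentioned above updates parameter estimates one measurement at a time, which is what makes on-line surveillance feasible. A minimal sketch of the standard RLS update (the toy two-parameter static model below is an invented illustration, not the reactor or extraction-column models of the paper):

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=1.0):
    """One recursive-least-squares update.

    theta : current parameter estimates
    P     : current estimate covariance
    phi   : regressor vector for the new measurement
    y     : new scalar measurement
    lam   : forgetting factor (1.0 = no forgetting)
    """
    k = P @ phi / (lam + phi @ P @ phi)       # gain vector
    theta = theta + k * (y - phi @ theta)     # correct by the innovation
    P = (P - np.outer(k, phi @ P)) / lam      # shrink the covariance
    return theta, P

# Identify a static model y = 1.5*x1 - 0.4*x2 from streaming samples.
rng = np.random.default_rng(0)
theta = np.zeros(2)
P = 1e3 * np.eye(2)                            # large P: uninformative prior
for _ in range(200):
    phi = rng.normal(size=2)
    y = phi @ np.array([1.5, -0.4])
    theta, P = rls_step(theta, P, phi, y)
print(np.round(theta, 3))                      # converges to [1.5, -0.4]
```

Surveillance then consists of comparing the running `theta` against its reference values; a drift beyond the variance implied by `P` flags a physical change in the plant.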
Equilibrium phase-space distributions and space charge limits in linacs
International Nuclear Information System (INIS)
Lysenko, W.P.
1977-10-01
Limits on beam current and emittance in proton and heavy-ion linear accelerators resulting from space charge forces are calculated. The method involves determining equilibrium distributions in phase space using a continuous-focusing, no-acceleration model in two degrees of freedom, using the coordinates r and z. A nonlinear Poisson equation must be solved numerically. This procedure is a matching between the longitudinal and transverse directions to minimize the effect of longitudinal-transverse coupling, which is believed to be the main problem in emittance growth due to space charge in linacs. Limits on the Clinton P. Anderson Meson Physics Facility (LAMPF) accelerator performance are calculated as an example. The beam physics is described by a few space charge parameters so that accelerators with different physical parameters can be compared in a natural way. The main result of this parameter study is that the requirement of a high-intensity beam is best fulfilled with a low-frequency accelerator, whereas the requirement of a high-brightness beam is best fulfilled with a high-frequency accelerator.
Buncher system parameter optimization
International Nuclear Information System (INIS)
Wadlinger, E.A.
1981-01-01
A least-squares algorithm is presented to calculate the RF amplitudes and cavity spacings for a series of buncher cavities, each resonating at a frequency that is a multiple of a fundamental frequency of interest. The longitudinal phase-space distribution, obtained by particle tracing through the bunching system, is compared to a desired distribution function of energy and phase. The buncher cavity parameters are adjusted to minimize the difference between these two distributions. Examples are given for zero space charge. The manner in which the method can be extended to include space charge, using the 3-D space-charge calculation procedure, is indicated.
Harnessing solar pressure to slew and point large infrared space telescopes
Errico, Simona; Angel, Roger P.; Calvert, Paul D.; Woof, Neville
2003-03-01
Large astronomical Gossamer telescopes in space will need to employ large solar shields to safeguard the optics from solar radiation. These types of telescopes demand accurate controls to maintain telescope pointing over long integration periods. We propose an active solar shield system that harnesses radiation pressure to accurately slew and acquire new targets without the need for reaction wheels or thrusters. To provide the required torques, the solar shield is configured as an inverted, 4-sided pyramidal roof. The sloped roof interior surfaces are covered with hinged “tiles” made from piezoelectric film bimorphs with specular metallized surfaces. Nominally, the tiles lie flat against the roof and the sunlight is reflected outward equally from all sloped surfaces. However, when the tiles on one roof pitch are raised, the pressure balance is upset and the sunshade is pushed to one side. By judicious selection of the tiles and control of their lift angle, the solar pressure can be harvested to stabilize the spacecraft orientation or to change its angular momentum. A first order conceptual design performance analysis and the results from the experimental design, fabrication and testing of piezoelectric bimorph hinge elements will be presented. Next phase challenges in engineering design, materials technology, and systems testing will be discussed.
A new Bayesian recursive technique for parameter estimation
Kaheil, Yasir H.; Gill, M. Kashif; McKee, Mac; Bastidas, Luis
2006-08-01
The performance of any model depends on how well its associated parameters are estimated. In the current application, a localized Bayesian recursive estimation (LOBARE) approach is devised for parameter estimation. The LOBARE methodology is an extension of the Bayesian recursive estimation (BARE) method. It is applied in this paper on two different types of models: an artificial intelligence (AI) model in the form of a support vector machine (SVM) application for forecasting soil moisture and a conceptual rainfall-runoff (CRR) model represented by the Sacramento soil moisture accounting (SAC-SMA) model. Support vector machines, based on statistical learning theory (SLT), represent the modeling task as a quadratic optimization problem and have already been used in various applications in hydrology. They require estimation of three parameters. SAC-SMA is a very well known model that estimates runoff. It has a 13-dimensional parameter space. In the LOBARE approach presented here, Bayesian inference is used in an iterative fashion to estimate the parameter space that will most likely enclose a best parameter set. This is done by narrowing the sampling space through updating the "parent" bounds based on their fitness. These bounds are actually the parameter sets that were selected by BARE runs on subspaces of the initial parameter space. The new approach results in faster convergence toward the optimal parameter set using minimum training/calibration data and fewer sets of parameter values. The efficacy of the localized methodology is also compared with the previously used BARE algorithm.
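The bound-narrowing idea described above can be sketched as follows. This is a simplified illustration of iteratively shrinking parameter bounds around high-fitness samples, not the authors' BARE/LOBARE implementation: the uniform sampling, keep fraction, and toy objective are all assumptions.

```python
import numpy as np

def shrink_bounds(objective, lower, upper, n_samples=200, keep_frac=0.2,
                  n_iters=8, seed=0):
    """Iteratively narrow a parameter-space box around high-fitness samples.

    At each iteration, sample parameter sets uniformly inside the current
    bounds, score them, and reset the bounds to the envelope of the
    best-scoring fraction (lower objective = better fit here).
    """
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    for _ in range(n_iters):
        samples = rng.uniform(lower, upper, size=(n_samples, lower.size))
        scores = np.array([objective(s) for s in samples])
        best = samples[np.argsort(scores)[:int(keep_frac * n_samples)]]
        lower, upper = best.min(axis=0), best.max(axis=0)
    return lower, upper

# Toy calibration problem: squared error to a known "true" parameter set.
true_params = np.array([0.3, -1.2, 2.0])
obj = lambda p: float(np.sum((p - true_params) ** 2))
lo, hi = shrink_bounds(obj, [-5.0, -5.0, -5.0], [5.0, 5.0, 5.0])
print(np.round(lo, 2), np.round(hi, 2))   # a narrow box near true_params
```

The "parent" bounds in LOBARE play the role of `lower`/`upper` here; the key property is that each iteration concentrates the sampling effort on the region most likely to enclose the best parameter set, so fewer model evaluations are wasted.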
Online State Space Model Parameter Estimation in Synchronous Machines
Directory of Open Access Journals (Sweden)
Z. Gallehdari
2014-06-01
The suggested approach is evaluated for a sample synchronous machine model. Estimated parameters are tested for different inputs at different operating conditions. The effect of noise is also considered in this study. Simulation results show that the proposed approach provides good accuracy for parameter estimation.
Space and energy. [space systems for energy generation, distribution and control
Bekey, I.
1976-01-01
Potential contributions of space to energy-related activities are discussed. Advanced concepts presented include worldwide energy distribution to substation-sized users using low-altitude space reflectors; powering large numbers of large aircraft worldwide using laser beams reflected from space mirror complexes; providing night illumination via sunlight-reflecting space mirrors; fine-scale power programming and monitoring in transmission networks by monitoring millions of network points from space; prevention of undetected hijacking of nuclear reactor fuels by space tracking of signals from tagging transmitters on all such materials; and disposal of nuclear power plant radioactive wastes in space.
DRAGON solutions to the 3D transport benchmark over a range in parameter space
International Nuclear Information System (INIS)
Martin, Nicolas; Hebert, Alain; Marleau, Guy
2010-01-01
DRAGON solutions to the 'NEA suite of benchmarks for 3D transport methods and codes over a range in parameter space' are discussed in this paper. A description of the benchmark is first provided, followed by a detailed review of the different computational models used in the lattice code DRAGON. Two numerical methods were selected for generating the required quantities for the 729 configurations of this benchmark. First, SN calculations were performed using fully symmetric angular quadratures and high-order diamond differencing for spatial discretization. To compare SN results with those of another deterministic method, the method of characteristics (MoC) was also considered for this benchmark. Comparisons between reference solutions, SN, and MoC results illustrate the advantages and drawbacks of each method for this 3-D transport problem.
The Large Area Telescope on the Fermi Gamma-ray Space Telescope Mission
Energy Technology Data Exchange (ETDEWEB)
Atwood, W. B.; Abdo, A. A.; Ackermann, M.; Anderson, B.; Axelsson, M.; Baldini, L.; Ballet, J.; Band, D. L.; Barbiellini, G.; Bartelt, J.; Bastieri, D.; Baughman, B. M.; Bechtol, K.; Bederede, D.; Bellardi, F.; Bellazzini, R.; Berenji, B.; Bignami, G. F.; Bisello, D.; Bissaldi, E.; Blandford, R. D.; et al.
2009-05-15
The Large Area Telescope (Fermi/LAT, hereafter LAT), the primary instrument on the Fermi Gamma-ray Space Telescope (Fermi) mission, is an imaging, wide field-of-view (FoV), high-energy γ-ray telescope, covering the energy range from below 20 MeV to more than 300 GeV. The LAT was built by an international collaboration with contributions from space agencies, high-energy particle physics institutes, and universities in France, Italy, Japan, Sweden, and the United States. This paper describes the LAT, its preflight expected performance, and summarizes the key science objectives that will be addressed. On-orbit performance will be presented in detail in a subsequent paper. The LAT is a pair-conversion telescope with a precision tracker and calorimeter, each consisting of a 4 x 4 array of 16 modules, a segmented anticoincidence detector that covers the tracker array, and a programmable trigger and data acquisition system. Each tracker module has a vertical stack of 18 (x, y) tracking planes, including two layers (x and y) of single-sided silicon strip detectors and high-Z converter material (tungsten) per tray. Every calorimeter module has 96 CsI(Tl) crystals, arranged in an eight-layer hodoscopic configuration with a total depth of 8.6 radiation lengths, giving both longitudinal and transverse information about the energy deposition pattern. The calorimeter's depth and segmentation enable the high-energy reach of the LAT and contribute significantly to background rejection. The aspect ratio of the tracker (height/width) is 0.4, allowing a large FoV (2.4 sr) and ensuring that most pair-conversion showers initiated in the tracker will pass into the calorimeter for energy measurement. Data obtained with the LAT are intended to (1) permit rapid notification of high-energy γ-ray bursts and transients and facilitate monitoring of variable sources, (2) yield an extensive catalog of several thousand high-energy sources obtained from an all-sky survey, (3
Exploring cosmic origins with CORE: Cosmological parameters
Di Valentino, E.; Brinckmann, T.; Gerbino, M.; Poulin, V.; Bouchet, F. R.; Lesgourgues, J.; Melchiorri, A.; Chluba, J.; Clesse, S.; Delabrouille, J.; Dvorkin, C.; Forastieri, F.; Galli, S.; Hooper, D. C.; Lattanzi, M.; Martins, C. J. A. P.; Salvati, L.; Cabass, G.; Caputo, A.; Giusarma, E.; Hivon, E.; Natoli, P.; Pagano, L.; Paradiso, S.; Rubiño-Martin, J. A.; Achúcarro, A.; Ade, P.; Allison, R.; Arroja, F.; Ashdown, M.; Ballardini, M.; Banday, A. J.; Banerji, R.; Bartolo, N.; Bartlett, J. G.; Basak, S.; Baumann, D.; de Bernardis, P.; Bersanelli, M.; Bonaldi, A.; Bonato, M.; Borrill, J.; Boulanger, F.; Bucher, M.; Burigana, C.; Buzzelli, A.; Cai, Z.-Y.; Calvo, M.; Carvalho, C. S.; Castellano, G.; Challinor, A.; Charles, I.; Colantoni, I.; Coppolecchia, A.; Crook, M.; D'Alessandro, G.; De Petris, M.; De Zotti, G.; Diego, J. M.; Errard, J.; Feeney, S.; Fernandez-Cobos, R.; Ferraro, S.; Finelli, F.; de Gasperis, G.; Génova-Santos, R. T.; González-Nuevo, J.; Grandis, S.; Greenslade, J.; Hagstotz, S.; Hanany, S.; Handley, W.; Hazra, D. K.; Hernández-Monteagudo, C.; Hervias-Caimapo, C.; Hills, M.; Kiiveri, K.; Kisner, T.; Kitching, T.; Kunz, M.; Kurki-Suonio, H.; Lamagna, L.; Lasenby, A.; Lewis, A.; Liguori, M.; Lindholm, V.; Lopez-Caniego, M.; Luzzi, G.; Maffei, B.; Martin, S.; Martinez-Gonzalez, E.; Masi, S.; Matarrese, S.; McCarthy, D.; Melin, J.-B.; Mohr, J. J.; Molinari, D.; Monfardini, A.; Negrello, M.; Notari, A.; Paiella, A.; Paoletti, D.; Patanchon, G.; Piacentini, F.; Piat, M.; Pisano, G.; Polastri, L.; Polenta, G.; Pollo, A.; Quartin, M.; Remazeilles, M.; Roman, M.; Ringeval, C.; Tartari, A.; Tomasi, M.; Tramonte, D.; Trappe, N.; Trombetti, T.; Tucker, C.; Väliviita, J.; van de Weygaert, R.; Van Tent, B.; Vennin, V.; Vermeulen, G.; Vielva, P.; Vittorio, N.; Young, K.; Zannoni, M.
2018-04-01
We forecast the main cosmological parameter constraints achievable with the CORE space mission which is dedicated to mapping the polarisation of the Cosmic Microwave Background (CMB). CORE was recently submitted in response to ESA's fifth call for medium-sized mission proposals (M5). Here we report the results from our pre-submission study of the impact of various instrumental options, in particular the telescope size and sensitivity level, and review the great, transformative potential of the mission as proposed. Specifically, we assess the impact on a broad range of fundamental parameters of our Universe as a function of the expected CMB characteristics, with other papers in the series focusing on controlling astrophysical and instrumental residual systematics. In this paper, we assume that only a few central CORE frequency channels are usable for our purpose, all others being devoted to the cleaning of astrophysical contaminants. On the theoretical side, we assume ΛCDM as our general framework and quantify the improvement provided by CORE over the current constraints from the Planck 2015 release. We also study the joint sensitivity of CORE and of future Baryon Acoustic Oscillation and Large Scale Structure experiments like DESI and Euclid. Specific constraints on the physics of inflation are presented in another paper of the series. In addition to the six parameters of the base ΛCDM, which describe the matter content of a spatially flat universe with adiabatic and scalar primordial fluctuations from inflation, we derive the precision achievable on parameters like those describing curvature, neutrino physics, extra light relics, primordial helium abundance, dark matter annihilation, recombination physics, variation of fundamental constants, dark energy, modified gravity, reionization and cosmic birefringence. In addition to assessing the improvement on the precision of individual parameters, we also forecast the post-CORE overall reduction of the allowed
Ravindranath, Swara; Ho, Luis C.; Peng, Chien Y.; Filippenko, Alexei V.; Sargent, Wallace L. W.
2001-08-01
We present surface photometry for the central regions of a sample of 33 early-type (E, S0, and S0/a) galaxies observed at 1.6 μm (H band) using the Hubble Space Telescope. Dust absorption has less of an impact on the galaxy morphologies in the near-infrared than found in previous work based on observations at optical wavelengths. When present, dust seems to be most commonly associated with optical line emission. We employ a new technique of two-dimensional fitting to extract quantitative parameters for the bulge light distribution and nuclear point sources, taking into consideration the effects of the point-spread function. By parameterizing the bulge profile with a Nuker law, we confirm that the central surface brightness distributions largely fall into two categories, each of which correlates with the global properties of the galaxies. "Core" galaxies tend to be luminous elliptical galaxies with boxy or pure elliptical isophotes, whereas "power-law" galaxies are preferentially lower luminosity systems with disky isophotes. The infrared surface brightness profiles are very similar to those in the optical, with notable exceptions being very dusty objects. Similar to the study of Faber et al., based on optical data, we find that galaxy cores obey a set of fundamental plane relations wherein more luminous galaxies with higher central stellar velocity dispersions generally possess larger cores with lower surface brightnesses. Unlike most previous studies, however, we do not find a clear gap in the distribution of inner cusp slopes; several objects have inner cusp slopes (0.3 < γ < 0.5) intermediate between those of core and power-law galaxies. The nature of these intermediate objects is unclear. We draw attention to two objects in the sample that appear to be promising cases of galaxies with isothermal cores that are not the brightest members of a cluster. Unresolved nuclear point sources are found in ~50% of the sample galaxies, roughly independent of profile type, with nuclear H-band magnitudes in the range 12.8 to 17.4 mag.
International Nuclear Information System (INIS)
Kamiya, Y.; Katoh, M.; Honjo, I.
1987-01-01
A future ring with a low emittance and large circumference, specifically dedicated to a synchrotron light source, will have a large chromaticity, so it is important to employ a sophisticated sextupole correction, as well as a careful linear lattice design, to obtain a stable beam. The authors tried a method of sextupole correction for a lattice with a large chromaticity and a small dispersion function. In such a lattice, the sextupole magnets must be made strong to compensate the chromaticity, and their nonlinear effects then become more serious than their chromatic effects. Furthermore, a ring with strong quadrupole magnets to obtain a very small emittance, and strong sextupole magnets to compensate the generated chromaticity, will be very sensitive to magnetic errors. The authors also present simple formulae to evaluate the effects on the beam parameters. The details will appear in a KEK Report.
Phases of a stack of membranes in a large number of dimensions of configuration space
Borelli, M. E.; Kleinert, H.
2001-05-01
The phase diagram of a stack of tensionless membranes with nonlinear curvature energy and vertical harmonic interaction is calculated exactly in a large number of dimensions of configuration space. At low temperatures, the system forms a lamellar phase with spontaneously broken translational symmetry in the vertical direction. At a critical temperature, the stack disorders vertically in a melting-like transition. The critical temperature is determined as a function of the interlayer separation l.
Wong, Sun; Del Genio, Anthony; Wang, Tao; Kahn, Brian; Fetzer, Eric J.; L'Ecuyer, Tristan S.
2015-01-01
Goals: Water budget-related dynamical phase space; Connect large-scale dynamical conditions to atmospheric water budget (including precipitation); Connect atmospheric water budget to cloud type distributions.
A statistical survey of heat input parameters into the cusp thermosphere
Moen, J. I.; Skjaeveland, A.; Carlson, H. C.
2017-12-01
Based on three winters of observational data, we present those ionosphere parameters deemed most critical to realistic space weather ionosphere and thermosphere representation and prediction in regions impacted by variability in the cusp. The CHAMP spacecraft revealed large variability in cusp thermosphere densities, measuring frequent satellite drag enhancements of up to a factor of two. The community recognizes a clear need for more realistic representation of plasma flows and electron densities near the cusp. Existing average-value models produce order-of-magnitude errors in these parameters, resulting in large underestimations of predicted drag. We fill this knowledge gap with statistics-based specification of these key parameters over their range of observed values. The EISCAT Svalbard Radar (ESR) tracks plasma flow Vi, electron density Ne, and electron and ion temperatures Te and Ti, with consecutive 2-3 minute windshield-wiper scans of 1000 x 500 km areas. This allows mapping the maximum Ti of a large area within or near the cusp with high temporal resolution. In magnetic field-aligned mode the radar can measure high-resolution profiles of these plasma parameters. By deriving statistics for Ne and Ti, we enable derivation of thermosphere heating deposition under background and frictional-drag-dominated magnetic reconnection conditions. We separate our Ne and Ti profiles into quiescent and enhanced states, which are not closely correlated due to the spatial structure of the reconnection foot point. Use of our data-based parameter inputs can make order-of-magnitude corrections to the input data driving thermosphere models, enabling removal of the previous twofold drag errors.
Cryogenic techniques for large superconducting magnets in space
International Nuclear Information System (INIS)
Green, M.A.
1988-12-01
A large superconducting magnet is proposed for use in a particle astrophysics experiment, ASTROMAG, which is to be mounted on the United States Space Station. This experiment will have a two-coil superconducting magnet with coils 1.3 to 1.7 meters in diameter. The two-coil magnet will have zero net magnetic dipole moment; the field 15 meters from the magnet will approach the Earth's field in low earth orbit. The issue of high-Tc superconductors is discussed in the paper, and the reasons for using conventional niobium-titanium superconductor cooled with superfluid helium are presented. Since the purpose of the magnet is to do particle astrophysics, the superconducting coils must be located close to the charged particle detectors. The trade-off between the particle physics possible and the cryogenic insulation around the coils is discussed. As a result, the ASTROMAG magnet coils will be operated outside of the superfluid helium storage tank. The fountain-effect pumping system which will be used to cool the coils is described in the report. Two methods for extending the operating life of the superfluid helium dewar are discussed: operation with a third shield cooled to 90 K by a Stirling-cycle cryocooler, and a hybrid cryogenic system with three hydrogen-cooled shields and cryostat support heat intercept points. Both of these methods will extend the ASTROMAG cryogenic operating life from 2 years to almost 4 years. 14 refs., 8 figs., 4 tabs.
Study of heat treatment parameters for large-scale hydraulic steel gate track
Directory of Open Access Journals (Sweden)
Ping-zhou Cao
2013-10-01
In order to enhance external hardness and strength, a large-scale hydraulic gate track must undergo heat treatment. The current design method for hydraulic gate wheels and tracks is based on Hertz contact linear elastic theory and does not take into account the changes in the mechanical properties of materials caused by heat treatment. In this study, the heat treatment parameters were designed and analyzed according to the bearing mechanisms of the wheel and track. The quenching process of the track was simulated with the ANSYS program, and the temperature variation, residual stress, and deformation were obtained and analyzed. The metallurgical structure field after heat treatment was predicted by a method based on time-temperature-transformation (TTT) curves. The results show that the analysis method and the designed track heat treatment process are feasible and can provide a reference for practical projects.
ANALYSIS OF RADAR AND OPTICAL SPACE BORNE DATA FOR LARGE SCALE TOPOGRAPHICAL MAPPING
Directory of Open Access Journals (Sweden)
W. Tampubolon
2015-03-01
In order to provide high-resolution three-dimensional (3D) geospatial data, large-scale topographical mapping normally needs input from conventional airborne campaigns, which in Indonesia are bureaucratically complicated, especially during legal administration procedures such as security clearance from the military/defense ministry. This often causes additional delays on top of technical constraints such as weather and limited aircraft availability for airborne campaigns. Geospatial data quality is, of course, an important issue for many applications, and the increasing demand for geospatial data requires high-resolution datasets with a sufficient level of accuracy. An integration of different technologies is therefore required in many cases to obtain the expected result, especially in the context of disaster preparedness and emergency response. Another important issue in this context is the fast delivery of relevant data, expressed by the term "Rapid Mapping". In this paper we present first results of on-going research to integrate different data sources such as space-borne radar and optical platforms. Initially, the orthorectification of Very High Resolution Satellite (VHRS) imagery, i.e. SPOT-6, has been carried out as a continuous process to DEM generation using TerraSAR-X/TanDEM-X data. Ground Control Points (GCPs) from GNSS surveys are mandatory in order to fulfil the geometrical accuracy requirements. In addition, this research aims at providing suitable processing algorithms for space-borne data for large-scale topographical mapping, as described in section 3.2. Recently, radar space-borne data have been used for medium-scale topographical mapping, e.g. for the 1:50,000 map scale in Indonesian territories. The goal of this on-going research is to increase the accuracy of remote sensing data through different activities, e.g. the integration of different data sources (optical and radar) or the usage of the GCPs in both the optical and the
Gamma Ray Large Area Space Telescope (GLAST) Balloon Flight Engineering Model: Overview
Thompson, D. J.; Godfrey, G.; Williams, S. M.; Grove, J. E.; Mizuno, T.; Sadrozinski, H. F.-W.; Kamae, T.; Ampe, J.; Briber, Stuart; Dann, James;
2001-01-01
The Gamma Ray Large Area Space Telescope (GLAST) Large Area Telescope (LAT) is a pair-production high-energy (greater than 20 MeV) gamma-ray telescope being built by an international partnership of astrophysicists and particle physicists for a satellite launch in 2006, designed to study a wide variety of high-energy astrophysical phenomena. As part of the development effort, the collaboration has built a Balloon Flight Engineering Model (BFEM) for flight on a high-altitude scientific balloon. The BFEM is approximately the size of one of the 16 GLAST-LAT towers and contains all the components of the full instrument: plastic scintillator anticoincidence system (ACD), high-Z foil/Si strip pair-conversion tracker (TKR), CsI hodoscopic calorimeter (CAL), triggering and data acquisition electronics (DAQ), commanding system, power distribution, telemetry, real-time data display, and ground data processing system. The principal goal of the balloon flight was to demonstrate the performance of this instrument configuration under conditions similar to those expected in orbit. Results from a balloon flight from Palestine, Texas, on August 4, 2001, show that the BFEM successfully obtained gamma-ray data in this high-background environment.
Constraining statistical-model parameters using fusion and spallation reactions
Directory of Open Access Journals (Sweden)
Charity Robert J.
2011-10-01
The de-excitation of compound nuclei has been successfully described for several decades by means of statistical models. However, such models involve a large number of free parameters and ingredients that are often underconstrained by experimental data. We show how the degeneracy of the model ingredients can be partially lifted by studying different entrance channels for de-excitation, which populate different regions of the parameter space of the compound nucleus. Fusion reactions, in particular, play an important role in this strategy because they fix three out of four of the compound-nucleus parameters (mass, charge, and total excitation energy). The present work focuses on fission and intermediate-mass-fragment emission cross sections. We prove how equivalent parameter sets for fusion-fission reactions can be resolved using another entrance channel, namely spallation reactions. Intermediate-mass-fragment emission can be constrained in a similar way. An interpretation of the best-fit IMF barriers in terms of the Wigner energies of the nascent fragments is discussed.
Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung
2016-01-01
Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured, or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the space-time conservation element solution element (CESE) numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework is assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.
International Nuclear Information System (INIS)
Hawley, J.T.; Chiu, C.; Todreas, N.E.; Rohsenow, W.M.
1980-01-01
Correlations are presented for subchannel and bundle friction factors and flowsplit parameters for laminar, transition and turbulent longitudinal flows in wire wrap spaced hexagonal arrays. These results are obtained from pressure drop models of flow in individual subchannels. For turbulent flow, an existing pressure drop model for flow in edge subchannels is extended, and the resulting edge subchannel friction factor is identified. Using the expressions for flowsplit parameters and the equal pressure drops assumption, the interior subchannel and bundle friction factors are obtained. For laminar flow, models are developed for pressure drops of individual subchannels. From these models, expressions for the subchannel friction factors are identified and expressions for the flowsplit parameters are derived
Energy Technology Data Exchange (ETDEWEB)
Hawley, J.T.; Chiu, C.; Rohsenow, W.M.; Todreas, N.E.
1980-08-01
Correlations are presented for subchannel and bundle friction factors and flowsplit parameters for laminar, transition and turbulent longitudinal flows in wire wrap spaced hexagonal arrays. These results are obtained from pressure drop models of flow in individual subchannels. For turbulent flow, an existing pressure drop model for flow in edge subchannels is extended, and the resulting edge subchannel friction factor is identified. Using the expressions for flowsplit parameters and the equal pressure drop assumption, the interior subchannel and bundle friction factors are obtained. For laminar flow, models are developed for pressure drops of individual subchannels. From these models, expressions for the subchannel friction factors are identified and expressions for the flowsplit parameters are derived.
Valach, F.; Revallo, M.; Hejda, P.; Bochníček, J.
2010-12-01
Our modern society, with its advanced technology, is becoming increasingly vulnerable to disturbances of the Earth's system originating in explosive processes on the Sun. Coronal mass ejections (CMEs), blasted into interplanetary space as gigantic clouds of ionized gas, can hit Earth within a few hours or days and cause, among other effects, geomagnetic storms - perhaps the best known manifestation of solar wind interaction with Earth's magnetosphere. Solar energetic particles (SEP), accelerated to near-relativistic energies during large solar storms, can arrive at the Earth's orbit in as little as a few minutes and pose a serious risk to astronauts traveling through interplanetary space. These and many other threats are the reason why experts pay increasing attention to space weather and its predictability. Research on space weather typically requires examining a large number of parameters which are interrelated in a complex non-linear way. One way to cope with such a task is to use an artificial neural network, a tool originally developed for artificial intelligence, for space weather modeling. In our contribution, we focus on practical aspects of applying neural networks to modeling and forecasting selected space weather parameters.
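As a concrete illustration of the approach, the sketch below trains a tiny feedforward network by gradient descent on synthetic data. The three "solar-wind" inputs and the target index are invented stand-ins, not the parameters or architecture used in the study.

```python
import numpy as np

# Minimal feedforward-network sketch for space-weather-style regression,
# e.g. mapping solar-wind features to a geomagnetic index. All data here
# are synthetic placeholders.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))        # 3 hypothetical solar-wind features
y = np.tanh(X @ np.array([1.5, -0.7, 2.0]))  # synthetic "index" target

# One hidden layer, trained by plain full-batch gradient descent on MSE.
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

losses = []
lr = 0.05
for _ in range(500):
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    pred = h @ W2 + b2
    err = pred - y[:, None]
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation (constant factors folded into the learning rate)
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    gh = err @ W2.T * (1 - h ** 2)
    gW1 = X.T @ gh / len(X); gb1 = gh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

In practice the inputs would be measured solar-wind time series and the target a geomagnetic index, but the training loop is the same.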
Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam's Window.
Onorante, Luca; Raftery, Adrian E
2016-01-01
Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam's window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods.
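The forgetting/update/pruning cycle behind a dynamic Occam's window can be sketched as follows. The per-model predictive likelihoods are synthetic stand-ins, and the forgetting factor and window cutoff are illustrative values, not those used in the paper.

```python
import numpy as np

# Sketch of the dynamic Occam's window idea: model probabilities are
# flattened with a forgetting factor, updated by each model's predictive
# likelihood, and models falling far below the best one are dropped.
rng = np.random.default_rng(0)
K, T = 10, 50            # number of candidate models, time steps
alpha, cut = 0.95, 1e-3  # forgetting factor, window cutoff (illustrative)

active = np.arange(K)            # indices of models currently in the window
prob = np.full(K, 1.0 / K)       # model probabilities
for t in range(T):
    pred = prob[active] ** alpha           # forgetting (prediction) step
    pred /= pred.sum()
    like = rng.gamma(2.0, 1.0, size=len(active))  # stand-in likelihoods
    post = pred * like                     # Bayes update
    post /= post.sum()
    keep = post >= cut * post.max()        # dynamic Occam's window
    active, prob_active = active[keep], post[keep] / post[keep].sum()
    prob = np.zeros(K)
    prob[active] = prob_active
```

Real use would replace the gamma draws with each model's one-step-ahead predictive density for the new observation.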
Nooij, S. A. E.; Bos, J. E.; Groen, E. L.; Bles, W.; Ockels, W. J.
2007-09-01
During the first days in space, i.e., after a transition from 1G to 0G, more than 50% of astronauts and cosmonauts suffer from the Space Adaptation Syndrome (SAS). The symptoms of SAS, like nausea and dizziness, are especially provoked by head movements. Astronauts have mentioned close similarities between the symptoms of SAS and the symptoms they experienced after a 1 hour centrifuge run on Earth, i.e., after a transition from 3G to 1G (denoted Sickness Induced by Centrifugation, SIC). During several space missions, we related susceptibility to SAS and to SIC in 11 astronauts and found that 4 of them were susceptible to both SIC and SAS, and 7 were susceptible to neither. This correspondence in susceptibility suggests that SIC and SAS share the same underlying mechanism. To further study this mechanism, several vestibular parameters have been investigated (e.g. postural stability, vestibularly driven eye movements, subjective vertical). We found some striking changes in individual cases that are possibly due to the centrifuge run. However, the variability between subjects is generally very large, making physiological links to SIC and SAS hard to establish.
Impact of relativistic effects on cosmological parameter estimation
Lorenz, Christiane S.; Alonso, David; Ferreira, Pedro G.
2018-01-01
Future surveys will access large volumes of space and hence very long wavelength fluctuations of the matter density and gravitational field. It has been argued that the set of secondary effects that affect the galaxy distribution, relativistic in nature, will bring new, complementary cosmological constraints. We study this claim in detail by focusing on a subset of wide-area future surveys: Stage-4 cosmic microwave background experiments and photometric redshift surveys. In particular, we look at the magnification lensing contribution to galaxy clustering and general-relativistic corrections to all observables. We quantify the amount of information encoded in these effects in terms of the tightening of the final cosmological constraints as well as the potential bias in inferred parameters associated with neglecting them. We do so for a wide range of cosmological parameters, covering neutrino masses, standard dark-energy parametrizations and scalar-tensor gravity theories. Our results show that, while the effect of lensing magnification to number counts does not contain a significant amount of information when galaxy clustering is combined with cosmic shear measurements, this contribution does play a significant role in biasing estimates on a host of parameter families if unaccounted for. Since the amplitude of the magnification term is controlled by the slope of the source number counts with apparent magnitude, s(z), we also estimate the accuracy to which this quantity must be known to avoid systematic parameter biases, finding that future surveys will need to determine s(z) to the ~5-10% level. On the contrary, large-scale general-relativistic corrections are irrelevant both in terms of information content and parameter bias for most cosmological parameters but significant for the level of primordial non-Gaussianity.
International Nuclear Information System (INIS)
Sivasakthivel, T.; Murugesan, K.; Thomas, H.R.
2014-01-01
Highlights: • Ground Source Heat Pump (GSHP) technology is suitable for both heating and cooling. • Important parameters that affect GSHP performance have been listed. • Parameters of the GSHP system have been optimized for heating and cooling modes. • The Taguchi technique and the utility concept are employed for GSHP optimization. - Abstract: Use of ground source energy for space heating applications through a Ground Source Heat Pump (GSHP) has been established as an efficient thermodynamic process. The electricity input to the GSHP can be reduced by increasing the COP of the system. However, the COP of a GSHP system differs between heating and cooling mode operation. Hence, in order to reduce the electricity input to the GSHP, an optimum value of COP has to be determined when the GSHP is operated in both heating and cooling modes. In the present research, a methodology is proposed to optimize the operating parameters of a GSHP system that operates in both heating and cooling modes. Condenser inlet temperature, condenser outlet temperature, dryness fraction at the evaporator inlet and evaporator outlet temperature are considered as the influencing parameters of the heat pump. Optimization of these parameters for heating-only or cooling-only operation is achieved by employing the Taguchi method for three-level variations of the above parameters using an L9 (3^4) orthogonal array. The higher-the-better criterion has been used to obtain a higher COP. A computer program in FORTRAN has been developed to carry out the computations, and the results have been analyzed for the optimum conditions using the signal-to-noise (SN) ratio and the analysis of variance (ANOVA) method. Based on this analysis, the maximum COP for heating-only and cooling-only operation is obtained as 4.25 and 3.32, respectively. By making use of the utility concept, the COP values obtained for the heating and cooling modes are combined to get a single optimum COP for heating and cooling modes. A single
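The "higher-the-better" signal-to-noise ratio at the core of the Taguchi analysis is straightforward to compute. The sketch below uses the standard L9(3^4) orthogonal array, but the COP values are hypothetical examples, not results from the paper.

```python
import numpy as np

# 'Higher-the-better' S/N ratio used in Taguchi analysis:
# S/N = -10 * log10( mean(1 / y_i^2) ), maximized at the best level.
def sn_higher_the_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Standard L9(3^4) orthogonal array: 9 runs, 4 factors at 3 levels (0, 1, 2).
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])

# Hypothetical COP responses for the 9 runs (one observation per run).
cop = np.array([3.1, 3.4, 3.6, 3.9, 4.1, 3.3, 4.0, 3.2, 3.8])
sn = np.array([sn_higher_the_better([c]) for c in cop])

# Mean S/N per factor level; the optimum level maximizes the mean S/N.
best = [int(np.argmax([sn[L9[:, f] == lv].mean() for lv in range(3)]))
        for f in range(4)]
```

With real data each run would carry repeated observations, and ANOVA on the same level means would rank the factors' contributions.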
Goretzki, Nora; Inbar, Nimrod; Siebert, Christian; Möller, Peter; Rosenthal, Eliyahu; Schneider, Michael; Magri, Fabien
2015-04-01
Salty and thermal springs exist along the lakeshore of the Sea of Galilee, which covers most of the Tiberias Basin (TB) in the northern Jordan-Dead Sea Transform, Israel/Jordan. As it is the only freshwater reservoir of the entire area, it is important to study the salinisation processes that pollute the lake. Simulations of thermohaline flow along a 35 km NW-SE profile show that meteoric and relic brines are flushed by the regional flow from the surrounding heights and by thermally induced groundwater flow within the faults (Magri et al., 2015). Several model runs with trial and error were necessary to calibrate the hydraulic conductivity of both faults and major aquifers in order to fit temperature logs and spring salinity. It turned out that the hydraulic conductivity of the faults ranges between 30 and 140 m/yr, whereas the hydraulic conductivity of the Upper Cenomanian aquifer is as high as 200 m/yr. However, large-scale transport processes are also dependent on other physical parameters such as thermal conductivity, porosity and the fluid thermal expansion coefficient, which are hardly known. Here, inverse problems (IP) are solved along the NW-SE profile to better constrain the physical parameters (a) hydraulic conductivity, (b) thermal conductivity and (c) thermal expansion coefficient. The PEST code (Doherty, 2010) is applied via the graphical interface FePEST in FEFLOW (Diersch, 2014). The results show that both thermal and hydraulic conductivity are consistent with the values determined with the trial and error calibrations. Besides being an automatic approach that speeds up the calibration process, the IP allows covering a wide range of parameter values, providing additional solutions not found with the trial and error method. Our study shows that geothermal systems like the TB are more comprehensively understood when inverse models are applied to constrain coupled fluid flow processes over large spatial scales. References Diersch, H.-J.G., 2014. FEFLOW Finite
Komendera, Erik E.; Dorsey, John T.
2017-01-01
Developing a capability for the assembly of large space structures has the potential to increase the capabilities and performance of future space missions and spacecraft while reducing their cost. One such application is a megawatt-class solar electric propulsion (SEP) tug, representing a critical transportation ability for the NASA lunar, Mars, and solar system exploration missions. A series of robotic assembly experiments were recently completed at Langley Research Center (LaRC) that demonstrate most of the assembly steps for the SEP tug concept. The assembly experiments used a core set of robotic capabilities: long-reach manipulation and dexterous manipulation. This paper describes cross-cutting capabilities and technologies for in-space assembly (ISA), applies the ISA approach to a SEP tug, describes the design and development of two assembly demonstration concepts, and summarizes results of two sets of assembly experiments that validate the SEP tug assembly steps.
Rojdev, Kristina; Koontz, Steve; Reddell, Brandon; Atwell, William; Boeder, Paul
2015-01-01
NASA's exploration goals are focused on deep space travel and Mars surface operations. To accomplish these goals, large structures will be necessary to transport crew and logistics in the initial stages, and NASA will need to keep the crew and the vehicle safe during transport and any surface activities. One of the major challenges of deep space travel is the space radiation environment and its impacts on the crew, the electronics, and the vehicle materials. The primary radiation from the sun (solar particle events) and from outside the solar system (galactic cosmic rays) interacts with materials of the vehicle. These interactions lead to some of the primary radiation being absorbed, being modified, or producing secondary radiation (primarily neutrons). With all vehicles, the high energy primary radiation is of most concern. However, with larger vehicles that have large shielding masses, there is more opportunity for secondary radiation production, and this secondary radiation can be significant enough to cause concern. When considering surface operations, there is also a secondary radiation source from the surface of the planet, known as albedo, with neutrons being one of the most significant species. Given new vehicle designs for deep space and Mars missions, the secondary radiation environment and the implications of that environment are currently not well understood. Thus, several studies are necessary to fill the knowledge gaps of this secondary radiation environment. In this paper, we put forth the initial steps toward increasing our understanding of neutron production from large vehicles by comparing the neutron production resulting from our radiation transport codes and providing a preliminary validation of our results against flight data. This paper will review the details of these results and discuss the finer points of the analysis.
Contact parameters in two dimensions for general three-body systems
DEFF Research Database (Denmark)
F. Bellotti, F.; Frederico, T.; T. Yamashita, M.
2014-01-01
We study the two-dimensional three-body problem in the general case of three distinguishable particles interacting through zero-range potentials. The Faddeev decomposition is used to write the momentum-space wave function. We show that the large-momentum asymptotic spectator function has the same ... to obtain two- and three-body contact parameters. We specialize from the general cases to examples of two identical, interacting or non-interacting, particles. We find that the two-body contact parameter is not a universal constant in the general case and show that universality is recovered when ... a subsystem is composed of two identical non-interacting particles. We also show that the three-body contact parameter is negligible in the case of one non-interacting subsystem compared to the situation where all subsystems are bound. As an example, we present results for mixtures of Lithium with two Cesium ...
Pinem, M.; Fauzi, R.
2018-02-01
One technique for ensuring continuity of wireless communication services and keeping transitions smooth on mobile communication networks is the soft handover technique. In the Soft Handover (SHO) technique, the addition and removal of Base Stations from the active set are determined by initiation triggers. One of the initiation triggers is based on the received signal strength. In this paper we observe the influence of the parameters of large-scale radio propagation models on the performance of mobile communications. The observed parameters characterizing the performance of the specified mobile system are the Drop Call rate, the Radio Link Degradation Rate and the Average Size of the Active Set (AS). The simulated results show that increasing the altitude of the Base Station (BS) and Mobile Station (MS) antennas contributes to an improved received signal power level, which improves Radio Link quality, increases the average size of the Active Set and reduces the average Drop Call rate. It was also found that Hata's propagation model contributed significantly greater improvements in the system performance parameters than Okumura's and Lee's propagation models.
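For reference, a minimal implementation of the Okumura-Hata median path-loss model (urban, small/medium-city variant) shows why raising the base-station antenna improves the received level; the frequency, antenna heights and distance below are example values only, not the simulation settings of the paper.

```python
import math

# Okumura-Hata median path loss (urban, small/medium city), in dB.
# Valid roughly for f = 150-1500 MHz, hb = 30-200 m, hm = 1-10 m, d = 1-20 km.
def hata_path_loss(f_mhz, hb_m, hm_m, d_km):
    # Mobile-antenna correction factor for a small/medium city.
    a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * hm_m
            - (1.56 * math.log10(f_mhz) - 0.8))
    return (69.55 + 26.16 * math.log10(f_mhz) - 13.82 * math.log10(hb_m)
            - a_hm + (44.9 - 6.55 * math.log10(hb_m)) * math.log10(d_km))

# Doubling the base-station antenna height lowers the predicted loss,
# which is the mechanism behind the improved received level noted above.
low = hata_path_loss(900, 30, 1.5, 5)   # 30 m BS antenna
high = hata_path_loss(900, 60, 1.5, 5)  # 60 m BS antenna
```

The Okumura and Lee models compared in the paper follow the same pattern: a median loss term plus empirical correction factors.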
Stray light field dependence for large astronomical space telescopes
Lightsey, Paul A.; Bowers, Charles W.
2017-09-01
Future large astronomical telescopes in space will have architectures that expose the optics to large angular extents of the sky. Options for reducing stray light coming from the sky range from enclosing the telescope in a tubular baffle to having an open telescope structure with a large sunshield to eliminate solar illumination. These two options are considered for an on-axis telescope design to explore stray light considerations. A tubular baffle design will limit the sky exposure to the solid angle of the cone in front of the telescope set by the aspect ratio of the baffle length to Primary Mirror (PM) diameter. Illumination from this portion of the sky will be limited to the PM and structures internal to the tubular baffle. Alternatively, an open structure design will allow a large portion of the sky to directly illuminate the PM and Secondary Mirror (SM), as well as illuminating sunshield and other structure surfaces which will reflect or scatter light onto the PM and SM. Portions of this illumination of the PM and SM will be scattered into the optical train as stray light. A Radiance Transfer Function (RTF) is calculated for the open architecture that determines the ratio of the stray light background radiance in the image contributed by a patch of sky having unit radiance. The full 4π steradian of sky is divided into a grid of patches, with the location of each patch defined in the telescope coordinate system. By rotating the celestial sky radiance maps into the telescope coordinate frame for a given pointing direction of the telescope, the RTF may be applied to the sky brightness and the results integrated to get the total stray light from the sky for that pointing direction. The RTF data generated for the open architecture may be analyzed as a function of the expanding cone angle about the pointing direction. In this manner, the open architecture data may be used to directly compare to a tubular baffle design parameterized by allowed cone angle based on the
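The RTF integration described above amounts to a weighted sum over sky patches. In the sketch below both the RTF and the sky map are synthetic placeholders, so only the bookkeeping (patch solid angles and the weighted sum) reflects the text.

```python
import numpy as np

# Total stray-light background = sum over sky patches of
# RTF(patch) * sky_radiance(patch) * solid_angle(patch).
n_theta, n_phi = 45, 90  # 4-degree grid over the full 4*pi sr sky
theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta        # patch centers
phi = (np.arange(n_phi) + 0.5) * 2.0 * np.pi / n_phi
TH, PH = np.meshgrid(theta, phi, indexing="ij")

# Solid angle of each patch (midpoint rule): sin(theta) * dtheta * dphi.
d_omega = (np.pi / n_theta) * (2.0 * np.pi / n_phi) * np.sin(TH)

# Toy RTF: strongest near the pointing axis (theta = 0), with a hard
# cutoff standing in for a tubular baffle; a real RTF comes from
# stray-light raytracing of the actual architecture.
rtf = np.exp(-(TH / 0.5) ** 2)
rtf[TH > np.pi / 2] = 0.0

sky = np.ones_like(TH)                  # unit-radiance sky map
stray = np.sum(rtf * sky * d_omega)     # stray light for this pointing
```

Repointing the telescope corresponds to rotating the sky map into the telescope frame before applying the same fixed RTF.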
Roberts, Arthur; Lhuillier, Andrew; Liu, Yi; Ruggiu, Alessandra; Shi, Yufang
Elucidation of the effects of space flight on the immune system of astronauts and other animal species is important for the survival and success of manned space flight, especially long-term missions. Space flight exposes astronauts to microgravity, galactic cosmic radiation (GCR), and various psycho-social stressors. Blood samples from astronauts returning from space flight have shown changes in the numbers and types of circulating leukocytes. Similarly, normal lymphocyte homeostasis has been shown to be severely affected in mice using ground-based models of microgravity and GCR exposure, as demonstrated by profound effects on several immunological parameters examined by other investigators and ourselves. In particular, lymphocyte numbers are significantly reduced and subpopulation distribution is altered in the spleen, thymus, and peripheral blood following hindlimb unloading (HU) in mice. Lymphocyte depletion was found to be mediated through corticosteroid-induced apoptosis, although the molecular mechanism of apoptosis induction is still under investigation. The proliferative capacity of TCR-stimulated lymphocytes was also inhibited after HU. We have similarly shown that mice exposed to high-energy 56Fe ion radiation have decreased lymphocyte numbers and perturbations in proportions of various subpopulations, including CD4+ and CD8+ T cells, and B cells in the spleen, and maturation stages of immature T cells in the thymus. To compare these ground-based results to the effects of actual space flight, fresh spleen and thymus samples were recently obtained from normal and transgenic mice immediately after 90 days of space flight in the MDS, and identically-housed ground control mice. Total leukocyte numbers in each organ were enumerated, and subpopulation distribution was examined by flow cytometric analysis of CD3, CD4, CD8, CD19, CD25, DX-5, and CD11b. Splenic T cells were stimulated with anti-CD3 and assessed for proliferation after 2-4 days, and production of
Directory of Open Access Journals (Sweden)
R. Talebitooti
Full Text Available In this paper the effect of quadratic and cubic non-linearities of the system consisting of the crankshaft and torsional vibration damper (TVD is taken into account. TVD consists of non-linear elastomer material used for controlling the torsional vibration of crankshaft. The method of multiple scales is used to solve the governing equations of the system. Meanwhile, the frequency response of the system for both harmonic and sub-harmonic resonances is extracted. In addition, the effects of detuning parameters and other dimensionless parameters for a case of harmonic resonance are investigated. Moreover, the external forces including both inertia and gas forces are simultaneously applied into the model. Finally, in order to study the effectiveness of the parameters, the dimensionless governing equations of the system are solved, considering the state space method. Then, the effects of the torsional damper as well as all corresponding parameters of the system are discussed.
International Nuclear Information System (INIS)
Miller, G.; Martz, H.; Bertelli, L.; Melo, D.
2008-01-01
A simplified biokinetic model for 137Cs has six parameters representing transfer of material to and from various compartments. Using a Bayesian analysis, the joint probability distribution of these six parameters is determined empirically for two cases with quite a lot of bioassay data. The distribution is found to be a multivariate log-normal. Correlations between different parameters are obtained. The method utilises a fairly large number of pre-determined forward biokinetic calculations, whose results are stored in interpolation tables. Four different methods to sample the multidimensional parameter space with a limited number of samples are investigated: random, stratified, Latin Hypercube sampling with a uniform distribution of parameters and importance sampling using a lognormal distribution that approximates the posterior distribution. The importance sampling method gives much smaller sampling uncertainty. No sampling method-dependent differences are perceptible for the uniform distribution methods. (authors)
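Of the sampling schemes compared, Latin Hypercube sampling is simple to sketch: each dimension is split into n equal-probability strata and each stratum is hit exactly once. The six dimensions below stand in for the six transfer parameters; mapping the unit cube to physical parameter ranges is a separate step.

```python
import numpy as np

# Latin Hypercube sample of n points in the d-dimensional unit cube:
# per dimension, one uniform draw inside each of n equal strata, with the
# stratum order independently permuted across dimensions.
def latin_hypercube(n, d, rng):
    strata = np.argsort(rng.random((n, d)), axis=0)  # random permutations
    return (strata + rng.random((n, d))) / n

rng = np.random.default_rng(42)
samples = latin_hypercube(20, 6, rng)  # e.g. 6 biokinetic transfer parameters
```

Compared to purely random draws, every marginal is guaranteed to be evenly covered for the same sample count, which is why stratified designs reduce sampling uncertainty here.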
Identification of strategy parameters for particle swarm optimizer through Taguchi method
Institute of Scientific and Technical Information of China (English)
KHOSLA Arun; KUMAR Shakti; AGGARWAL K.K.
2006-01-01
Particle swarm optimization (PSO), like other evolutionary algorithms, is a population-based stochastic algorithm inspired by the metaphor of social interaction in birds, insects, wasps, etc. It has been used for finding promising solutions in complex search spaces through the interaction of particles in a swarm. It is a well recognized fact that the performance of evolutionary algorithms to a great extent depends on the choice of appropriate strategy/operating parameters like population size, crossover rate, mutation rate, crossover operator, etc. Generally, these parameters are selected through a trial and error process, which is very unsystematic and requires rigorous experimentation. This paper proposes a systematic reasoning scheme based on the Taguchi method for rapidly identifying the strategy parameters of the PSO algorithm. The Taguchi method is a robust design approach using fractional factorial design to study a large number of parameters with a small number of experiments. Computer simulations have been performed on two benchmark functions, the Rosenbrock and Griewank functions, to validate the approach.
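A minimal global-best PSO makes the strategy parameters concrete: swarm size, inertia weight w and acceleration coefficients c1 and c2 are exactly the kind of settings the Taguchi scheme tunes. The values below are common illustrative choices, and the Rosenbrock function is one of the paper's two benchmarks.

```python
import numpy as np

# 2-D Rosenbrock benchmark, minimum f = 0 at (1, 1).
def rosenbrock(x):
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

rng = np.random.default_rng(1)
n, w, c1, c2 = 30, 0.7, 1.5, 1.5     # illustrative strategy parameters
x = rng.uniform(-2, 2, (n, 2))       # particle positions
v = np.zeros((n, 2))                 # particle velocities
pbest = x.copy()                     # personal bests
pbest_f = np.array([rosenbrock(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(300):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    # Velocity update: inertia + cognitive pull + social pull.
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = np.array([rosenbrock(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()
```

A Taguchi study of PSO would vary (n, w, c1, c2) over an orthogonal array and pick the levels maximizing a signal-to-noise ratio of the achieved objective values.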
Capabilities of a Laser Guide Star for a Large Segmented Space Telescope
Clark, James R.; Carlton, Ashley; Douglas, Ewan S.; Males, Jared R.; Lumbres, Jennifer; Feinberg, Lee; Guyon, Olivier; Marlow, Weston; Cahoy, Kerri L.
2018-01-01
Large segmented mirror telescopes are planned for future space telescope missions such as LUVOIR (Large UV Optical Infrared Surveyor) to enable the improvement in resolution and contrast necessary to directly image Earth-like exoplanets, in addition to making contributions to general astrophysics. The precision surface control of these complex, large optical systems, which may have over a hundred meter-sized segments, is a challenge. Our initial simulations show that imaging a star of 2nd magnitude or brighter with a Zernike wavefront sensor should relax the segment stability requirements by factors between 10 and 50 depending on the wavefront control strategy. Fewer than fifty stars brighter than magnitude 2 can be found in the sky. A laser guide star (LGS) on a companion spacecraft will allow the telescope to target a dimmer science star and achieve wavefront control to the required stability without requiring slew or repointing maneuvers. We present initial results for one possible mission architecture, with an LGS flying at 100,000 km range from the large telescope in an L2 halo orbit, using a laser transmit power of 8 days) for an expenditure of system, it can be accommodated in a 6U CubeSat bus, but may require an extended period of time to transition between targets and match velocities with the telescope (e.g. 6 days to transit 10 degrees). If the LGS uses monopropellant propulsion, it must use at least a 27U bus to achieve the same delta-V capability, but can transition between targets much more rapidly (flight are being refined. A low-cost prototype mission (e.g. between a small satellite in LEO and an LGS in GEO) to validate the feasibility is in development.
Exploring the large-scale structure of Taylor–Couette turbulence through Large-Eddy Simulations
Ostilla-Mónico, Rodolfo; Zhu, Xiaojue; Verzicco, Roberto
2018-04-01
Large eddy simulations (LES) of Taylor-Couette (TC) flow, the flow between two co-axial and independently rotating cylinders, are performed in an attempt to explore the large-scale axially-pinned structures seen in experiments and simulations. Both static and dynamic LES models are used. The Reynolds number is kept fixed at Re = 3.4 × 10^4, and the radius ratio η = r_i/r_o is set to η = 0.909, limiting the effects of curvature and resulting in frictional Reynolds numbers of around Re_τ ≈ 500. Four rotation ratios from Ro_t = -0.0909 to Ro_t = 0.3 are simulated. First, the LES of TC is benchmarked for different rotation ratios. Both the Smagorinsky model with a constant of c_s = 0.1 and the dynamic model are found to produce reasonable results for no mean rotation and cyclonic rotation, but deviations increase for increasing rotation. This is attributed to the increasing anisotropic character of the fluctuations. Second, “over-damped” LES, i.e. LES with a large Smagorinsky constant is performed and is shown to reproduce some features of the large-scale structures, even when the near-wall region is not adequately modeled. This shows the potential for using over-damped LES for fast explorations of the parameter space where large-scale structures are found.
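The static Smagorinsky closure referred to above computes a subgrid eddy viscosity from the resolved strain rate; raising the constant is what the abstract calls "over-damped" LES. The sketch below evaluates the closure on a synthetic 2-D velocity field, not on TC flow data.

```python
import numpy as np

# Static Smagorinsky closure: nu_t = (cs * Delta)^2 * |S|,
# with |S| = sqrt(2 * S_ij * S_ij) the resolved strain-rate magnitude.
cs, delta = 0.1, 0.05   # Smagorinsky constant and filter width (illustrative)

nx = 64
x = np.linspace(0, 2 * np.pi, nx, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(X) * np.cos(Y)    # simple divergence-free test field
v = -np.cos(X) * np.sin(Y)

dx = x[1] - x[0]
dudx, dudy = np.gradient(u, dx, axis=0), np.gradient(u, dx, axis=1)
dvdx, dvdy = np.gradient(v, dx, axis=0), np.gradient(v, dx, axis=1)
S11, S22 = dudx, dvdy
S12 = 0.5 * (dudy + dvdx)
S_mag = np.sqrt(2.0 * (S11 ** 2 + S22 ** 2 + 2.0 * S12 ** 2))
nu_t = (cs * delta) ** 2 * S_mag   # "over-damping" simply increases cs
```

The dynamic model mentioned in the abstract replaces the fixed cs by a coefficient computed on the fly from a test filter.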
Massive data compression for parameter-dependent covariance matrices
Heavens, Alan F.; Sellentin, Elena; de Mijolla, Damien; Vianello, Alvise
2017-12-01
We show how the massive data compression algorithm MOPED can be used to reduce, by orders of magnitude, the number of simulated data sets which are required to estimate the covariance matrix required for the analysis of Gaussian-distributed data. This is relevant when the covariance matrix cannot be calculated directly. The compression is especially valuable when the covariance matrix varies with the model parameters. In this case, it may be prohibitively expensive to run enough simulations to estimate the full covariance matrix throughout the parameter space. This compression may be particularly valuable for the next generation of weak lensing surveys, such as those proposed for Euclid and the Large Synoptic Survey Telescope, for which the number of summary data (such as band power or shear correlation estimates) is very large, ~10^4, due to the large number of tomographic redshift bins into which the data will be divided. In the pessimistic case where the covariance matrix is estimated separately for all points in a Markov Chain Monte Carlo analysis, this may require an unfeasible 10^9 simulations. We show here that MOPED can reduce this number by a factor of 1000, or a factor of ~10^6 if some regularity in the covariance matrix is assumed, reducing the number of simulations required to a manageable 10^3, making an otherwise intractable analysis feasible.
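For a single parameter, the MOPED weight vector is b ∝ C⁻¹μ′ (μ′ the derivative of the mean model), and the compressed statistic bᵀx retains the full Fisher information for that parameter. The toy covariance and mean derivative below are illustrative, not survey data.

```python
import numpy as np

# One-parameter MOPED compression: weight vector b = C^{-1} mu' (normalized),
# reducing a length-n data vector to a single number without losing Fisher
# information about the parameter.
rng = np.random.default_rng(3)
n = 50                                   # length of each data vector
C = np.diag(rng.uniform(0.5, 2.0, n))    # toy (parameter-independent) covariance
mu_prime = rng.normal(size=n)            # toy derivative of the mean model

Cinv_mup = np.linalg.solve(C, mu_prime)
b = Cinv_mup / np.sqrt(mu_prime @ Cinv_mup)   # MOPED normalization: b^T C b = 1

# Fisher information before and after compression is identical:
F_full = mu_prime @ Cinv_mup                  # full data vector
F_comp = (b @ mu_prime) ** 2 / (b @ C @ b)    # single compressed number
```

With p parameters, MOPED builds p such vectors by Gram-Schmidt orthogonalization, so covariance estimation is needed only for p numbers instead of ~10^4 summaries, which is the source of the quoted simulation savings.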
Cosmological parameters from CMB and other data: A Monte Carlo approach
International Nuclear Information System (INIS)
Lewis, Antony; Bridle, Sarah
2002-01-01
We present a fast Markov chain Monte Carlo exploration of cosmological parameter space. We perform a joint analysis of results from recent cosmic microwave background (CMB) experiments and provide parameter constraints, including σ_8, from the CMB independent of other data. We next combine data from the CMB, HST Key Project, 2dF galaxy redshift survey, type Ia supernovae and big-bang nucleosynthesis. The Monte Carlo method allows the rapid investigation of a large number of parameters, and we present results from 6 and 9 parameter analyses of flat models, and an 11 parameter analysis of non-flat models. Our results include constraints on the neutrino mass (m_ν ≲ 3 eV), the equation of state of the dark energy, and the tensor amplitude, as well as demonstrating the effect of additional parameters on the base parameter constraints. In a series of appendixes we describe the many uses of importance sampling, including computing results from new data and accuracy correction of results generated from an approximate method. We also discuss the different ways of converting parameter samples to parameter constraints, the effect of the prior, assess the goodness of fit and consistency, and describe the use of analytic marginalization over normalization parameters.
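The Markov chain Monte Carlo exploration can be sketched with a plain Metropolis sampler; the 2-D Gaussian "posterior" below is a toy stand-in for a real CMB likelihood, and the parameter names are purely illustrative.

```python
import numpy as np

# Metropolis sampling of a toy 2-D Gaussian "posterior".
rng = np.random.default_rng(7)
mean = np.array([0.7, 0.3])  # toy parameters, loosely (h, Omega_m)-like
cov_inv = np.linalg.inv(np.array([[0.01, 0.004], [0.004, 0.0025]]))

def log_post(theta):
    d = theta - mean
    return -0.5 * d @ cov_inv @ d

theta = np.array([0.5, 0.5])            # arbitrary starting point
lp = log_post(theta)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.05, 2)    # symmetric random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:  # Metropolis acceptance rule
        theta, lp = prop, lp_prop
    chain.append(theta)
samples = np.array(chain[5000:])             # discard burn-in
```

Importance sampling, as discussed in the appendixes, would reweight these samples by the ratio of a new posterior to the sampled one instead of rerunning the chain.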
International Nuclear Information System (INIS)
Svoren, J.
1982-01-01
The present statistical analysis is based on a sample of long-period comets selected according to two criteria: (1) availability of photometric observations made at large distances from the Sun and covering an orbital arc long enough for a reliable determination of the photometric parameters, and (2) availability of a well determined orbit making it possible to classify the comet as new or old in Oort's (1950) sense. The selection was confined to comets with nearly parabolic orbits. 67 objects were found to satisfy the selection criteria. Photometric data referring to heliocentric distances of r > 2.5 AU were only used, yielding a total of 2,842 individual estimates and measurements. (Auth.)
Sokolis, Dimitrios P; Orfanidis, Ioannis K; Peroulis, Michalis
2011-12-01
The function of the large bowel is to absorb water from the remaining indigestible food matter and subsequently pass useless waste material from the body, but there has been only a small amount of data in the literature on its biomechanical characteristics that would facilitate our understanding of its transport function. Our study aims to fill this gap by affording comprehensive inflation/extension data of intestinal segments from distinct areas, spanning a physiologically relevant deformation range (100-130% axial stretches and 0-15 mmHg lumen pressures). These data were characterized by the Fung-type exponential model in the thick-walled setting, showing reasonable agreement, i.e. root-mean-square error ~30%. Based on optimized material parameters, i.e. a(1)testing and material characterization results for the large intestine of healthy young animals are expected to aid in comprehending the adaptation/remodeling that occurs with ageing, pathological conditions and surgical procedures, as well as for the development of suitable biomaterials for replacement.
Ogilvie, Karen; Olde Daalhuis, Adri B.
2015-11-01
By application of the theory for second-order linear differential equations with two turning points developed in [Olver F.W.J., Philos. Trans. Roy. Soc. London Ser. A 278 (1975), 137-174], uniform asymptotic approximations are obtained in the first part of this paper for the Lamé and Mathieu functions with a large real parameter. These approximations are expressed in terms of parabolic cylinder functions, and are uniformly valid in their respective real open intervals. In all cases explicit bounds are supplied for the error terms associated with the approximations. Approximations are also obtained for the large order behaviour for the respective eigenvalues. We restrict ourselves to a two term uniform approximation. Theoretically more terms in these approximations could be computed, but the coefficients would be very complicated. In the second part of this paper we use a simplified method to obtain uniform asymptotic expansions for these functions. The coefficients are just polynomials and satisfy simple recurrence relations. The price to pay is that these asymptotic expansions hold only in a shrinking interval as their respective parameters become large; this interval however encapsulates all the interesting oscillatory behaviour of the functions. This simplified method also gives many terms in asymptotic expansions for these eigenvalues, derived simultaneously with the coefficients in the function expansions. We provide rigorous realistic error bounds for the function expansions when truncated and order estimates for the error when the eigenvalue expansions are truncated. With this paper we confirm that many of the formal results in the literature are correct.
Neutrino oscillation parameter sampling with MonteCUBES
Blennow, Mattias; Fernandez-Martinez, Enrique
2010-01-01
We present MonteCUBES ("Monte Carlo Utility Based Experiment Simulator"), a software package designed to sample the neutrino oscillation parameter space through Markov Chain Monte Carlo algorithms. MonteCUBES makes use of the GLoBES software so that the existing experiment definitions for GLoBES, describing long baseline and reactor experiments, can be used with MonteCUBES. MonteCUBES consists of two main parts: The first is a C library, written as a plug-in for GLoBES, implementing the Markov Chain Monte Carlo algorithm to sample the parameter space. The second part is a user-friendly graphical Matlab interface to easily read, analyze, plot and export the results of the parameter space sampling.
Program summary
Program title: MonteCUBES (Monte Carlo Utility Based Experiment Simulator)
Catalogue identifier: AEFJ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFJ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public Licence
No. of lines in distributed program, including test data, etc.: 69 634
No. of bytes in distributed program, including test data, etc.: 3 980 776
Distribution format: tar.gz
Programming language: C
Computer: MonteCUBES builds and installs on 32 bit and 64 bit Linux systems where GLoBES is installed
Operating system: 32 bit and 64 bit Linux
RAM: Typically a few MBs
Classification: 11.1
External routines: GLoBES [1,2] and routines/libraries used by GLoBES
Subprograms used: Cat Id ADZI_v1_0, Title GLoBES, Reference CPC 177 (2007) 439
Nature of problem: Since neutrino masses do not appear in the standard model of particle physics, many models of neutrino masses also induce other types of new physics, which could affect the outcome of neutrino oscillation experiments. In general, such new physics implies high-dimensional parameter spaces that are difficult to explore using classical methods such as multi-dimensional projections and minimizations, such as those
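The core of such a sampler is easy to sketch. Below is a minimal Metropolis-Hastings chain over a two-parameter space, a hypothetical stand-in for the oscillation-parameter sampling MonteCUBES performs; the Gaussian log-posterior, central values and step sizes are illustrative assumptions, not MonteCUBES code.

```python
import numpy as np

def log_posterior(theta):
    # Illustrative Gaussian stand-in for an oscillation-experiment likelihood,
    # centred on hypothetical values of (sin^2 2theta, Delta m^2 [eV^2]).
    mu = np.array([0.5, 2.5e-3])
    sigma = np.array([0.05, 1e-4])
    return -0.5 * np.sum(((theta - mu) / sigma) ** 2)

def metropolis(log_post, theta0, step, n_samples, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_samples, theta.size))
    for i in range(n_samples):
        proposal = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(proposal)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = proposal, lp_prop
        chain[i] = theta
    return chain

chain = metropolis(log_posterior, [0.4, 2.0e-3], np.array([0.02, 5e-5]), 20000)
burned = chain[5000:]              # discard burn-in
print(burned.mean(axis=0))         # approaches the posterior mean (0.5, 2.5e-3)
```

The chain's empirical distribution then serves exactly the role described above: marginals, credible regions, and correlations in the parameter space can be read off directly.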
Large-size space debris flyby in low earth orbits
Baranov, A. A.; Grishko, D. A.; Razoumny, Y. N.
2017-09-01
The analysis of the NORAD catalogue of space objects, carried out with respect to the overall sizes of upper stages and last stages of carrier rockets, allows the classification of five groups of large-size space debris (LSSD). These groups are defined according to the proximity of the orbital inclinations of the involved objects. The orbits within a group have various deviations in the Right Ascension of the Ascending Node (RAAN). It is proposed to use the evolution portrait of the RAAN deviations to clarify the relative spatial distribution of the orbital planes in a group, with the RAAN deviations calculated with respect to the precessing orbital plane of a particular object. For the first three groups (inclinations i = 71°, i = 74°, i = 81°) the straight lines of the relative RAAN deviations almost never intersect, so a simple successive flyby of a group's elements is effective, but a significant total ΔV is required to form the drift orbits. For the fifth group (Sun-synchronous orbits) these lines intersect each other chaotically many times due to noticeable differences in semi-major axes and orbital inclinations. The existence of these intersections makes it possible to create a flyby sequence for an LSSD group in which the orbit of one LSSD object simultaneously serves as the drift orbit to attain another LSSD object. This flyby scheme, requiring less ΔV, was called "diagonal." The RAAN deviations' evolution portrait built for the fourth group (studied in this paper) contains both types of lines, so a simultaneous combination of the diagonal and successive flyby schemes is possible. The total ΔV and time costs were calculated to cover all elements of the 4th group. The article also presents results for the flyby problem in the case of all five LSSD groups, and general recommendations are given concerning the required reserve of total
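The differential RAAN drift that these flyby schemes exploit comes from J2 nodal precession, which depends on semi-major axis, eccentricity, and inclination. A small sketch under standard assumptions (first-order J2 secular rate; the two altitudes are illustrative, not values from the catalogue):

```python
import math

MU = 3.986004418e14      # Earth gravitational parameter, m^3/s^2
RE = 6378137.0           # Earth equatorial radius, m
J2 = 1.08262668e-3       # Earth oblateness coefficient

def raan_rate_deg_per_day(a_m, e, inc_deg):
    """First-order secular RAAN drift due to J2, in deg/day."""
    n = math.sqrt(MU / a_m**3)                  # mean motion, rad/s
    p = a_m * (1.0 - e**2)                      # semi-latus rectum
    omega_dot = -1.5 * n * J2 * (RE / p)**2 * math.cos(math.radians(inc_deg))
    return math.degrees(omega_dot) * 86400.0

# Two debris objects in the i = 71 deg group at slightly different altitudes:
r1 = raan_rate_deg_per_day(RE + 850e3, 0.001, 71.0)
r2 = raan_rate_deg_per_day(RE + 900e3, 0.001, 71.0)
print(r1, r2, r1 - r2)   # the small differential rate slowly opens a RAAN gap
```

The differential rate (a few hundredths of a degree per day here) is what makes one object's orbital plane drift into alignment with another's, enabling the "diagonal" scheme.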
Ghattas, O.; Petra, N.; Cui, T.; Marzouk, Y.; Benjamin, P.; Willcox, K.
2016-12-01
Model-based projections of the dynamics of the polar ice sheets play a central role in anticipating future sea level rise. However, a number of mathematical and computational challenges place significant barriers on improving predictability of these models. One such challenge is caused by the unknown model parameters (e.g., in the basal boundary conditions) that must be inferred from heterogeneous observational data, leading to an ill-posed inverse problem and the need to quantify uncertainties in its solution. In this talk we discuss the problem of estimating the uncertainty in the solution of (large-scale) ice sheet inverse problems within the framework of Bayesian inference. Computing the general solution of the inverse problem--i.e., the posterior probability density--is intractable with current methods on today's computers, due to the expense of solving the forward model (3D full Stokes flow with nonlinear rheology) and the high dimensionality of the uncertain parameters (which are discretizations of the basal sliding coefficient field). To overcome these twin computational challenges, it is essential to exploit problem structure (e.g., sensitivity of the data to parameters, the smoothing property of the forward model, and correlations in the prior). To this end, we present a data-informed approach that identifies low-dimensional structure in both parameter space and the forward model state space. This approach exploits the fact that the observations inform only a low-dimensional parameter space and allows us to construct a parameter-reduced posterior. Sampling this parameter-reduced posterior still requires multiple evaluations of the forward problem, therefore we also aim to identify a low-dimensional state space to reduce the computational cost. To this end, we apply a proper orthogonal decomposition (POD) approach to approximate the state using a low-dimensional manifold constructed using "snapshots" from the parameter-reduced posterior, and the discrete
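The POD step can be illustrated in isolation: take the SVD of a snapshot matrix and truncate where the singular values capture nearly all of the energy. The synthetic snapshots below are a hypothetical stand-in for forward-model solves; the dimensions and the 99.99% energy threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical snapshots: 500-dimensional "states" that actually live in a
# 3-dimensional subspace plus tiny noise, mimicking forward-model solves.
n_state, n_snap, r_true = 500, 40, 3
basis_true = np.linalg.qr(rng.standard_normal((n_state, r_true)))[0]
snapshots = (basis_true @ rng.standard_normal((r_true, n_snap))
             + 1e-6 * rng.standard_normal((n_state, n_snap)))

# POD: left singular vectors of the snapshot matrix, truncated where the
# singular values capture 99.99% of the energy.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1
pod_basis = U[:, :r]

# Project a new state onto the reduced basis and reconstruct it.
x = basis_true @ rng.standard_normal(r_true)
x_hat = pod_basis @ (pod_basis.T @ x)
rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(r, rel_err)   # the hidden 3-dimensional structure is recovered
```

Subsequent forward solves can then be projected onto `pod_basis`, which is what makes repeated posterior sampling affordable.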
Viereck, R. A.; Azeem, S. I.
2017-12-01
One of the goals of the National Space Weather Action Plan is to establish extreme event benchmarks. These benchmarks are estimates of environmental parameters that impact technologies and systems during extreme space weather events. Quantitative assessment of the conditions anticipated during an extreme space weather event will enable operators and users of affected technologies to develop plans for mitigating space weather risks and improve preparedness. The ionosphere is one of the most important regions of space because so many applications either depend on ionospheric space weather for their operation (HF communication, over-the-horizon radars), or can be deleteriously affected by ionospheric conditions (e.g. GNSS navigation and timing, UHF satellite communications, synthetic aperture radar, HF communications). Since the processes that influence the ionosphere vary over time scales from seconds to years, it continues to be a challenge to adequately predict its behavior in many circumstances. Estimates with large uncertainties, in excess of 100%, may result in operators of impacted technologies over- or under-preparing for such events. The goal of the next phase of the benchmarking activity is to reduce these uncertainties. In this presentation, we will focus on the sources of uncertainty in the ionospheric response to extreme geomagnetic storms. We will then discuss various research efforts required to better understand the underlying processes of ionospheric variability, and how the uncertainties in the ionospheric response to extreme space weather could be reduced and the estimates improved.
Estimation of gloss from rough surface parameters
Simonsen, Ingve; Larsen, Åge G.; Andreassen, Erik; Ommundsen, Espen; Nord-Varhaug, Katrin
2005-12-01
Gloss is a quantity used in the optical industry to quantify and categorize materials according to how well they scatter light specularly. With the aid of phase perturbation theory, we derive an approximate expression for this quantity for a one-dimensional randomly rough surface. It is demonstrated that gloss depends in an exponential way on two dimensionless quantities that are associated with the surface randomness: the root-mean-square roughness times the perpendicular momentum transfer for the specular direction, and a correlation-function-dependent factor times a lateral momentum variable associated with the collection angle. Rigorous Monte Carlo simulations are used to assess the quality of this approximation, and good agreement is observed over large regions of parameter space.
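The exponential dependence on rms roughness times the perpendicular momentum transfer can be illustrated with the classical Rayleigh roughness factor, a simpler toy than the paper's phase-perturbation result (which also carries the correlation-function-dependent collection-angle term); the wavelength and incidence angle below are illustrative assumptions.

```python
import math

def specular_attenuation(sigma_nm, wavelength_nm=550.0, theta_deg=60.0):
    """Fraction of specularly reflected intensity surviving an rms roughness
    sigma, per the classical Rayleigh roughness factor exp[-(2 k_perp sigma)^2].
    A toy model, not the paper's phase-perturbation expression."""
    k_perp = 2.0 * math.pi / wavelength_nm * math.cos(math.radians(theta_deg))
    return math.exp(-(2.0 * k_perp * sigma_nm) ** 2)

for sigma in (0.0, 20.0, 50.0):   # rms roughness in nm
    print(sigma, specular_attenuation(sigma))
```

Even a 50 nm rms roughness at a 60 degree glossmeter geometry already removes roughly a quarter of the specular intensity, which is why gloss is such a sensitive probe of surface statistics.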
Ragusa, J. M.
1975-01-01
An optimum hypothetical organizational structure was studied for a large earth-orbiting, multidisciplinary research and applications space base manned by a crew of technologists. Because such a facility does not presently exist, in situ empirical testing was not possible. Study activity was, therefore, concerned with the identification of a desired organizational structural model rather than with the empirical testing of the model. The essential finding of this research was that a four-level project type total matrix model will optimize the efficiency and effectiveness of space base technologists.
Energy Technology Data Exchange (ETDEWEB)
Khawli, Toufik Al; Eppelt, Urs; Hermanns, Torsten [RWTH Aachen University, Chair for Nonlinear Dynamics, Steinbachstr. 15, 52047 Aachen (Germany); Gebhardt, Sascha [RWTH Aachen University, Virtual Reality Group, IT Center, Seffenter Weg 23, 52074 Aachen (Germany); Kuhlen, Torsten [Forschungszentrum Jülich GmbH, Institute for Advanced Simulation (IAS), Jülich Supercomputing Centre (JSC), Wilhelm-Johnen-Straße, 52425 Jülich (Germany); Schulz, Wolfgang [Fraunhofer, ILT Laser Technology, Steinbachstr. 15, 52047 Aachen (Germany)
2016-06-08
In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most influential parameters, quantifying their contribution to the model output, reducing the model complexity, and enhancing the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide two global sensitivity measures: (i) the Elementary Effect for screening the parameters, and (ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated by an industrial use case whose goal is the optimization of a drilling process using a Gaussian laser beam.
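The variance-decomposition step can be sketched independently of the metamodel. Below, Jansen's Monte Carlo estimators recover first-order and total Sobol indices for a cheap hypothetical test function standing in for the surrogate; the function, input ranges, and sample size are illustrative assumptions.

```python
import numpy as np

def metamodel(x):
    # hypothetical surrogate: main effects in x1, x2 plus an x1*x3 interaction
    return x[:, 0] + 2.0 * x[:, 1] + x[:, 0] * x[:, 2]

rng = np.random.default_rng(0)
n, d = 200_000, 3
A = rng.uniform(-1, 1, (n, d))        # two independent sample matrices
B = rng.uniform(-1, 1, (n, d))
fA, fB = metamodel(A), metamodel(B)
var = np.concatenate([fA, fB]).var()

S1, ST = [], []
for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]                # replace column i of A with B's column i
    fAB = metamodel(AB)
    S1.append(1.0 - np.mean((fB - fAB) ** 2) / (2.0 * var))  # Jansen, first order
    ST.append(np.mean((fA - fAB) ** 2) / (2.0 * var))        # Jansen, total
print(np.round(S1, 2), np.round(ST, 2))
# analytic values: S1 = (0.1875, 0.75, 0), ST = (0.25, 0.75, 0.0625)
```

The gap between total and first-order indices (here for x1 and x3) is exactly the interaction contribution the abstract refers to.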
Real-Time Parameter Identification
National Aeronautics and Space Administration — Armstrong researchers have implemented in the control room a technique for estimating in real time the aerodynamic parameters that describe the stability and control...
Dynamics of large-scale brain activity in normal arousal states and epileptic seizures
Robinson, P. A.; Rennie, C. J.; Rowe, D. L.
2002-04-01
Links between electroencephalograms (EEGs) and underlying aspects of neurophysiology and anatomy are poorly understood. Here a nonlinear continuum model of large-scale brain electrical activity is used to analyze arousal states and their stability and nonlinear dynamics for physiologically realistic parameters. A simple ordered arousal sequence in a reduced parameter space is inferred and found to be consistent with experimentally determined parameters of waking states. Instabilities arise at spectral peaks of the major clinically observed EEG rhythms-mainly slow wave, delta, theta, alpha, and sleep spindle-with each instability zone lying near its most common experimental precursor arousal states in the reduced space. Theta, alpha, and spindle instabilities evolve toward low-dimensional nonlinear limit cycles that correspond closely to EEGs of petit mal seizures for theta instability, and grand mal seizures for the other types. Nonlinear stimulus-induced entrainment and seizures are also seen, EEG spectra and potentials evoked by stimuli are reproduced, and numerous other points of experimental agreement are found. Inverse modeling enables physiological parameters underlying observed EEGs to be determined by a new, noninvasive route. This model thus provides a single, powerful framework for quantitative understanding of a wide variety of brain phenomena.
Parameter analysis calculation on characteristics of portable FAST reactor
International Nuclear Information System (INIS)
Otsubo, Akira; Kowata, Yasuki
1998-06-01
In this report, we performed a parameter survey analysis by using the analysis program code STEDFAST (Space, TErrestrial and Deep sea FAST reactor-gas turbine system). Concerning the deep sea fast reactor-gas turbine system, calculations with many variable parameters were performed on the base case of a NaK cooled reactor of 40 kWe. We aimed at total equipment weight and surface area necessary to remove heat from the system as important values of the characteristics of the system. Electric generation power and the material of a pressure hull were specially influential for the weight. The electric generation power, reactor outlet/inlet temperatures, a natural convection heat transfer coefficient of sea water were specially influential for the area. Concerning the space reactor-gas turbine system, the calculations with the variable parameters of compressor inlet temperature, reactor outlet/inlet temperatures and turbine inlet pressure were performed on the base case of a Na cooled reactor of 40 kWe. The first and the second variable parameters were influential for the total equipment weight of the important characteristic of the system. Concerning the terrestrial fast reactor-gas turbine system, the calculations with the variable parameters of heat transferred pipe number in a heat exchanger to produce hot water of 100degC for cogeneration, compressor stage number and the kind of primary coolant material were performed on the base case of a Pb cooled reactor of 100 MWt. In the comparison of calculational results for Pb and Na of primary coolant material, the primary coolant weight flow rate was naturally large for the former case compared with for the latter case because density is very different between them. (J.P.N.)
Parallel symbolic state-space exploration is difficult, but what is the alternative?
Directory of Open Access Journals (Sweden)
Gianfranco Ciardo
2009-12-01
Full Text Available State-space exploration is an essential step in many modeling and analysis problems. Its goal is to find the states reachable from the initial state of a discrete-state model. The state space can be used to answer important questions, e.g., "Is there a dead state?" and "Can N become negative?", or as a starting point for sophisticated investigations expressed in temporal logic. Unfortunately, the state space is often so large that ordinary explicit data structures and sequential algorithms cannot cope, prompting the exploration of (1) parallel approaches using multiple processors, from simple workstation networks to shared-memory supercomputers, to satisfy large memory and runtime requirements, and (2) symbolic approaches using decision diagrams to encode the large structured sets and relations manipulated during state-space generation. Both approaches have merits and limitations. Parallel explicit state-space generation is challenging, but almost linear speedup can be achieved; however, the analysis is ultimately limited by the memory and processors available. Symbolic methods are a heuristic that can efficiently encode many, but not all, functions over a structured and exponentially large domain; here the pitfalls are subtler: their performance varies widely depending on the class of decision diagram chosen, the state variable order, and obscure algorithmic parameters. As symbolic approaches are often much more efficient than explicit ones for many practical models, we argue for the need to parallelize symbolic state-space generation algorithms, so that we can realize the advantage of both approaches. This is a challenging endeavor, as the most efficient symbolic algorithm, Saturation, is inherently sequential. We conclude by discussing challenges, efforts, and promising directions toward this goal.
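Explicit state-space generation is, at its core, a reachability search. The sketch below explores a toy two-process locking model (a hypothetical example, not one of the paper's benchmarks) and answers "Is there a dead state?" by breadth-first search:

```python
from collections import deque

# Toy discrete-state model: two processes acquire two locks in opposite
# order, the classic deadlock scenario.
# State = (pc1, pc2, holder1, holder2); holder 0 means the lock is free.
def successors(s):
    pc1, pc2, h1, h2 = s
    out = []
    if pc1 == 0 and h1 == 0: out.append((1, pc2, 1, h2))   # P1 takes lock 1
    if pc1 == 1 and h2 == 0: out.append((2, pc2, h1, 1))   # P1 takes lock 2
    if pc1 == 2:             out.append((0, pc2, 0, 0))    # P1 releases both
    if pc2 == 0 and h2 == 0: out.append((pc1, 1, h1, 2))   # P2 takes lock 2
    if pc2 == 1 and h1 == 0: out.append((pc1, 2, 2, h2))   # P2 takes lock 1
    if pc2 == 2:             out.append((pc1, 0, 0, 0))    # P2 releases both
    return out

# Breadth-first exploration of the reachable state space.
init = (0, 0, 0, 0)
seen, frontier = {init}, deque([init])
while frontier:
    s = frontier.popleft()
    for t in successors(s):
        if t not in seen:
            seen.add(t)
            frontier.append(t)

dead = sorted(s for s in seen if not successors(s))
print(len(seen), dead)   # 6 reachable states; (1, 1, 1, 2) is a deadlock
```

The explicit set `seen` is exactly what explodes for realistic models, and what decision diagrams replace with a shared symbolic encoding.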
Chakraborty, Somdeb; Roy, Shibaji
2012-02-01
A particular decoupling limit of the nonextremal (D1, D3) brane bound state system of type IIB string theory is known to give the gravity dual of space-space noncommutative Yang-Mills theory at finite temperature. We use a string probe in this background to compute the jet quenching parameter in a strongly coupled plasma of hot noncommutative Yang-Mills theory in (3+1) dimensions from gauge/gravity duality. We give expressions for the jet quenching parameter for both small and large noncommutativity. For small noncommutativity, we find that the value of the jet quenching parameter gets reduced from its commutative value. The reduction is enhanced with temperature as T7 for fixed noncommutativity and fixed ’t Hooft coupling. We also give an estimate of the correction due to noncommutativity at the present collider energies like in RHIC or in LHC and find it too small to be detected. We further generalize the results for noncommutative Yang-Mills theories in diverse dimensions.
Large Eddy Simulation of Heat Entrainment Under Arctic Sea Ice
Ramudu, Eshwan; Gelderloos, Renske; Yang, Di; Meneveau, Charles; Gnanadesikan, Anand
2018-01-01
Arctic sea ice has declined rapidly in recent decades. The faster-than-projected retreat suggests that free-running large-scale climate models may not be accurately representing some key processes. The small-scale turbulent entrainment of heat from the mixed layer could be one such process. To better understand this mechanism, we model the Arctic Ocean's Canada Basin, which is characterized by a perennial anomalously warm Pacific Summer Water (PSW) layer residing at the base of the mixed layer and a summertime Near-Surface Temperature Maximum (NSTM) within the mixed layer trapping heat from solar radiation. We use large eddy simulation (LES) to investigate heat entrainment for different ice-drift velocities and different initial temperature profiles. The value of LES is that the resolved turbulent fluxes are greater than the subgrid-scale fluxes for most of our parameter space. The results show that the presence of the NSTM enhances heat entrainment from the mixed layer. Additionally, no PSW heat is entrained anywhere in the parameter space considered. We propose a scaling law for the ocean-to-ice heat flux which depends on the initial temperature anomaly in the NSTM layer and the ice-drift velocity. A case study of "The Great Arctic Cyclone of 2012" gives a turbulent heat flux from the mixed layer that is approximately 70% of the total ocean-to-ice heat flux estimated from the PIOMAS model often used for short-term predictions. Present results highlight the need for large-scale climate models to account for the NSTM layer.
Multiclustered chimeras in large semiconductor laser arrays with nonlocal interactions
Shena, J.; Hizanidis, J.; Hövel, P.; Tsironis, G. P.
2017-09-01
The dynamics of a large array of coupled semiconductor lasers is studied numerically for a nonlocal coupling scheme. Our focus is on chimera states, a self-organized spatiotemporal pattern of coexisting coherence and incoherence. In laser systems, such states have been previously found for global and nearest-neighbor coupling, mainly in small networks. The technological advantage of large arrays has motivated us to study a system of 200 nonlocally coupled lasers with respect to the emerging collective dynamics. Moreover, the nonlocal nature of the coupling allows us to obtain robust chimera states with multiple (in)coherent domains. The crucial parameters are the coupling strength, the coupling phase and the range of the nonlocal interaction. We find that multiclustered chimera states exist in a wide region of the parameter space and we provide quantitative characterization for the obtained spatiotemporal patterns. By proposing two different experimental setups for the realization of the nonlocal coupling scheme, we are confident that our results can be confirmed in the laboratory.
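The nonlocal ring-coupling scheme can be illustrated with a phase-oscillator analogue (Kuramoto-Sakaguchi dynamics rather than the laser rate equations; N, the coupling range, the phase lag, and the integration settings are illustrative assumptions near the classic chimera regime). For parameters of this kind, coexisting coherent and incoherent domains typically develop, visible as spatial variation of the local order parameter:

```python
import numpy as np

# N identical phase oscillators on a ring, each coupled to its R nearest
# neighbours on both sides with phase lag alpha (nonlocal coupling).
N, R, alpha = 200, 60, 1.46
rng = np.random.default_rng(2)
theta = rng.uniform(-np.pi, np.pi, N)          # random initial phases

idx = np.arange(N)
offsets = np.concatenate([np.arange(1, R + 1), -np.arange(1, R + 1)])
nbrs = (idx[:, None] + offsets[None, :]) % N   # (N, 2R) neighbour indices

def dtheta(th):
    # nonlocal mean of sin(theta_j - theta_i - alpha) over the 2R neighbours
    return np.sin(th[nbrs] - th[:, None] - alpha).mean(axis=1)

dt = 0.025
for _ in range(4000):                          # plain Euler integration
    theta = theta + dt * dtheta(theta)

# local order parameter: near 1 in coherent domains, lower in incoherent ones
acc = np.exp(1j * theta) + np.exp(1j * theta[nbrs]).sum(axis=1)
r_local = np.abs(acc) / (2 * R + 1)
print(r_local.min(), r_local.max())
```

Plotting `r_local` against oscillator index is the usual way to make the (in)coherent domains of a chimera visible.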
PC Software graphics tool for conceptual design of space/planetary electrical power systems
Truong, Long V.
1995-01-01
This paper describes the Decision Support System (DSS), a personal computer software graphics tool for designing conceptual space and/or planetary electrical power systems. By using the DSS, users can obtain desirable system design and operating parameters, such as system weight, electrical distribution efficiency, and bus power. With this tool, a large-scale specific power system was designed in a matter of days. It is an excellent tool to help designers make tradeoffs between system components, hardware architectures, and operation parameters in the early stages of the design cycle. The DSS is a user-friendly, menu-driven tool with online help and a custom graphical user interface. An example design and results are illustrated for a typical space power system with multiple types of power sources, frequencies, energy storage systems, and loads.
Designing key-dependent chaotic S-box with larger key space
International Nuclear Information System (INIS)
Yin Ruming; Yuan Jian; Wang Jian; Shan Xiuming; Wang Xiqin
2009-01-01
The construction of cryptographically strong substitution boxes (S-boxes) is an important concern in designing secure cryptosystems. The key-dependent S-boxes designed using chaotic maps have received increasing attention in recent years. However, the key space of such S-boxes does not seem to be sufficiently large due to the limited parameter range of discretized chaotic maps. In this paper, we propose a new key-dependent S-box based on the iteration of continuous chaotic maps. We explore the continuous-valued state space of chaotic systems, and devise the discrete mapping between the input and the output of the S-box. A key-dependent S-box is constructed with the logistic map in this paper. We show that its key space could be much larger than the current key-dependent chaotic S-boxes.
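The idea of deriving an S-box from the continuous-valued orbit of a chaotic map can be sketched as follows. This is a hypothetical construction in the spirit of the paper, not its exact mapping: the real-valued initial condition acts as the key, the transient is discarded, and the first occurrence of each quantized state defines a bijective 8-bit S-box.

```python
def chaotic_sbox(key_x0, r=3.99999, skip=1000):
    """Hypothetical key-dependent 8-bit S-box from the continuous logistic
    map x -> r*x*(1-x); the real-valued seed key_x0 in (0, 1) is the key."""
    x = key_x0
    for _ in range(skip):                 # discard the transient
        x = r * x * (1.0 - x)
    sbox, used = [], set()
    for _ in range(100_000):              # safety bound; fills far sooner
        x = r * x * (1.0 - x)             # x stays strictly inside (0, r/4]
        v = int(x * 256)                  # quantize the continuous state
        if v not in used:                 # keep first occurrences -> bijection
            used.add(v)
            sbox.append(v)
        if len(sbox) == 256:
            break
    return sbox

s1 = chaotic_sbox(0.123456789)
s2 = chaotic_sbox(0.123456790)            # key changed by only 1e-9
print(sorted(s1) == list(range(256)))     # True: a permutation of 0..255
print(s1 == s2)                           # False: tiny key change, new S-box
```

Because the key is a continuous value rather than a discretized map parameter, the effective key space is bounded by the floating-point resolution of the seed rather than by a small parameter range, which is the point the abstract makes.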
Interpolation of final geometry and result fields in process parameter space
Misiun, Grzegorz Stefan; Wang, Chao; Geijselaers, Hubertus J.M.; van den Boogaard, Antonius H.; Saanouni, K.
2016-01-01
Different routes to produce a product in a bulk forming process can be described by a limited set of process parameters. The parameters determine the final geometry as well as the distribution of state variables in the final shape. Ring rolling has been simulated using different parameter settings.
Lysenko, Alexander; Volk, Iurii
2018-03-01
We developed a cubic nonlinear theory describing the dynamics of a multiharmonic space-charge wave (SCW), with harmonic frequencies smaller than the critical frequency of the two-stream instability, for different relativistic electron beam (REB) parameters. A self-consistent system of differential equations for the multiharmonic SCW harmonic amplitudes is derived in the cubic nonlinear approximation. This system accounts for multiple three-wave parametric resonant interactions between wave harmonics as well as the two-stream instability effect. We investigated how different REB parameters, such as the input angle with respect to the focusing magnetic field, the average relativistic factor, the difference of the partial relativistic factors, and the plasma frequency of the partial beams, influence the frequency spectrum width and the multiharmonic SCW saturation levels. We suggest ways to increase the multiharmonic SCW frequency spectrum width for use in multiharmonic two-stream superheterodyne free-electron lasers, with the main purpose of forming a powerful multiharmonic electromagnetic wave.
Wang, S.; Huang, G. H.; Huang, W.; Fan, Y. R.; Li, Z.
2015-10-01
In this study, a fractional factorial probabilistic collocation method is proposed to reveal statistical significance of hydrologic model parameters and their multi-level interactions affecting model outputs, facilitating uncertainty propagation in a reduced dimensional space. The proposed methodology is applied to the Xiangxi River watershed in China to demonstrate its validity and applicability, as well as its capability of revealing complex and dynamic parameter interactions. A set of reduced polynomial chaos expansions (PCEs) only with statistically significant terms can be obtained based on the results of factorial analysis of variance (ANOVA), achieving a reduction of uncertainty in hydrologic predictions. The predictive performance of reduced PCEs is verified by comparing against standard PCEs and the Monte Carlo with Latin hypercube sampling (MC-LHS) method in terms of reliability, sharpness, and Nash-Sutcliffe efficiency (NSE). Results reveal that the reduced PCEs are able to capture hydrologic behaviors of the Xiangxi River watershed, and they are efficient functional representations for propagating uncertainties in hydrologic predictions.
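A reduced PCE of the kind described can be sketched on a toy problem: fit Legendre polynomial chaos coefficients by least squares and keep only the significant terms. The two-parameter model below is an illustrative stand-in for the hydrologic model, chosen to be sparse in the Legendre basis.

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(3)

def model(x1, x2):
    # toy "truth": 1 + 2*P1(x1) + 0.5*P2(x2), with P2(x) = (3x^2 - 1)/2
    return 1.0 + 2.0 * x1 + 0.5 * (3.0 * x2**2 - 1.0) / 2.0

def phi(order, x):                        # Legendre polynomial P_order(x)
    c = np.zeros(order + 1)
    c[order] = 1.0
    return L.legval(x, c)

# total-degree-2 basis, inputs uniform on [-1, 1], least-squares fit
indices = [(i, j) for i in range(3) for j in range(3) if i + j <= 2]
X = rng.uniform(-1, 1, (200, 2))
y = model(X[:, 0], X[:, 1])
A = np.column_stack([phi(i, X[:, 0]) * phi(j, X[:, 1]) for i, j in indices])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# keep only the significant terms: the "reduced" expansion
significant = {ij: round(float(c), 3) for ij, c in zip(indices, coef)
               if abs(c) > 1e-8}
print(significant)   # {(0, 0): 1.0, (0, 2): 0.5, (1, 0): 2.0}
```

Dropping the near-zero terms, as the factorial ANOVA does in the paper, leaves a much smaller expansion to propagate through the uncertainty analysis.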
PHYSICAL PROPERTIES OF LARGE AND SMALL GRANULES IN SOLAR QUIET REGIONS
Energy Technology Data Exchange (ETDEWEB)
Yu Daren; Xie Zongxia; Hu Qinghua [Harbin Institute of Technology, Harbin 150001 (China); Yang Shuhong; Zhang Jun; Wang Jingxiu, E-mail: caddiexie@hotmail.com, E-mail: zjun@ourstar.bao.ac.cn [Key Laboratory of Solar Activity, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012 (China)
2011-12-10
The normal mode observations of seven quiet regions obtained by the Hinode spacecraft are analyzed to study the physical properties of granules. An artificial intelligence technique is introduced to automatically find the spatial distribution of granules in feature spaces. In this work, we investigate the dependence of granular continuum intensity, mean Doppler velocity, and magnetic fields on granular diameter. We recognized 71,538 granules by an automatic segmentation technique and then extracted five properties to describe the granules: diameter, continuum intensity, Doppler velocity, and longitudinal and transverse magnetic flux density. To automatically explore the intrinsic structures of the granules in the five-dimensional parameter space, the X-means clustering algorithm and one-rule classifier are introduced to define the rules for classifying the granules. It is found that diameter is a dominating parameter in classifying the granules, and two families of granules are derived: small granules with diameters smaller than 1.″44, and large granules with diameters larger than 1.″44. Based on statistical analysis of the detected granules, the following results are derived: (1) the averages of diameter, continuum intensity, and Doppler velocity in the upward direction of large granules are larger than those of small granules; (2) the averages of absolute longitudinal, transverse, and unsigned flux density of large granules are smaller than those of small granules; (3) for small granules, the average of continuum intensity increases with their diameters, while the averages of Doppler velocity, transverse, absolute longitudinal, and unsigned magnetic flux density decrease with their diameters; however, the mean properties of large granules are stable; (4) the intensity distributions of all granules and small granules do not satisfy a Gaussian distribution, while that of large granules almost agrees with a normal distribution with a peak at 1.04 I
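The clustering step can be sketched with synthetic granule features (illustrative numbers, not Hinode data): a plain k-means in a small feature space, followed by a one-rule classifier that thresholds on diameter alone, mirroring the finding that diameter dominates the classification.

```python
import numpy as np

rng = np.random.default_rng(4)

# synthetic granules: columns are diameter (arcsec), continuum intensity,
# upward Doppler velocity (km/s); two populations by construction
small = np.column_stack([rng.normal(1.0, 0.2, 500),
                         rng.normal(1.00, 0.02, 500),
                         rng.normal(0.4, 0.2, 500)])
large = np.column_stack([rng.normal(2.0, 0.3, 500),
                         rng.normal(1.04, 0.02, 500),
                         rng.normal(0.8, 0.2, 500)])
X = np.vstack([small, large])

def kmeans(X, k=2, iters=50):
    # deterministic init: the points with the extreme diameters
    centers = X[[X[:, 0].argmin(), X[:, 0].argmax()]]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(X)

# one-rule classifier on diameter alone: threshold midway between centers
thr = centers[:, 0].mean()
rule = (X[:, 0] > thr).astype(int)
agreement = max(np.mean(rule == labels), np.mean(rule != labels))
print(round(float(thr), 2), float(agreement))
```

The high agreement between the diameter-only rule and the full multivariate clustering is the synthetic analogue of the paper's one-rule result.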
Unequal arm space-borne gravitational wave detectors
International Nuclear Information System (INIS)
Larson, Shane L.; Hellings, Ronald W.; Hiscock, William A.
2002-01-01
Unlike ground-based interferometric gravitational wave detectors, large space-based systems will not be rigid structures. When the end stations of the laser interferometer are freely flying spacecraft, the armlengths will change due to variations in the spacecraft positions along their orbital trajectories, so the precise equality of the arms that is required in a laboratory interferometer to cancel laser phase noise is not possible. However, using a method discovered by Tinto and Armstrong, a signal can be constructed in which laser phase noise exactly cancels out, even in an unequal arm interferometer. We examine the case where the ratio of the armlengths is a variable parameter, and compute the averaged gravitational wave transfer function as a function of that parameter. Example sensitivity curve calculations are presented for the expected design parameters of the proposed LISA interferometer, comparing it to a similar instrument with one arm shortened by a factor of 100, showing how the ratio of the armlengths will affect the overall sensitivity of the instrument
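The Tinto-Armstrong cancellation is easy to verify numerically. In the discrete-time sketch below (integer sample delays and white laser phase noise; a toy, not a full LISA response model), each signal is a round-trip phase comparison contaminated by laser noise p(t), and the unequal-arm combination removes p identically:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10000
p = rng.standard_normal(n)            # laser phase noise samples
d1, d2 = 140, 23                      # unequal round-trip delays (samples)

def delay(x, k):                      # x(t - k), zero-padded at the start
    return np.concatenate([np.zeros(k), x[:len(x) - k]])

s1 = delay(p, d1) - p                 # arm-1 round-trip phase comparison
s2 = delay(p, d2) - p                 # arm-2 round-trip phase comparison

# Tinto-Armstrong unequal-arm combination: re-delay each signal by the
# OTHER arm's delay and subtract; every p term appears twice with
# opposite sign and cancels.
X = (s1 - delay(s1, d2)) - (s2 - delay(s2, d1))
print(np.max(np.abs(X)))              # ~0 (machine precision): noise cancels
```

A gravitational-wave signal, entering the two arms with different transfer functions, does not cancel in `X`, which is why the combination preserves sensitivity while removing the dominant noise.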
Vernon, Ian; Liu, Junli; Goldstein, Michael; Rowe, James; Topping, Jen; Lindsey, Keith
2018-01-02
Many mathematical models have now been employed across every area of systems biology. These models increasingly involve large numbers of unknown parameters, have complex structure which can result in substantial evaluation time relative to the needs of the analysis, and need to be compared to observed data of various forms. The correct analysis of such models usually requires a global parameter search, over a high-dimensional parameter space, that incorporates and respects the most important sources of uncertainty. This can be an extremely difficult task, but it is essential for any meaningful inference or prediction to be made about any biological system. It hence represents a fundamental challenge for the whole of systems biology. Bayesian statistical methodology for the uncertainty analysis of complex models is introduced, which is designed to address the high-dimensional global parameter search problem. Bayesian emulators that mimic the systems biology model but which are extremely fast to evaluate are embedded within an iterative history match: an efficient method to search high-dimensional spaces within a more formal statistical setting, while incorporating major sources of uncertainty. The approach is demonstrated via application to a model of hormonal crosstalk in Arabidopsis root development, which has 32 rate parameters, for which we identify the sets of rate parameter values that lead to acceptable matches between model output and observed trend data. The multiple insights into the model's structure that this analysis provides are discussed. The methodology is applied to a second related model, and the biological consequences of the resulting comparison, including the evaluation of gene functions, are described. Bayesian uncertainty analysis for complex models using both emulators and history matching is shown to be a powerful technique that can greatly aid the study of a large class of systems biology models. It both provides insight into model behaviour
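One wave of history matching can be sketched in a few lines: evaluate an emulator (here a cheap hypothetical function stands in for it) over a parameter sweep, compute an implausibility measure, and discard points beyond the conventional three-sigma cutoff. The observation value, the variances, and the parameter ranges are illustrative assumptions, not values from the Arabidopsis model.

```python
import numpy as np

rng = np.random.default_rng(6)

def emulator(k1, k2):                  # cheap stand-in for a slow simulator
    return k1 * np.exp(-k2)

z_obs = 1.2                            # observed output (illustrative)
var_obs, var_disc = 0.01, 0.005        # observation and model-discrepancy variance

# wave 1: a coarse random sweep of the 2D rate-parameter space
k1 = rng.uniform(0, 5, 20000)
k2 = rng.uniform(0, 3, 20000)
out = emulator(k1, k2)

# implausibility: standardized distance between emulated output and data
I = np.abs(out - z_obs) / np.sqrt(var_obs + var_disc)
non_implausible = I < 3.0              # the conventional 3-sigma cutoff
frac = float(non_implausible.mean())
print(round(frac, 3))                  # fraction of the space surviving wave 1
```

Subsequent waves refit the emulator only on the surviving region, which is how history matching progressively zooms in on the acceptable rate-parameter sets described in the abstract.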
National Research Council Canada - National Science Library
Waltham, N. R; Prydderch, M; Mapson-Menard, H; Morrissey, Q; Turchetta, R; Pool, P; Harris, A
2005-01-01
We describe our programme to develop a large-format science-grade CMOS active pixel sensor for future space science missions, and in particular an extreme ultra-violet spectrograph for solar physics...
The Helmholtz Hierarchy: Phase Space Statistics of Cold Dark Matter
Tassev, Svetlin
2010-01-01
We present a new formalism to study large-scale structure in the universe. The result is a hierarchy (which we call the "Helmholtz Hierarchy") of equations describing the phase space statistics of cold dark matter (CDM). The hierarchy features a physical ordering parameter which interpolates between the Zel'dovich approximation and fully-fledged gravitational interactions. The results incorporate the effects of stream crossing. We show that the Helmholtz hierarchy is self-consistent and obeys...
GMC COLLISIONS AS TRIGGERS OF STAR FORMATION. I. PARAMETER SPACE EXPLORATION WITH 2D SIMULATIONS
Energy Technology Data Exchange (ETDEWEB)
Wu, Benjamin [Department of Physics, University of Florida, Gainesville, FL 32611 (United States); Loo, Sven Van [School of Physics and Astronomy, University of Leeds, Leeds LS2 9JT (United Kingdom); Tan, Jonathan C. [Departments of Astronomy and Physics, University of Florida, Gainesville, FL 32611 (United States); Bruderer, Simon, E-mail: benwu@phys.ufl.edu [Max Planck Institute for Extraterrestrial Physics, Giessenbachstrasse 1, D-85748 Garching (Germany)
2015-09-20
We utilize magnetohydrodynamic (MHD) simulations to develop a numerical model for giant molecular cloud (GMC)–GMC collisions between nearly magnetically critical clouds. The goal is to determine if, and under what circumstances, cloud collisions can cause pre-existing magnetically subcritical clumps to become supercritical and undergo gravitational collapse. We first develop and implement new photodissociation region based heating and cooling functions that span the atomic to molecular transition, creating a multiphase ISM and allowing modeling of non-equilibrium temperature structures. Then in 2D and with ideal MHD, we explore a wide parameter space of magnetic field strength, magnetic field geometry, collision velocity, and impact parameter and compare isolated versus colliding clouds. We find factors of ∼2–3 increase in mean clump density from typical collisions, with strong dependence on collision velocity and magnetic field strength, but ultimately limited by flux-freezing in 2D geometries. For geometries enabling flow along magnetic field lines, greater degrees of collapse are seen. We discuss observational diagnostics of cloud collisions, focussing on ¹³CO(J = 2–1), ¹³CO(J = 3–2), and ¹²CO(J = 8–7) integrated intensity maps and spectra, which we synthesize from our simulation outputs. We find that the ratio of J = 8–7 to lower-J emission is a powerful diagnostic probe of GMC collisions.
Tracking in Object Action Space
DEFF Research Database (Denmark)
Krüger, Volker; Herzog, Dennis
2013-01-01
the space of the object affordances, i.e., the space of possible actions that are applied on a given object. This way, 3D body tracking reduces to action tracking in the object (and context) primed parameter space of the object affordances. This reduces the high-dimensional joint-space to a low...
Energy Technology Data Exchange (ETDEWEB)
Plesko, Catherine S [Los Alamos National Laboratory; Clement, R Ryan [Los Alamos National Laboratory; Weaver, Robert P [Los Alamos National Laboratory; Bradley, Paul A [Los Alamos National Laboratory; Huebner, Walter F [Los Alamos National Laboratory
2009-01-01
The mitigation of impact hazards resulting from Earth-approaching asteroids and comets has received much attention in the popular press. However, many questions remain about the near-term and long-term feasibility and appropriate application of all proposed methods. Recent and ongoing ground- and space-based observations of small solar-system body composition and dynamics have revolutionized our understanding of these bodies (e.g., Ryan (2000), Fujiwara et al. (2006), and Jedicke et al. (2006)). Ongoing increases in computing power and algorithm sophistication make it possible to calculate the response of these inhomogeneous objects to proposed mitigation techniques. Here we present the first phase of a comprehensive hazard mitigation planning effort undertaken by Southwest Research Institute and Los Alamos National Laboratory. We begin by reviewing the parameter space of the object's physical and chemical composition and trajectory. We then use the radiation hydrocode RAGE (Gittings et al. 2008), Monte Carlo N-Particle (MCNP) radiation transport (see Clement et al., this conference), and N-body dynamics codes to explore the effects these variations in object properties have on the coupling of energy into the object from a variety of mitigation techniques, including deflection and disruption by nuclear and conventional munitions, and a kinetic impactor.
International Nuclear Information System (INIS)
Zhang Zhongcan; Hu Chenguo; Fang Zhenyun
1998-01-01
The authors study the method which directly adopts the azimuthal angles and the rotation angle of the axis to describe the evolving process of the angular momentum eigenstates under the space rotation transformation. The authors obtain the angular momentum rotation and multi-rotation matrix elements' path integral, which evolves with the parameter λ (0→θ, where θ is the rotation angle), and establish the general method of treating the functional (path) integral as normal multi-integrals.
Large area, low cost space solar cells with optional wraparound contacts
Michaels, D.; Mendoza, N.; Williams, R.
1981-01-01
Design parameters for two large area, low cost solar cells are presented, and electron irradiation testing, thermal alpha testing, and cell processing are discussed. The devices are a 2 ohm-cm base resistivity silicon cell with an evaporated aluminum reflector produced in a dielectric wraparound cell, and a 10 ohm-cm silicon cell with the BSF/BSR combination and a conventional contact system. Both cells are 5.9 x 5.9 cm and require 200 micron thick silicon material due to mission weight constraints. Normalized values for open circuit voltage, short circuit current, and maximum power calculations derived from electron radiation testing are given. In addition, thermal alpha testing values of absorptivity and emittance are included. A pilot cell processing run produced cells averaging 14.4% efficiencies at AMO 28 C. Manufacturing for such cells will be on a mechanized process line, and the area of coverslide application technology must be considered in order to achieve cost effective production.
Importance sampling large deviations in nonequilibrium steady states. I
Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T.
2018-03-01
Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.
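The quantity being estimated here, and the failure mode named at the end of the abstract, can both be seen in a toy version. Assuming a biased Brownian walker with Gaussian increments (one of the models the paper uses), the scaled cumulant generating function can be estimated by naive direct Monte Carlo; the exponential weight exp(sA) is exactly the term whose variance diverges with the bias parameter, motivating guiding functions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_traj, n_steps, mu, sigma = 20000, 100, 0.1, 1.0

# Biased Brownian walker: A_t is the sum of Gaussian increments along a trajectory.
steps = rng.normal(mu, sigma, size=(n_traj, n_steps))
A = steps.sum(axis=1)

def scgf(s):
    # Naive estimator of the scaled cumulant generating function
    # psi(s) = (1/t) ln E[exp(s * A_t)].  Its variance explodes for
    # large |s|, which is why guided/importance sampling is needed.
    return np.log(np.mean(np.exp(s * A))) / n_steps

# For Gaussian increments the exact answer is psi(s) = s*mu + s^2*sigma^2/2.
s = 0.2
approx, exact = scgf(s), s * mu + s**2 * sigma**2 / 2
```

At s = 0.2 the naive estimate is still accurate; pushing s much larger makes the average dominated by exponentially rare trajectories and the estimator unreliable.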
Importance sampling large deviations in nonequilibrium steady states. I.
Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T
2018-03-28
Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.
International Nuclear Information System (INIS)
Jung, Bongki; Park, Min; Heo, Sung Ryul; Kim, Tae-Seong; Jeong, Seung Ho; Chang, Doo-Hee; Lee, Kwang Won; In, Sang-Ryul
2016-01-01
Highlights: • High power magnetic bucket-type arc plasma source for the VEST NBI system is developed with modifications based on the prototype plasma source for KSTAR. • Plasma parameters in pulse duration are measured to characterize the plasma source. • High plasma density and good uniformity are achieved at the low operating pressure below 1 Pa. • Required ion beam current density is confirmed by analysis of plasma parameters and results of a particle balance model. - Abstract: A large-scale hydrogen arc plasma source was developed at the Korea Atomic Energy Research Institute for a high power pulsed NBI system of VEST, which is a compact spherical tokamak at Seoul National University. One of the research targets of VEST is to study innovative tokamak operating scenarios. For this purpose, a high current density and uniform large-scale pulse plasma source is required to satisfy the target ion beam power efficiently. Therefore, optimizing the plasma parameters of the ion source, such as the electron density, temperature, and plasma uniformity, is conducted by changing the operating conditions of the plasma source. Furthermore, ion species of the hydrogen plasma source are analyzed using a particle balance model to increase the monatomic fraction, which is another essential parameter for increasing the ion beam current density. In conclusion, efficient operating conditions are presented from the results of the optimized plasma parameters, and the extractable ion beam current is calculated.
Exploration of the search space of the in-core fuel management problem by knowledge-based techniques
International Nuclear Information System (INIS)
Galperin, A.
1995-01-01
The process of generating reload configuration patterns is presented as a search procedure. The search space of the problem is found to contain ∼10¹² possible problem states. If computational resources and execution time necessary to evaluate a single solution are taken into account, this problem may be described as a "large space search problem." Understanding of the structure of the search space, i.e., distribution of the optimal (or nearly optimal) solutions, is necessary to choose an appropriate search method and to utilize adequately domain heuristic knowledge. A worth function is developed based on two performance parameters: cycle length and power peaking factor. A series of numerical experiments was carried out; 300,000 patterns were generated in 40 sessions. All these patterns were analyzed by simulating the power production cycle and by evaluating the two performance parameters. The worth function was calculated and plotted. Analysis of the worth function reveals quite a complicated search space structure. The fine structure shows an extremely large number of local peaks: about one peak per hundred configurations. The direct implication of this discovery is that within a search space of 10¹² states, there are ∼10¹⁰ local optima. Further consideration of the worth function shape shows that the distribution of the local optima forms a contour with much slower variations, where "better" or "worse" groups of patterns are spaced within a few thousand or tens of thousands of configurations, and finally very broad subregions of the whole space display variations of the worth function, where optimal regions include tens of thousands of patterns and are separated by hundreds of thousands and millions
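The structure described here, a slow "contour" overlaid with dense local peaks that trap greedy search, is easy to mimic on synthetic data. The sketch below is a hypothetical 1D stand-in, not the reload-pattern worth function itself; the measured peak density depends on how correlated the fine structure is, so only the qualitative picture carries over.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 1D worth function: a slowly varying contour plus
# fine-grained noise, mimicking the reported two-scale structure.
n = 100_000
contour = np.sin(np.linspace(0, 20 * np.pi, n))     # broad good/bad subregions
worth = contour + 0.5 * rng.normal(size=n)          # dense fine structure

# A local peak: strictly higher than both neighbours.
is_peak = (worth[1:-1] > worth[:-2]) & (worth[1:-1] > worth[2:])
peak_density = is_peak.sum() / n

# Greedy hill climbing from a random start stalls at the first local peak:
i = int(rng.integers(2, n - 2))
while worth[i + 1] > worth[i] or worth[i - 1] > worth[i]:
    i = i + 1 if worth[i + 1] > worth[i - 1] else i - 1
stuck_gap = worth.max() - worth[i]   # typically far from the global optimum
```

Counting peaks this way is the 1D analogue of the paper's estimate that ∼10¹² states contain ∼10¹⁰ local optima, and the stalled climb illustrates why hill climbing alone is inadequate on such a landscape.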
Hong, Min-Ho; Son, Jun Sik; Kwon, Tae-Yub
2018-03-01
The selective laser melting (SLM) process parameters, which directly determine the melting behavior of the metallic powders, greatly affect the nanostructure and surface roughness of the resulting 3D object. This study investigated the effect of various laser process parameters (laser power, scan rate, and scan line spacing) on the surface roughness of a nickel-chromium (Ni-Cr) alloy that was three-dimensionally (3D) constructed using SLM. Single-line formation tests were used to determine the optimal laser power of 200 W and scan rate of 98.8 mm/s, which resulted in beads with an optimal profile. In the subsequent multi-layer formation tests, the 3D object with the smoothest surface (Ra = 1.3 μm) was fabricated at a scan line spacing of 60 μm (overlap ratio = 73%). Narrow scan line spacing (and thus large overlap ratios) was preferred over wide scan line spacing to reduce the surface roughness of the 3D body. The findings of this study suggest that the laser power, scan rate, and scan line spacing are the key factors that control the surface quality of Ni-Cr alloys produced by SLM.
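The quoted spacing and overlap figures are mutually consistent under a common (assumed) definition of overlap ratio, r = 1 − s/d, where s is the scan line spacing and d the melt-track width; the paper may define the ratio differently.

```python
# Assumed relation between scan line spacing s, melt-track width d,
# and overlap ratio r:  r = 1 - s / d  (a common definition).
def overlap_ratio(spacing_um, track_width_um):
    return 1.0 - spacing_um / track_width_um

# Width implied by the reported s = 60 um at r = 73%:
track_width = 60.0 / (1.0 - 0.73)        # about 222 um
r = overlap_ratio(60.0, track_width)     # recovers 0.73
```

Under this definition, narrowing the spacing at fixed track width raises the overlap ratio, matching the abstract's observation that narrow spacing (large overlap) gives the smoother surface.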
Joudaki, Shahab; Blake, Chris; Johnson, Andrew; Amon, Alexandra; Asgari, Marika; Choi, Ami; Erben, Thomas; Glazebrook, Karl; Harnois-Déraps, Joachim; Heymans, Catherine; Hildebrandt, Hendrik; Hoekstra, Henk; Klaes, Dominik; Kuijken, Konrad; Lidman, Chris; Mead, Alexander; Miller, Lance; Parkinson, David; Poole, Gregory B.; Schneider, Peter; Viola, Massimo; Wolf, Christian
2018-03-01
We perform a combined analysis of cosmic shear tomography, galaxy-galaxy lensing tomography, and redshift-space multipole power spectra (monopole and quadrupole) using 450 deg2 of imaging data by the Kilo Degree Survey (KiDS-450) overlapping with two spectroscopic surveys: the 2-degree Field Lensing Survey (2dFLenS) and the Baryon Oscillation Spectroscopic Survey (BOSS). We restrict the galaxy-galaxy lensing and multipole power spectrum measurements to the overlapping regions with KiDS, and self-consistently compute the full covariance between the different observables using a large suite of N-body simulations. We methodically analyse different combinations of the observables, finding that the galaxy-galaxy lensing measurements are particularly useful in improving the constraint on the intrinsic alignment amplitude, while the multipole power spectra are useful in tightening the constraints along the lensing degeneracy direction. The fully combined constraint on S_8 ≡ σ_8√(Ω_m/0.3) = 0.742 ± 0.035, which is an improvement by 20 per cent compared to KiDS alone, corresponds to a 2.6σ discordance with Planck, and is not significantly affected by fitting to a more conservative set of scales. Given the tightening of the parameter space, we are unable to resolve the discordance with an extended cosmology that is simultaneously favoured in a model selection sense, including the sum of neutrino masses, curvature, evolving dark energy and modified gravity. The complementarity of our observables allows for constraints on modified gravity degrees of freedom that are not simultaneously bounded with either probe alone, and up to a factor of three improvement in the S_8 constraint in the extended cosmology compared to KiDS alone.
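For reference, the derived parameter constrained here is a one-line computation; the inputs below are illustrative only, not the KiDS or Planck best-fit values.

```python
import math

def S8(sigma8, omega_m):
    # S_8 = sigma_8 * sqrt(Omega_m / 0.3): the lensing amplitude parameter,
    # chosen because weak lensing constrains this combination most tightly.
    return sigma8 * math.sqrt(omega_m / 0.3)

# At the pivot Omega_m = 0.3, S_8 equals sigma_8, so the quoted
# S_8 = 0.742 would correspond to sigma_8 = 0.742 there:
example = S8(0.742, 0.3)
```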
Impact of rapid condensations of large vapor spaces on natural circulation in integral systems
International Nuclear Information System (INIS)
Wang, Z.; Almenas, K.; DiMarzo, M.; Hsu, Y.Y.; Unal, C.
1992-01-01
In this study we demonstrated that the Interruption-Resumption flow mode (IRM) observed in the University of Maryland 2x4 loop is a unique and effective natural circulation cooling mode. The IRM flow mode consists of a series of large flow cycles which are initiated from a quiescent steady-state flow condition by periodic rapid condensation of large vapor spaces. The significance of this mass/energy transport mechanism is that it cannot be evaluated using the techniques developed for the commonly known density-driven natural circulation cooling mode. We also demonstrated that the rapid condensation mechanism essentially acts as a strong amplifier which will augment small perturbations and will activate several flow phenomena. The interplay of the phenomena involves a degree of randomness. This poses two important implications. First, the study of an isolated flow phenomenon is not sufficient for the understanding of the system-wide IRM fluid movement. Second, the duplication of reactor transients which involves randomness can be achieved only within certain bounds. The modeling of such transients by deterministic computer codes requires recognition of this physical reality. (orig.)
Changes in Periodontal and Microbial Parameters after the Space ...
African Journals Online (AJOL)
Aim: This study aims to evaluate the clinical and microbiological changes accompanying the inflammatory process of periodontal tissues during treatment with space maintainers (SMs). Materials and Methods: The children were separated into fixed (Group 1, n = 20) and removable (Group 2, n = 20) appliance groups.
International Nuclear Information System (INIS)
Vengalattore, M.; Conroy, R.S.; Prentiss, M.G.
2004-01-01
The phase space density of dense, cylindrical clouds of atoms in a 2D magneto-optic trap is investigated. For a large number of trapped atoms (>10⁸), the density of a spherical cloud is limited by photon reabsorption. However, as the atom cloud is deformed to reduce the radial optical density, the temperature of the atoms decreases due to the suppression of multiple scattering leading to an increase in the phase space density. A density of 2×10⁻⁴ has been achieved in a magneto-optic trap containing 2×10⁸ atoms.
Rosado-Souza, Laise; Scossa, Federico; Chaves, Izabel S; Kleessen, Sabrina; Salvador, Luiz F D; Milagre, Jocimar C; Finger, Fernando; Bhering, Leonardo L; Sulpice, Ronan; Araújo, Wagner L; Nikoloski, Zoran; Fernie, Alisdair R; Nunes-Nesi, Adriano
2015-09-01
Collectively, the results presented improve upon the utility of an important genetic resource and attest to a complex genetic basis for differences in both leaf metabolism and fruit morphology between natural populations. Diversity of accessions within the same species provides an alternative method to identify physiological and metabolic traits that have large effects on growth regulation, biomass and fruit production. Here, we investigated physiological and metabolic traits as well as parameters related to plant growth and fruit production of 49 phenotypically diverse pepper accessions of Capsicum chinense grown ex situ under controlled conditions. Although single-trait analysis identified up to seven distinct groups of accessions, working with the whole data set by multivariate analyses allowed the separation of the 49 accessions in three clusters. Using all 23 measured parameters and data from the geographic origin for these accessions, positive correlations between the combined phenotypes and geographic origin were observed, supporting a robust pattern of isolation-by-distance. In addition, we found that fruit set was positively correlated with photosynthesis-related parameters, which, however, do not explain alone the differences in accession susceptibility to fruit abortion. Our results demonstrated that, although the accessions belong to the same species, they exhibit considerable natural intraspecific variation with respect to physiological and metabolic parameters, presenting diverse adaptation mechanisms and being a highly interesting source of information for plant breeders. This study also represents the first study combining photosynthetic, primary metabolism and growth parameters for Capsicum to date.
A more accurate modeling of the effects of actuators in large space structures
Hablani, H. B.
1981-01-01
The paper deals with finite actuators. A nonspinning three-axis stabilized space vehicle having a two-dimensional large structure and a rigid body at the center is chosen for analysis. The torquers acting on the vehicle are modeled as antisymmetric forces distributed in a small but finite area. In the limit they represent point torquers which also are treated as a special case of surface distribution of dipoles. Ordinary and partial differential equations governing the forced vibrations of the vehicle are derived by using Hamilton's principle. Associated modal inputs are obtained for both the distributed moments and the distributed forces. It is shown that the finite torquers excite the higher modes less than the point torquers. Modal cost analysis proves to be a suitable methodology to this end.
NIAC Phase I Study Final Report on Large Ultra-Lightweight Photonic Muscle Space Structures
Ritter, Joe
2016-01-01
way to make large inexpensive deployable mirrors where the cost is measured in millions, not billions like current efforts. For example we seek an interim goal within 10 years of a Hubble size (2.4m) primary mirror weighing 1 pound at a cost of 10K in materials. Described here is a technology using thin ultra lightweight materials where shape can be controlled simply with a beam of light, allowing imaging with incredibly low mass yet precisely shaped mirrors. These "Photonic Muscle" substrates will eventually make precision control of giant space apertures (mirrors) possible. OCCAM substrates make precision control of giant ultra light-weight mirror apertures possible. This technology is posed to create a revolution in remote sensing by making large ultra lightweight space telescopes a fiscal and material reality over the next decade.
Implications of the space-star anomaly in nd breakup
Energy Technology Data Exchange (ETDEWEB)
Howell, C.R.; Setze, H.R.; Tornow, W.; Braun, R.T.; Roper, C.D.; Salinas, F.; Gonzalez Trotter, D.E.; Walter, R.L. [Duke Univ., Durham, NC (United States). Dept. of Physics; Gloeckle, W. [Institut fuer Theoretische Physik II, Ruhr-Universitaet Bochum, 44780 Bochum (Germany); Hussein, A.H. [Physics Department, Univ. of Northern Columbia, Prince George, BC (Canada); Lambert, J.M. [Department of Physics, Georgetown University, Washington, DC 20057 (United States); Mertens, G. [Institut fuer Physik, Universitaet Tuebingen, 72074 Tuebingen (Germany); Slaus, I. [Rudjer Boskovic Institute, Zagreb (Croatia); Vlahovic, B. [Physics Department, North Carolina Central Univ., Durham, NC 27707 (United States); Witala, H. [Institute of Physics, Jagellonian University, Reymonta 4, 30059 Cracow (Poland)
1998-03-02
Cross-section measurements of six exit-channel configurations in nd breakup at 13.0 MeV are reported and compared to rigorous calculations. Except for the coplanar-star configuration, our data are consistent with previous data. The present data for all configurations, with the exception of the space star, are in good agreement with theoretical predictions. The previously observed large discrepancy between theory and data for the space-star configuration is confirmed in the present work. The inclusion of the Tucson-Melbourne 2π exchange three-nucleon force with a cutoff parameter that correctly binds the triton only changes the predicted cross section by 2%, a factor of 10 smaller than the amount needed to bring theory into agreement with data. (orig.) 9 refs.
Implications of the space-star anomaly in nd breakup
International Nuclear Information System (INIS)
Howell, C.R.; Setze, H.R.; Tornow, W.; Braun, R.T.; Roper, C.D.; Salinas, F.; Gonzalez Trotter, D.E.; Walter, R.L.; Hussein, A.H.; Lambert, J.M.; Mertens, G.; Slaus, I.; Vlahovic, B.; Witala, H.
1998-01-01
Cross-section measurements of six exit-channel configurations in nd breakup at 13.0 MeV are reported and compared to rigorous calculations. Except for the coplanar-star configuration, our data are consistent with previous data. The present data for all configurations, with the exception of the space star, are in good agreement with theoretical predictions. The previously observed large discrepancy between theory and data for the space-star configuration is confirmed in the present work. The inclusion of the Tucson-Melbourne 2π exchange three-nucleon force with a cutoff parameter that correctly binds the triton only changes the predicted cross section by 2%, a factor of 10 smaller than the amount needed to bring theory into agreement with data. (orig.)
Hornsby, Linda; Stahl, H. Philip; Hopkins, Randall C.
2010-01-01
The Advanced Technology Large Aperture Space Telescope (ATLAST) preliminary design concept consists of an 8 meter diameter monolithic primary mirror enclosed in an insulated, optical tube with stray light baffles and a sunshade. ATLAST will be placed in orbit about the Sun-Earth L2 and will experience constant exposure to the sun. The insulation on the optical tube and sunshade serve to cold bias the telescope, which helps to minimize thermal gradients. The primary mirror will be maintained at 280K with an active thermal control system. The geometric model of the primary mirror, optical tube, sun baffles, and sunshade was developed using Thermal Desktop(R). SINDA/FLUINT(R) was used for the thermal analysis, and the radiation environment was analyzed using RADCAD(R). A XX node model was executed in order to characterize the static performance and thermal stability of the mirror during maneuvers. This is important because long exposure observations, such as extra-solar terrestrial planet finding and characterization, require a very stable observatory wave front. Steady state thermal analyses served to predict mirror temperatures for several different sun angles. Transient analyses were performed in order to predict the thermal time constant of the primary mirror for a 20 degree slew or 30 degree roll maneuver. This paper describes the thermal model and provides details of the geometry, thermo-optical properties, and the environment which influences the thermal performance. All assumptions that were used in the analysis are also documented. Parametric analyses are summarized for design parameters including primary mirror coatings and sunshade configuration. Estimates of mirror heater power requirements are reported. The thermal model demonstrates results for the primary mirror heated from the back side and edges using a heater system with multiple independently controlled zones.
International Nuclear Information System (INIS)
Parzen, G.
1997-01-01
It will be shown that starting from a coordinate system where the 6 phase space coordinates are linearly coupled, one can go to a new coordinate system, where the motion is uncoupled, by means of a linear transformation. The original coupled coordinates and the new uncoupled coordinates are related by a 6×6 matrix, R. It will be shown that of the 36 elements of the 6×6 decoupling matrix R, only 12 elements are independent. A set of equations is given from which the 12 elements of R can be computed from the one period transfer matrix. This set of equations also allows the linear parameters, the β_i, α_i, i = 1, 3, for the uncoupled coordinates, to be computed from the one period transfer matrix.
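The full 6×6 decoupling construction is beyond a short sketch, but the uncoupled one-degree-of-freedom special case shows how linear parameters follow from a one-period transfer matrix. This is the standard textbook relation, not the paper's 12-element construction of R.

```python
import numpy as np

def twiss_from_one_period(M):
    # For an uncoupled 2x2 one-period transfer matrix (det M = 1),
    #   M = [[cos(mu) + alpha*sin(mu),  beta*sin(mu)],
    #        [-gamma*sin(mu),           cos(mu) - alpha*sin(mu)]],
    # so beta, alpha and the phase advance mu follow from the elements:
    cos_mu = 0.5 * (M[0, 0] + M[1, 1])
    sin_mu = np.sign(M[0, 1]) * np.sqrt(1.0 - cos_mu**2)
    beta = M[0, 1] / sin_mu
    alpha = 0.5 * (M[0, 0] - M[1, 1]) / sin_mu
    return beta, alpha, np.arctan2(sin_mu, cos_mu)

# Round trip: build M from known (beta, alpha, mu) and recover them.
beta0, alpha0, mu0 = 12.0, -0.8, 1.1
M = np.array([[np.cos(mu0) + alpha0 * np.sin(mu0), beta0 * np.sin(mu0)],
              [-(1 + alpha0**2) / beta0 * np.sin(mu0),
               np.cos(mu0) - alpha0 * np.sin(mu0)]])
beta, alpha, mu = twiss_from_one_period(M)
```

In the coupled 6-dimensional case the same idea applies after the R transformation has block-diagonalized the one-period matrix.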
Parameter spaces for linear and nonlinear whistler-mode waves
International Nuclear Information System (INIS)
Summers, Danny; Tang, Rongxin; Omura, Yoshiharu; Lee, Dong-Hun
2013-01-01
We examine the growth of magnetospheric whistler-mode waves which comprises a linear growth phase followed by a nonlinear growth phase. We construct time-profiles for the wave amplitude that smoothly match at the transition between linear and nonlinear wave growth. This matching procedure can only take place over a limited “matching region” in (N_h/N_0, A_T)-space, where A_T is the electron thermal anisotropy, N_h is the hot (energetic) electron number density, and N_0 is the cold (background) electron number density. We construct this matching region and determine how the matching wave amplitude varies throughout the region. Further, we specify a boundary in (N_h/N_0, A_T)-space that separates a region where only linear chorus wave growth can occur from the region in which fully nonlinear chorus growth is possible. We expect that this boundary should prove of practical use in performing computationally expensive full-scale particle simulations, and in interpreting experimental wave data.
Determination of supersymmetric parameters with neural networks at the large hadron collider
International Nuclear Information System (INIS)
Bornhauser, Nicki
2013-12-01
The LHC is running and in the near future some signs of new physics may be measured. In this thesis it is assumed that the underlying theory of such a signal would be identified and that it is some kind of minimal supersymmetric extension of the Standard Model. Generally, the mapping from the measurable observables onto the parameter values of the supersymmetric theory is unknown. Instead, only the opposite direction is known, i.e. for fixed parameters the measurable observables can be computed with some uncertainties. In this thesis, the ability of artificial neural networks to determine this unknown function is demonstrated. At the end of a training process, the created networks are capable of calculating the parameter values with errors for an existing measurement. To do so, at first a set of mostly counting observables is introduced. In the following, the usefulness of these observables for the determination of supersymmetric parameters is checked. This is done by applying them on 283 pairs of parameter sets of a MSSM with 15 parameters. These pairs were found to be indistinguishable at the LHC by another study, even without the consideration of SM background. It can be shown that 260 of these pairs can be discriminated using the introduced observables. Without systematic errors even all pairs can be distinguished. Also with the consideration of SM background still most pairs can be disentangled (282 without and 237 with systematic errors). This result indicates the usefulness of the observables for the direct parameter determination. The performance of neural networks is investigated for four different parameter regions of the CMSSM. With the right set of observables, the neural network approach generally could also be used for any other (non-supersymmetric) theory. In each region, a reference point with around 1,000 events after cuts should be determined in the context of a LHC with a center of mass energy of 14 TeV and an integrated luminosity of 10 fb⁻¹
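The inverse-mapping idea, training a network on simulated (parameters → observables) pairs so it learns the unknown (observables → parameters) direction, can be sketched with a toy setup. Everything below is hypothetical: a made-up tanh forward map stands in for the MSSM simulation chain, and a tiny one-hidden-layer numpy network is trained to invert it.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical forward map: 2 theory parameters -> 5 noisy "counting observables".
W_true = rng.normal(size=(2, 5))
def forward(theta):
    return np.tanh(theta @ W_true) + 0.01 * rng.normal(size=(len(theta), 5))

# Training data: sample parameters, simulate observables.
theta = rng.uniform(-1, 1, size=(5000, 2))
obs = forward(theta)

# One-hidden-layer network trained by full-batch gradient descent
# to map observables back to parameters.
W1 = rng.normal(size=(5, 32)) * 0.3; b1 = np.zeros(32)
W2 = rng.normal(size=(32, 2)) * 0.3; b2 = np.zeros(2)
lr = 0.05
for _ in range(2000):
    h = np.tanh(obs @ W1 + b1)
    err = h @ W2 + b2 - theta               # mean-squared-error residual
    gW2 = h.T @ err / len(obs); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)          # backprop through tanh
    gW1 = obs.T @ dh / len(obs); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse = np.mean((np.tanh(obs @ W1 + b1) @ W2 + b2 - theta) ** 2)
```

Estimating parameter *errors* as the thesis does requires more than this point estimate, e.g. training on ensembles of pseudo-experiments, but the core regression step is the one shown.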
Evans, William Todd; Neely, Kelsay E.; Strauss, Alvin M.; Cook, George E.
2017-11-01
Friction Stir Welding has been proposed as an efficient and appropriate method for in-space welding. It has the potential to serve as a viable option for assembling large scale space structures. These large structures will require the use of natural in-space materials such as those available from iron meteorites. Impurities present in most iron meteorites limit their ability to be welded by other space welding techniques such as electron beam laser welding. This study investigates the ability to weld pieces of in situ Campo del Cielo meteorites by Friction Stir Spot Welding. Due to the rarity of the material, low carbon steel was used as a model material to determine welding parameters. Welded samples of low carbon steel, invar, and Campo del Cielo meteorite were compared and found to behave in similar ways. This study shows that meteorites can be Friction Stir Spot Welded and that they exhibit properties analogous to that of FSSW low carbon steel welds. Thus, iron meteorites can be regarded as another viable option for in-space or Martian construction.
Non-parametric co-clustering of large scale sparse bipartite networks on the GPU
DEFF Research Database (Denmark)
Hansen, Toke Jansen; Mørup, Morten; Hansen, Lars Kai
2011-01-01
of row and column clusters from a hypothesis space of an infinite number of clusters. To reach large scale applications of co-clustering we exploit that parameter inference for co-clustering is well suited for parallel computing. We develop a generic GPU framework for efficient inference on large scale...... sparse bipartite networks and achieve a speedup of two orders of magnitude compared to estimation based on conventional CPUs. In terms of scalability we find for networks with more than 100 million links that reliable inference can be achieved in less than an hour on a single GPU. To efficiently manage...
Nash equilibria in quantum games with generalized two-parameter strategies
International Nuclear Information System (INIS)
Flitney, Adrian P.; Hollenberg, Lloyd C.L.
2007-01-01
In the Eisert protocol for 2x2 quantum games [J. Eisert, et al., Phys. Rev. Lett. 83 (1999) 3077], a number of authors have investigated the features arising from making the strategic space a two-parameter subset of single qubit unitary operators. We argue that the new Nash equilibria and the classical-quantum transitions that occur are simply an artifact of the particular strategy space chosen. By choosing a different, but equally plausible, two-parameter strategic space we show that different Nash equilibria with different classical-quantum transitions can arise. We generalize the two-parameter strategies and also consider these strategies in a multiplayer setting
On the structure of physical space
Wisnivesky, D
2001-01-01
In this paper we develop a theory based on the postulate that the environment where physical phenomena take place is the space of four complex parameters of the linear group of transformations. Using these parameters as fundamental building blocks we construct ordinary space-time and the internal space. Lorentz invariance is built into the definition of external space, while the symmetry of the internal space, U(1)*SU(2), results as a consequence of the identification of the external coordinates. Thus, special relativity and the electroweak interaction symmetry ensue from the properties of the basic building blocks of physical space. Since internal and external space are derived from a common structure, there is no need to bring into the theory any additional hypothesis to account for the microscopic nature of the internal space, nor to introduce symmetry breaking mechanisms that would normally be required to force a splitting of the internal and external symmetries. As an outcome of the existence of a basic str...
Developing a NASA strategy for the verification of large space telescope observatories
Crooke, Julie A.; Gunderson, Johanna A.; Hagopian, John G.; Levine, Marie
2006-06-01
In July 2005, the Office of Program Analysis and Evaluation (PA&E) at NASA Headquarters was directed to develop a strategy for verification of the performance of large space telescope observatories, which occurs predominantly in a thermal vacuum test facility. A mission model of the expected astronomical observatory missions over the next 20 years was identified along with performance, facility and resource requirements. Ground testing versus alternatives was analyzed to determine the pros, cons and break points in the verification process. Existing facilities and their capabilities were examined across NASA, industry and other government agencies as well as the future demand for these facilities across NASA's Mission Directorates. Options were developed to meet the full suite of mission verification requirements, and performance, cost, risk and other analyses were performed. Findings and recommendations from the study were presented to the NASA Administrator and the NASA Strategic Management Council (SMC) in February 2006. This paper details the analysis, results, and findings from this study.
Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam’s Window*
Onorante, Luca; Raftery, Adrian E.
2015-01-01
Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam’s window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods. PMID:26917859
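The DMA recursion with a dynamic Occam's window can be sketched in a few lines. This is a hedged toy version, not the paper's implementation: two candidate models with fixed Gaussian predictive densities, a forgetting factor alpha, and a window that discards models whose weight falls below a fixed fraction of the best model's weight (all numbers are assumptions).

```python
import math

# Hedged sketch of the DMA weight recursion with a dynamic Occam's window.
# The forgetting factor, window cutoff, and toy Gaussian predictive densities
# are illustrative assumptions, not the paper's macroeconomic setup.

ALPHA, CUTOFF = 0.95, 0.01        # forgetting factor; drop models below 1% of best

def gaussian_pdf(y, mu, sigma):
    return math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def dma_step(weights, preds, y):
    """One DMA update: forget, multiply by predictive likelihood, renormalize,
    then prune models that fall outside the Occam's window."""
    raw = {k: w ** ALPHA for k, w in weights.items()}                 # forgetting
    raw = {k: raw[k] * gaussian_pdf(y, mu, sigma)
           for k, (mu, sigma) in preds.items() if k in raw}           # likelihood
    best = max(raw.values())
    kept = {k: v for k, v in raw.items() if v >= CUTOFF * best}       # Occam's window
    total = sum(kept.values())
    return {k: v / total for k, v in kept.items()}

# two candidate models predicting y_t; model "a" tracks the data, "b" does not
weights = {"a": 0.5, "b": 0.5}
for y in [1.0, 1.1, 0.9, 1.05, 0.95]:
    weights = dma_step(weights, {"a": (1.0, 0.2), "b": (3.0, 0.2)}, y)
```

Because the window is re-evaluated at every step, the active model set shrinks and grows dynamically instead of the full model space being carried along.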
Large-D gravity and low-D strings.
Emparan, Roberto; Grumiller, Daniel; Tanabe, Kentaro
2013-06-21
We show that in the limit of a large number of dimensions a wide class of nonextremal neutral black holes has a universal near-horizon limit. The limiting geometry is the two-dimensional black hole of string theory with a two-dimensional target space. Its conformal symmetry explains the properties of massless scalars found recently in the large-D limit. For black branes with string charges, the near-horizon geometry is that of the three-dimensional black strings of Horne and Horowitz. The analogies between the α' expansion in string theory and the large-D expansion in gravity suggest a possible effective string description of the large-D limit of black holes. We comment on applications to several subjects, in particular to the problem of critical collapse.
Briseño, Jessica; Herrera, Graciela S.
2010-05-01
Herrera (1998) proposed a method for the optimal design of groundwater quality monitoring networks that involves space and time in a combined form. The method was applied later by Herrera et al. (2001) and by Herrera and Pinder (2005). To get the estimates of the contaminant concentration being analyzed, this method uses a space-time ensemble Kalman filter based on a stochastic flow and transport model. When the method is applied, it is important that the characteristics of the stochastic model be congruent with field data, but, in general, it is laborious to manually achieve a good match between them. For this reason, the main objective of this work is to extend the space-time ensemble Kalman filter proposed by Herrera to estimate the hydraulic conductivity together with the hydraulic head and contaminant concentration, and to apply it in a synthetic example. The method has three steps: 1) Given the mean and the semivariogram of the natural logarithm of hydraulic conductivity (ln K), random realizations of this parameter are obtained through two alternatives: sequential Gaussian simulation (SGSim) and the Latin hypercube sampling method (LHC). 2) The stochastic model is used to produce hydraulic head (h) and contaminant concentration (C) realizations for each one of the conductivity realizations. With these realizations, the means of ln K, h and C are obtained (for h and C, the mean is calculated in space and time), together with the cross-covariance matrix of h, ln K and C in space and time. The covariance matrix is obtained by averaging products of the ln K, h and C realizations at the estimation points and times, and at the positions and times with data of the analyzed variables. The estimation points are the positions at which estimates of ln K, h or C are gathered; in an analogous way, the estimation times are those at which estimates of any of the three variables are gathered. 3) Finally, the ln K, h and C estimates are obtained using the space-time ensemble Kalman filter. The realization mean for each one
Remote sensing of refractivity from space for global observations of atmospheric parameters
International Nuclear Information System (INIS)
Gorbunov, M.E.; Sokolovskiy, S.V.
1993-01-01
This report presents the first results of computational simulations on the retrieval of meteorological parameters from space refractometric data on the basis of the ECHAM 3 model developed at the Max Planck Institute for Meteorology (Roeckner et al. 1992). For this purpose the grid fields of temperature, geopotential and humidity available from the model were interpolated and a continuous spatial field of refractivity (together with its first derivative) was generated. This field was used for calculating the trajectories of electromagnetic rays for the given orbits of transmitting and receiving satellites and for determining the quantities (incident angles or Doppler frequency shifts) measured at the receiving satellite during occultation. These quantities were then used for solving the inverse problem: retrieving the distribution of refractivity in the vicinity of the ray perigees. The retrieved refractivity was used to calculate pressure and temperature (using the hydrostatic equation and the equation of state). The results were compared with the initial data, and the retrieval errors were evaluated. The study shows that the refractivity can be retrieved with very high accuracy, in particular if a tomographic reconstruction is applied. Effects of humidity and temperature are not separable: stratospheric temperatures globally and upper-tropospheric temperatures at middle and high latitudes can be accurately retrieved, while other areas require humidity data. Alternatively, humidity can be retrieved if the temperature fields are known. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Dutta, Tanushree [Department of Civil & Environmental Engineering, Hanyang University, 222 Wangsimni-Ro, Seoul 04763 (Korea, Republic of); Kim, Ki-Hyun, E-mail: kkim61@hanyang.ac.kr [Department of Civil & Environmental Engineering, Hanyang University, 222 Wangsimni-Ro, Seoul 04763 (Korea, Republic of); Uchimiya, Minori [USDA-ARS Southern Regional Research Center, 1100 Robert E. Lee Boulevard, New Orleans, LA 70124 (United States); Kumar, Pawan [Department of Chemical Engineering, Indian Institute of Technology, Hauz Khas, New Delhi 11016 (India); Das, Subhasish; Bhattacharya, Satya Sundar [Soil & Agro-Bioengineering Lab, Department of Environmental Science, Tezpur University, Napaam 784028 (India); Szulejko, Jan [Department of Civil & Environmental Engineering, Hanyang University, 222 Wangsimni-Ro, Seoul 04763 (Korea, Republic of)
2016-11-15
Large-scale assemblies of people in a confined space can exert significant impacts on the local air chemistry due to human emissions of volatile organics. Variations of air-quality in such small scale can be studied by quantifying fingerprint volatile organic compounds (VOCs) such as acetone, toluene, and isoprene produced during concerts, movie screenings, and sport events (like the Olympics and the World Cup). This review summarizes the extent of VOC accumulation resulting from a large population in a confined area or in a small open area during sporting and other recreational activities. Apart from VOCs emitted directly from human bodies (e.g., perspiration and exhaled breath), those released indirectly from other related sources (e.g., smoking, waste disposal, discharge of food-waste, and use of personal-care products) are also discussed. Although direct and indirect emissions of VOCs from human may constitute <1% of the global atmospheric VOCs budget, unique spatiotemporal variations in VOCs species within a confined space can have unforeseen impacts on the local atmosphere to lead to acute human exposure to harmful pollutants.
Latent semantics of action verbs reflect phonetic parameters of intensity and emotional content
DEFF Research Database (Denmark)
Petersen, Michael Kai
2015-01-01
already in toddlers, this study explores whether articulatory and acoustic parameters may likewise differentiate the latent semantics of action verbs. Selecting 3 × 20 emotion, face, and hand related verbs known to activate premotor areas in the brain, their mutual cosine similarities were computed using...... latent semantic analysis (LSA), and the resulting adjacency matrices were compared based on two different large scale text corpora; HAWIK and TASA. Applying hierarchical clustering to identify common structures across the two text corpora, the verbs largely divide into combined mouth and hand movements...... versus emotional expressions. Transforming the verbs into their constituent phonemes, and projecting them into an articulatory space framed by tongue height and formant frequencies, the clustered small and large size movements appear differentiated by front versus back vowels corresponding to increasing...
Modal Analysis and Model Correlation of the Mir Space Station
Kim, Hyoung M.; Kaouk, Mohamed
2000-01-01
This paper will discuss on-orbit dynamic tests, modal analysis, and model refinement studies performed as part of the Mir Structural Dynamics Experiment (MiSDE). Mir is the Russian permanently manned Space Station whose construction first started in 1986. The MiSDE was sponsored by the NASA International Space Station (ISS) Phase 1 Office and was part of the Shuttle-Mir Risk Mitigation Experiment (RME). One of the main objectives for MiSDE is to demonstrate the feasibility of performing on-orbit modal testing on large space structures to extract modal parameters that will be used to correlate mathematical models. The experiment was performed over a one-year span on the Mir-alone and Mir with a Shuttle docked. A total of 45 test sessions were performed including: Shuttle and Mir thruster firings, Shuttle-Mir and Progress-Mir dockings, crew exercise and pushoffs, and ambient noise during night-to-day and day-to-night orbital transitions. Test data were recorded with a variety of existing and new instrumentation systems that included: the MiSDE Mir Auxiliary Sensor Unit (MASU), the Space Acceleration Measurement System (SAMS), the Russian Mir Structural Dynamic Measurement System (SDMS), the Mir and Shuttle Inertial Measurement Units (IMUs), and the Shuttle payload bay video cameras. Modal analysis was performed on the collected test data to extract modal parameters, i.e. frequencies, damping factors, and mode shapes. A special time-domain modal identification procedure was used on free-decay structural responses. The results from this study show that modal testing and analysis of large space structures is feasible within operational constraints. Model refinements were performed on both the Mir alone and the Shuttle-Mir mated configurations. The design sensitivity approach was used for refinement, which adjusts structural properties in order to match analytical and test modal parameters. To verify the refinement results, the analytical responses calculated using
Simpson, R.; Broussely, M.; Edwards, G.; Robinson, D.; Cozzani, A.; Casarosa, G.
2012-07-01
The National Physical Laboratory (NPL) and The European Space Research and Technology Centre (ESTEC) have performed for the first time successful surface temperature measurements using infrared thermal imaging in the ESTEC Large Space Simulator (LSS) under vacuum and with the Sun Simulator (SUSI) switched on during thermal qualification tests of the GAIA Deployable Sunshield Assembly (DSA). The thermal imager temperature measurements, with radiosity model corrections, show good agreement with thermocouple readings on well characterised regions of the spacecraft. In addition, the thermal imaging measurements identified potentially misleading thermocouple temperature readings and provided qualitative real-time observations of the thermal and spatial evolution of surface structure changes and heat dissipation during hot test loadings, which may yield additional thermal and physical measurement information through further research.
Herbei, Radu; Kubatko, Laura
2013-03-26
Markov chains are widely used for modeling in many areas of molecular biology and genetics. As the complexity of such models advances, it becomes increasingly important to assess the rate at which a Markov chain converges to its stationary distribution in order to carry out accurate inference. A common measure of convergence to the stationary distribution is the total variation distance, but this measure can be difficult to compute when the state space of the chain is large. We propose a Monte Carlo method to estimate the total variation distance that can be applied in this situation, and we demonstrate how the method can be efficiently implemented by taking advantage of GPU computing techniques. We apply the method to two Markov chains on the space of phylogenetic trees, and discuss the implications of our findings for the development of algorithms for phylogenetic inference.
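The estimator can be illustrated on a chain small enough that the stationary distribution is known exactly (a two-state toy chain, not the phylogenetic tree spaces of the paper): simulate many independent copies for t steps, form the empirical distribution of the end states, and take half the L1 distance to the stationary distribution.

```python
import random

# Toy Monte Carlo estimate of total variation distance to stationarity.
# The two-state chain P and its stationary distribution PI are assumptions
# chosen so the exact answer is known; the paper's chains live on tree space.

P = {0: [(0, 0.9), (1, 0.1)],     # transition probabilities from each state
     1: [(0, 0.2), (1, 0.8)]}
PI = {0: 2 / 3, 1: 1 / 3}         # stationary distribution of P (solves pi P = pi)

def step(state, rng):
    u, acc = rng.random(), 0.0
    for nxt, p in P[state]:
        acc += p
        if u < acc:
            return nxt
    return P[state][-1][0]

def tv_estimate(t, n, rng):
    """Half the L1 distance between the empirical distribution after t steps
    (n chains started from state 0) and the stationary distribution PI."""
    counts = {0: 0, 1: 0}
    for _ in range(n):
        s = 0
        for _ in range(t):
            s = step(s, rng)
        counts[s] += 1
    return 0.5 * sum(abs(counts[k] / n - PI[k]) for k in PI)

rng = random.Random(1)
d_mixed = tv_estimate(50, 20000, rng)   # well past the mixing time: near zero
d_early = tv_estimate(1, 20000, rng)    # after one step: clearly not mixed
```

The paper's contribution is making exactly this kind of replicated simulation cheap on large state spaces by running the independent chains in parallel on a GPU.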
Estimates for Parameter Littlewood-Paley gκ⁎ Functions on Nonhomogeneous Metric Measure Spaces
Directory of Open Access Journals (Sweden)
Guanghui Lu
2016-01-01
Full Text Available Let (X,d,μ) be a metric measure space which satisfies the geometrically doubling measure and the upper doubling measure conditions. In this paper, the authors prove that, under the assumption that the kernel of Mκ⁎ satisfies a certain Hörmander-type condition, Mκ⁎,ρ is bounded from the Lebesgue space Lp(μ) to the Lebesgue space Lp(μ) for p≥2 and is bounded from L1(μ) into L1,∞(μ). As a corollary, Mκ⁎,ρ is bounded on Lp(μ) for 1 &lt; p &lt; 2 and from the Hardy space H1(μ) into the Lebesgue space L1(μ).
Space for Ambitions: The Dutch Space Program in Changing European and Transatlantic Contexts
Baneke, D.M.
2014-01-01
Why would a small country like the Netherlands become active in space? The field was monopolized by large countries with large military establishments, especially in the early years of spaceflight. Nevertheless, the Netherlands established a space program in the late 1960s. In this paper I will
Spin Hall effect on a noncommutative space
International Nuclear Information System (INIS)
Ma Kai; Dulat, Sayipjamal
2011-01-01
We study the spin-orbital interaction and the spin Hall effect of an electron moving on a noncommutative space under the influence of a vector potential A⃗. On a noncommutative space, we find that the commutator between the vector potential A⃗ and the electric potential V₁(r⃗) of the lattice induces a new term, which can be treated as an effective electric field, and the spin Hall conductivity acquires a correction. On a noncommutative space, the spin current and spin Hall conductivity have distinct values in different directions and depend explicitly on the noncommutativity parameter. Once the spin Hall conductivity in different directions can be measured experimentally with a high level of accuracy, the data can be used to impose bounds on the value of the space noncommutativity parameter. We have also defined a new parameter, σ = ρθ (where ρ is the electron concentration and θ is the noncommutativity parameter), which can be measured experimentally. Our approach is based on the Foldy-Wouthuysen transformation, which gives a general Hamiltonian for a nonrelativistic electron moving on a noncommutative space.
Graviton collider effects in one and more large extra dimensions
International Nuclear Information System (INIS)
Giudice, Gian F.; Plehn, Tilman; Strumia, Alessandro
2005-01-01
Astrophysical bounds severely limit the possibility of observing collider signals of gravity with less than 3 flat extra dimensions. However, small distortions of the compactified space can lift the masses of the lightest graviton excitations, evading astrophysical bounds without affecting collider signals of quantum gravity. Following this procedure we reconsider theories with one large extra dimension. A slight space warping gives a model which is safe in the infrared against astrophysical and observational bounds, and which has the ultraviolet properties of gravity with a single flat extra dimension. We extend collider studies to the case of one extra dimension, pointing out its peculiarities. Finally, for a generic number of extra dimensions, we compare different channels in LHC searches for quantum gravity, introducing an ultraviolet cutoff as an additional parameter besides the Planck mass
THE EVOLUTION OF HETEROGENEOUS 'CLUMPY JETS': A PARAMETER STUDY
International Nuclear Information System (INIS)
Yirak, Kristopher; Schroeder, Ed; Frank, Adam; Cunningham, Andrew J.
2012-01-01
We investigate the role that discrete clumps embedded in an astrophysical jet play in the jet's morphology and line emission characteristics. By varying the clumps' size, density, position, and velocity, we cover a range of parameter space motivated by observations of objects such as the Herbig-Haro object HH 34. We extend the results presented in Yirak et al., including how analysis of individual observations may lead to spurious sinusoidal variation whose parameters vary widely over time, owing chiefly to interactions between clumps. The goodness of fit, while poor in all simulations, is best when clump-clump collisions are minimal. Our results indicate that a large velocity dispersion leads to a clump-clump collision-dominated flow which disrupts the jet beam. Finally, we present synthetic emission images of Hα and [S II] and note an excess of [S II] emission along the jet length as compared to observations. This suggests that observed beams undergo earlier processing, if they are present at all.
Parameter estimation in nonlinear models for pesticide degradation
International Nuclear Information System (INIS)
Richter, O.; Pestemer, W.; Bunte, D.; Diekkrueger, B.
1991-01-01
A wide class of environmental transfer models is formulated as ordinary or partial differential equations. With the availability of fast computers, the numerical solution of large systems became feasible. The main difficulty in performing a realistic and convincing simulation of the fate of a substance in the biosphere is not the implementation of numerical techniques but rather the incomplete data basis for parameter estimation. Parameter estimation is a synonym for statistical and numerical procedures that derive reasonable numerical values for model parameters from data. The classical method is the familiar linear regression technique, which dates back to the 18th century. Because it is easy to handle, linear regression has long been established as a convenient tool for analysing relationships. However, the wide use of linear regression has led to an overemphasis of linear relationships. In nature, most relationships are nonlinear, and linearization often gives a poor approximation of reality. Furthermore, pure regression models are not capable of mapping the dynamics of a process. Therefore, realistic models involve the evolution in time (and space), which leads in a natural way to the formulation of differential equations. To establish the link between data and dynamical models, advanced numerical parameter identification methods have been developed in recent years. This paper demonstrates the application of these techniques to estimation problems in the field of pesticide dynamics. (7 refs., 5 figs., 2 tabs.)
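As a concrete instance of the parameter-estimation problem, consider first-order pesticide decay C(t) = C0·exp(−kt). The sketch below fits C0 and k to synthetic (hypothetical) data by regressing log C on t; as the text cautions, such linearization is only a convenient baseline and can distort the error structure of noisy data, which is why iterative nonlinear identification methods are preferred in practice.

```python
import math

# Hedged illustration of parameter estimation for a first-order decay model
# C(t) = C0 * exp(-k t). The data are synthetic and noise-free; the log-linear
# regression used here is a simple baseline, not the methods of the paper.

def fit_decay(times, concs):
    """Least-squares fit of log C = log C0 - k t; returns (C0, k)."""
    xs, ys = times, [math.log(c) for c in concs]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return math.exp(ybar - slope * xbar), -slope

times = [0.0, 2.0, 4.0, 8.0, 16.0]            # sampling times (days, hypothetical)
true_c0, true_k = 10.0, 0.15
concs = [true_c0 * math.exp(-true_k * t) for t in times]   # synthetic concentrations

c0_hat, k_hat = fit_decay(times, concs)
```

With noise-free data the fit recovers the generating parameters exactly; with multiplicative noise the log transform remains well behaved, whereas additive noise would bias it, which motivates the nonlinear approaches discussed in the text.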
Rodriguez, G.; Scheid, R. E., Jr.
1986-01-01
This paper outlines methods for modeling, identification and estimation for the static shape determination of flexible structures. The shape estimation schemes are based on structural models specified by (possibly interconnected) elliptic partial differential equations. The identification techniques provide approximate knowledge of parameters in elliptic systems. The techniques are based on the method of maximum likelihood, which finds parameter values such that the likelihood functional associated with the system model is maximized. The estimation methods are obtained by means of a function-space approach that seeks the conditional mean of the state given the data and a white-noise characterization of model errors. The solutions are obtained in a batch-processing mode in which all the data are processed simultaneously. After methods for computing the optimal estimates are developed, an analysis of the second-order statistics of the estimates and of the related estimation error is conducted. In addition to outlining the above theoretical results, the paper presents typical flexible structure simulations illustrating the performance of the shape determination methods.
A Steam Jet Plume Simulation in a Large Bulk Space with a System Code MARS
International Nuclear Information System (INIS)
Bae, Sung Won; Chung, Bub Dong
2006-01-01
From May 2002, the OECD-SETH group has been carrying out the PANDA Project in order to provide an experimental database for multi-dimensional code assessment. The OECD-SETH group expects the PANDA Project to meet the increasing need for adequate experimental data on the 3D distribution of relevant variables, such as the temperature, velocity and steam-air concentrations, measured with sufficient resolution and accuracy. The scope of the PANDA Project is the mixture stratification and mixing phenomena in a large bulk space. A total of 24 test series are being performed at PSI, Switzerland. The PANDA facility consists of two main large vessels and one connection pipe. Within the large vessels, a steam injection nozzle and an outlet vent are arranged for each test case. The tests are categorized into 3 modes, i.e. the high momentum, near-wall plume, and free plume tests. KAERI has participated in the SETH group since 1997 so that the multi-dimensional capability of the MARS code could be assessed and developed. Test 17, the high steam jet injection test, has already been simulated by MARS and shows promising results. Now the test 9 and 9bis cases, which use a low-speed horizontal steam jet flow, have been simulated and investigated.
Instabilities in large economies: aggregate volatility without idiosyncratic shocks
Bonart, Julius; Bouchaud, Jean-Philippe; Landier, Augustin; Thesmar, David
2014-10-01
We study a dynamical model of interconnected firms which allows for certain market imperfections and frictions, restricted here to be myopic price forecasts and slow adjustment of production. Whereas the standard rational equilibrium is still formally a stationary solution of the dynamics, we show that this equilibrium becomes linearly unstable in a whole region of parameter space. When agents attempt to reach the optimal production target too quickly, coordination breaks down and the dynamics becomes chaotic. In the unstable, ‘turbulent’ phase, the aggregate volatility of the total output remains substantial even when the amplitude of idiosyncratic shocks goes to zero or when the size of the economy becomes large. In other words, crises become endogenous. This suggests an interesting resolution of the ‘small shocks, large business cycles’ puzzle.
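The destabilizing effect of overly fast adjustment can be caricatured with a one-dimensional toy map (an illustration of the mechanism only, not the paper's firm-network model): production moves a fraction gamma of the way toward a myopic target each period.

```python
# Toy sketch of the mechanism: x_{t+1} = (1 - gamma) x_t + gamma f(x_t) with a
# myopic target f(x) = r x (1 - x). The functional form and the parameter
# values are illustrative assumptions. Small gamma damps the dynamics onto the
# equilibrium; gamma = 1 (full myopic jump) yields persistent volatility.

def simulate(gamma, steps=500, x=0.5, r=3.9):
    """Iterate the partial-adjustment map and return the trajectory."""
    path = []
    for _ in range(steps):
        x = (1.0 - gamma) * x + gamma * r * x * (1.0 - x)
        path.append(x)
    return path

def late_spread(path, tail=50):
    """Max-min range of the last `tail` iterates: ~0 means convergence."""
    tail_vals = path[-tail:]
    return max(tail_vals) - min(tail_vals)

slow = late_spread(simulate(gamma=0.3))   # damped adjustment: settles down
fast = late_spread(simulate(gamma=1.0))   # full myopic jump: chaotic output
```

With gamma = 1 the map reduces to a chaotic logistic map, so aggregate volatility persists even with no exogenous shocks at all, which is the qualitative point of the abstract.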
Nonperturbative volume reduction of large-N QCD with adjoint fermions
International Nuclear Information System (INIS)
Bringoltz, Barak; Sharpe, Stephen R.
2009-01-01
We use nonperturbative lattice techniques to study the volume-reduced 'Eguchi-Kawai' version of four-dimensional large-N QCD with a single adjoint Dirac fermion. We explore the phase diagram of this single-site theory in the space of quark mass and gauge coupling using Wilson fermions for a number of colors in the range 8≤N≤15. Our evidence suggests that these values of N are large enough to determine the nature of the phase diagram for N→∞. We identify the region in the parameter space where the (Z_N)^4 center symmetry is intact. According to previous theoretical work using the orbifolding paradigm, and assuming that translation invariance is not spontaneously broken in the infinite-volume theory, in this region volume reduction holds: the single-site and infinite-volume theories become equivalent when N→∞. We find strong evidence that this region includes both light and heavy quarks (with masses that are at the cutoff scale), and our results are consistent with this region extending toward the continuum limit. We also compare the action density and the eigenvalue density of the overlap Dirac operator in the fundamental representation with those obtained in large-N pure-gauge theory.
A flat array large telescope concept for use on the moon, earth, and in space
Woodgate, Bruce E.
1991-01-01
An astronomical optical telescope concept is described which can provide very large collecting areas, of order 1000 sq m. This is an order of magnitude larger than the new generation of telescopes now being designed and built. Multiple gimballed flat mirrors direct the beams from a celestial source into a single telescope of the same aperture as each flat mirror. Multiple images of the same source are formed at the telescope focal plane. A beam combiner collects these images and superimposes them into a single image, onto a detector or spectrograph aperture. This telescope could be used on the earth, the moon, or in space.
Parameter identifiability and redundancy: theoretical considerations.
Directory of Open Access Journals (Sweden)
Mark P Little
Full Text Available BACKGROUND: Models for complex biological systems may involve a large number of parameters. It may well be that some of these parameters cannot be derived from observed data via regression techniques. Such parameters are said to be unidentifiable, the remaining parameters being identifiable. Closely related to this idea is that of redundancy, that a set of parameters can be expressed in terms of some smaller set. Before data are analysed, it is critical to determine which model parameters are identifiable or redundant to avoid ill-defined and poorly convergent regression. METHODOLOGY/PRINCIPAL FINDINGS: In this paper we outline general considerations on parameter identifiability, and introduce the notions of weak local identifiability and gradient weak local identifiability. These are based on local properties of the likelihood, in particular the rank of the Hessian matrix. We relate these to the notions of parameter identifiability and redundancy previously introduced by Rothenberg (Econometrica 39 (1971) 577-591) and Catchpole and Morgan (Biometrika 84 (1997) 187-196). Within the widely used exponential family, parameter irredundancy, local identifiability, gradient weak local identifiability and weak local identifiability are shown to be largely equivalent. We consider applications to a recently developed class of cancer models of Little and Wright (Math Biosciences 183 (2003) 111-134) and Little et al. (J Theoret Biol 254 (2008) 229-238) that generalize a large number of other recently used quasi-biological cancer models. CONCLUSIONS/SIGNIFICANCE: We have shown that the previously developed concepts of parameter local identifiability and redundancy are closely related to the apparently weaker properties of weak local identifiability and gradient weak local identifiability; within the widely used exponential family these concepts largely coincide.
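The rank-based identifiability check discussed above can be sketched numerically: build the sensitivity (Jacobian) matrix of model outputs with respect to the parameters and estimate its rank; a deficient rank signals redundant, locally unidentifiable parameters. The model y = a·b·x below is a deliberately redundant toy assumption (only the product a·b is identifiable), not one of the cited cancer models.

```python
# Hedged numerical sketch of a local identifiability check: rank-deficiency of
# the sensitivity matrix at a point flags redundant parameters. The toy model
# y(x) = a * b * x is an assumption chosen so that the Jacobian has rank 1
# even though there are two parameters.

def model(params, x):
    a, b = params
    return a * b * x

def jacobian(params, xs, eps=1e-6):
    """Finite-difference sensitivities d y(x_i) / d theta_j."""
    rows = []
    for x in xs:
        row = []
        for j in range(len(params)):
            bumped = list(params)
            bumped[j] += eps
            row.append((model(bumped, x) - model(params, x)) / eps)
        rows.append(row)
    return rows

def rank(matrix, tol=1e-4):
    """Rank estimate via Gaussian elimination with partial pivoting; the loose
    tolerance absorbs finite-difference noise in the Jacobian entries."""
    m = [row[:] for row in matrix]
    r = 0
    for col in range(len(m[0])):
        pivot = max(range(r, len(m)), key=lambda i: abs(m[i][col]))
        if abs(m[pivot][col]) < tol:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and abs(m[i][col]) > tol:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
        if r == len(m):
            break
    return r

J = jacobian([2.0, 3.0], xs=[1.0, 2.0, 5.0])   # 3 observations, 2 parameters
```

Here rank(J) = 1 < 2, flagging the a, b pair as redundant; reparameterizing with c = a·b would restore full rank.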
Life Support Filtration System Trade Study for Deep Space Missions
Agui, Juan H.; Perry, Jay L.
2017-01-01
The National Aeronautics and Space Administration's (NASA) technical developments for highly reliable life support systems aim to maximize the viability of long duration deep space missions. Among the life support system functions, airborne particulate matter filtration is a significant driver of launch mass because of the large geometry required to provide adequate filtration performance and because of the number of replacement filters needed to sustain a mission. A trade analysis incorporating various launch, operational and maintenance parameters was conducted to investigate the trade-offs between the various particulate matter filtration configurations. In addition to typical launch parameters such as mass, volume and power, the amount of crew time dedicated to system maintenance becomes an increasingly crucial factor for long duration missions. The trade analysis evaluated these parameters for conventional particulate matter filtration technologies and a new multi-stage particulate matter filtration system under development by NASA's Glenn Research Center. The multi-stage filtration system features modular components that allow for physical configuration flexibility. Specifically, the filtration system components can be configured in distributed, centralized, and hybrid physical layouts that can result in considerable mass savings compared to conventional particulate matter filtration technologies. The trade analysis results are presented and implications for future transit and surface missions are discussed.
A Large Neutrino Detector Facility at the Spallation Neutron Source at Oak Ridge National Laboratory
International Nuclear Information System (INIS)
Efremenko, Y.V.
1999-01-01
The ORLaND (Oak Ridge Large Neutrino Detector) collaboration proposes to construct a large neutrino detector in an underground experimental hall adjacent to the first target station of the Spallation Neutron Source (SNS) at the Oak Ridge National Laboratory. The main mission of a large (2000 ton) Scintillation-Cherenkov detector is to measure ν̄_μ → ν̄_e neutrino oscillation parameters more accurately than they can be determined in other experiments, or to extend the covered parameter space significantly (down to sin²2θ ≤ 10⁻⁴). In addition to the neutrino oscillation measurements, ORLaND would be capable of making precise measurements of sin²θ_W, searching for the magnetic moment of the muon neutrino, and investigating the anomaly in the KARMEN time spectrum, which has been attributed to a new neutral particle. With the same facility an extensive program of measurements of neutrino-nucleus cross sections is also planned to support nuclear astrophysics.
The X-ray powder diffraction pattern and lattice parameters of perovskite
International Nuclear Information System (INIS)
Ball, C.J.; Napier, J.G.
1988-02-01
The interplanar spacings and intensities of all lines appearing in the X-ray powder diffraction pattern of perovskite have been calculated. Many of the lines occur in groups with a large amount of overlap. As an aid to identifying the lines which are observed, the intensity profiles of the major groups have been plotted. Those lines which are relatively free of overlap and can be identified unambiguously have been used to calculate the lattice parameters, with the results a = 5.4424 ± 0.0001 Å, b = 7.6417 ± 0.0002 Å, c = 5.3807 ± 0.0001 Å.
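The reported cell can be checked against observed interplanar spacings through the standard orthorhombic relation 1/d² = h²/a² + k²/b² + l²/c². A minimal sketch, with the (121) reflection chosen arbitrarily for illustration:

```python
import math

# Orthorhombic lattice parameters for perovskite reported above (in angstroms)
a, b, c = 5.4424, 7.6417, 5.3807

def d_spacing(h, k, l):
    """Interplanar spacing for the (hkl) reflection of an orthorhombic cell:
    1/d^2 = h^2/a^2 + k^2/b^2 + l^2/c^2."""
    return 1.0 / math.sqrt((h / a) ** 2 + (k / b) ** 2 + (l / c) ** 2)

# Example: the (121) reflection, in angstroms
print(round(d_spacing(1, 2, 1), 4))
```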
Fundamental parameters of QCD from non-perturbative methods for two and four flavors
International Nuclear Information System (INIS)
Marinkovic, Marina
2013-01-01
The non-perturbative formulation of Quantum Chromodynamics (QCD) on a four-dimensional Euclidean space-time lattice, together with finite-size techniques, enables us to perform the renormalization of the QCD parameters non-perturbatively. In order to obtain precise predictions from lattice QCD, one needs to include dynamical fermions in lattice QCD simulations. We consider QCD with two and four mass-degenerate flavors of O(a) improved Wilson quarks. In this thesis, we improve the existing determinations of the fundamental parameters of two and four flavor QCD. In the four flavor theory, we compute the precise value of the Λ parameter in units of the scale L_max defined in the hadronic regime. We also give a precise determination of the Schroedinger functional running coupling in the four flavor theory and compare it to perturbative results. The Monte Carlo simulations of lattice QCD within the Schroedinger functional framework were performed with a platform-independent program package, Schroedinger Funktional Mass Preconditioned Hybrid Monte Carlo (SF-MP-HMC), developed as a part of this project. Finally, we compute the strange quark mass and the Λ parameter in the two flavor theory, performing a well-controlled continuum limit and chiral extrapolation. To achieve this, we developed a universal program package for simulating two flavors of Wilson fermions, Mass Preconditioned Hybrid Monte Carlo (MP-HMC), which we used to run large-scale simulations at small lattice spacings and at pion masses close to the physical value.
Monotop phenomenology at the Large Hadron Collider
Agram, Jean-Laurent; Buttignol, Michael; Conte, Eric; Fuks, Benjamin
2014-01-01
We investigate new physics scenarios where systems comprised of a single top quark accompanied by missing transverse energy, dubbed monotops, can be produced at the LHC. Following a simplified model approach, we describe all possible monotop production modes via an effective theory and estimate the sensitivity of the LHC, assuming 20 fb$^{-1}$ of collisions at a center-of-mass energy of 8 TeV, to the observation of a monotop state. Considering both leptonic and hadronic top quark decays, we show that large fractions of the parameter space are reachable and that new physics particles with masses ranging up to 1.5 TeV can leave hints within the 2012 LHC dataset, assuming moderate new physics coupling strengths.
Library of Giant Planet Reflection Spectra for WFIRST and Future Space Telescopes
Smith, Adam J. R. W.; Fortney, Jonathan; Morley, Caroline; Batalha, Natasha E.; Lewis, Nikole K.
2018-01-01
Future large space telescopes will be able to directly image exoplanets in optical light. The optical light of a resolved planet is due to stellar flux reflected by Rayleigh scattering or cloud scattering, with absorption features imprinted by molecular bands in the planetary atmosphere. To aid in the design of such missions, and to better understand a wide range of giant planet atmospheres, we have built a library of model giant planet reflection spectra, for the purpose of determining effective methods of spectral analysis as well as for comparison with actual imaged objects. This library covers a wide range of parameters: objects are modeled at ten orbital distances between 0.5 AU and 5.0 AU, spanning planets too warm for water clouds out to true Jupiter analogs. The calculations include six metallicities between solar and 100× solar, a variety of cloud thickness parameters, and all possible phase angles.
Decoupling local mechanics from large-scale structure in modular metamaterials
Yang, Nan; Silverberg, Jesse L.
2017-04-01
A defining feature of mechanical metamaterials is that their properties are determined by the organization of internal structure instead of the raw fabrication materials. This shift of attention to engineering internal degrees of freedom has coaxed relatively simple materials into exhibiting a wide range of remarkable mechanical properties. For practical applications to be realized, however, this nascent understanding of metamaterial design must be translated into a capacity for engineering large-scale structures with prescribed mechanical functionality. Thus, the challenge is to systematically map desired functionality of large-scale structures backward into a design scheme while using finite parameter domains. Such “inverse design” is often complicated by the deep coupling between large-scale structure and local mechanical function, which limits the available design space. Here, we introduce a design strategy for constructing 1D, 2D, and 3D mechanical metamaterials inspired by modular origami and kirigami. Our approach is to assemble a number of modules into a voxelized large-scale structure, where the module’s design has a greater number of mechanical design parameters than the number of constraints imposed by bulk assembly. This inequality allows each voxel in the bulk structure to be uniquely assigned mechanical properties independent from its ability to connect and deform with its neighbors. In studying specific examples of large-scale metamaterial structures we show that a decoupling of global structure from local mechanical function allows for a variety of mechanically and topologically complex designs.
Nucleon-deuteron low energy parameters
International Nuclear Information System (INIS)
Zankel, H.; Mathelitsch, L.
1983-01-01
Momentum-space Faddeev equations are solved for nucleon-deuteron scattering and effective range parameters are calculated. A reverse trend is found in the two spin states, with ⁴a_nd > ⁴a_pd and ²a_pd > ²a_nd, which is in agreement with a configuration space calculation but in conflict with all existing experiments. The Coulomb contributions to the effective range are small in quartet but sizeable in doublet scattering. (Author)
Calculation of the level density parameter using semi-classical approach
International Nuclear Information System (INIS)
Canbula, B.; Babacan, H.
2011-01-01
The level density parameters (level density parameter a and energy shift δ) of the back-shifted Fermi gas model have been determined for 1136 nuclei for which a complete level scheme is available. The level density parameter is calculated by using the semi-classical single-particle level density, which can be obtained analytically for a spherical harmonic oscillator potential. This method also enables us to analyze the effect of the Coulomb potential on the level density parameter. The dependence of this parameter on energy has also been investigated. The other parameter, δ, is determined by fitting the experimental level scheme and the average resonance spacings for 289 nuclei. Only the level scheme is used in the optimization procedure for the remaining 847 nuclei. Level densities for some nuclei have been calculated by using these parameter values. The results obtained have been compared with the experimental level scheme and the resonance spacing data.
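For orientation, a common closed form of the back-shifted Fermi gas level density uses exactly the two fitted parameters named above. The sketch below assumes the standard textbook expression and an illustrative spin-cutoff factor; the parameter values are illustrative, not the fitted values of the paper:

```python
import math

def bsfg_level_density(E, a, delta, sigma=1.0):
    """Back-shifted Fermi gas total level density (per MeV), in the
    standard form
        rho(E) = exp(2*sqrt(a*U)) / (12*sqrt(2)*sigma*a**0.25*U**1.25),
    with effective excitation energy U = E - delta.  The spin-cutoff
    parameter sigma is an assumption here (set to 1 for illustration)."""
    U = E - delta
    if U <= 0:
        raise ValueError("excitation energy must exceed the energy shift")
    return math.exp(2.0 * math.sqrt(a * U)) / (
        12.0 * math.sqrt(2.0) * sigma * a ** 0.25 * U ** 1.25)

# Illustrative values only (not fitted): a = 12 MeV^-1, delta = 0.5 MeV
print(bsfg_level_density(8.0, a=12.0, delta=0.5))
```

The exponential growth of rho with excitation energy is what makes the fitted a and δ so sensitive to the completeness of the experimental level scheme.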
Fast Kalman-like filtering for large-dimensional linear and Gaussian state-space models
Ait-El-Fquih, Boujemaa; Hoteit, Ibrahim
2015-01-01
This paper considers the filtering problem for linear and Gaussian state-space models with large dimensions, a setup in which the optimal Kalman Filter (KF) might not be applicable owing to the excessive cost of manipulating huge covariance matrices. Among the most popular alternatives that enable cheaper and reasonable computation is the Ensemble KF (EnKF), a Monte Carlo-based approximation. In this paper, we consider a class of a posteriori distributions with diagonal covariance matrices and propose fast approximate deterministic-based algorithms based on the Variational Bayesian (VB) approach. More specifically, we derive two iterative KF-like algorithms that differ in the way they operate between two successive filtering estimates; one involves a smoothing estimate and the other involves a prediction estimate. Despite its iterative nature, the prediction-based algorithm provides a computational cost that is, on the one hand, independent of the number of iterations in the limit of very large state dimensions, and on the other hand, always much smaller than the cost of the EnKF. The cost of the smoothing-based algorithm depends on the number of iterations that may, in some situations, make this algorithm slower than the EnKF. The performances of the proposed filters are studied and compared to those of the KF and EnKF through a numerical example.
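The computational advantage of diagonal covariances can be conveyed with a deliberately simplified element-wise filter. This is a sketch of the diagonal-covariance idea only, not the VB-based algorithms derived in the paper: with diagonal matrices stored as vectors, one filtering step costs O(n) instead of the O(n³) of manipulating full covariance matrices.

```python
import numpy as np

def diagonal_kf_step(m, v, y, a, q, r):
    """One filtering step for x_k = a*x_{k-1} + w_k, y_k = x_k + e_k,
    applied element-wise so all covariances stay diagonal (stored as
    vectors).  m, v are the previous posterior mean/variance vectors;
    a, q, r are scalar transition, process-noise, and observation-noise
    parameters (an illustrative simplification)."""
    # Prediction step
    m_pred = a * m
    v_pred = a * a * v + q
    # Update step with direct observation y, element-wise Kalman gain
    gain = v_pred / (v_pred + r)
    m_new = m_pred + gain * (y - m_pred)
    v_new = (1.0 - gain) * v_pred
    return m_new, v_new

n = 1000  # "large" state dimension; no n-by-n matrix ever appears
rng = np.random.default_rng(0)
m, v = np.zeros(n), np.ones(n)
for _ in range(5):
    y = rng.standard_normal(n)
    m, v = diagonal_kf_step(m, v, y, a=0.9, q=0.1, r=0.5)
print(float(v.max()))
```

The paper's contribution is to justify and refine such diagonal approximations rigorously via Variational Bayes, with iterations involving smoothing or prediction estimates; the sketch above only illustrates why the per-step cost can stay independent of any ensemble size.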
Index hypergeometric transform and imitation of analysis of Berezin kernels on hyperbolic spaces
International Nuclear Information System (INIS)
Neretin, Yu A
2001-01-01
The index hypergeometric transform (also called the Olevskii transform or the Jacobi transform) generalizes the spherical transform in L² on rank-1 symmetric spaces (that is, real, complex, and quaternionic Lobachevskii spaces). The aim of this paper is to obtain properties of the index hypergeometric transform imitating the analysis of Berezin kernels on rank-1 symmetric spaces. The problem of the explicit construction of a unitary operator identifying L² and a Berezin space is also discussed. This problem reduces to an integral expression (the Λ-function), which apparently cannot be expressed in a finite form in terms of standard special functions. (Only for certain special values of the parameter can this expression be reduced to the so-called Volterra-type special functions.) Properties of this expression are investigated. For some series of symmetric spaces of large rank the above operator of unitary equivalence can be expressed in terms of the determinant of a matrix of Λ-functions.
Enhanced 2D-DOA Estimation for Large Spacing Three-Parallel Uniform Linear Arrays
Directory of Open Access Journals (Sweden)
Dong Zhang
2018-01-01
Full Text Available An enhanced two-dimensional direction of arrival (2D-DOA) estimation algorithm for large spacing three-parallel uniform linear arrays (ULAs) is proposed in this paper. Firstly, we use the propagator method (PM) to get a highly accurate but ambiguous estimate of the directional cosine. Then, we use the relationship between the directional cosines to eliminate the ambiguity. This algorithm not only makes use of the elements of the three-parallel ULAs but also utilizes the connection between the directional cosines to improve the estimation accuracy. Besides, it has satisfactory estimation performance when the elevation angle is between 70° and 90°, and it can automatically pair the estimated azimuth and elevation angles. Furthermore, it has low complexity, without applying any eigenvalue decomposition (EVD) or singular value decomposition (SVD) to the covariance matrix. Simulation results demonstrate the effectiveness of our proposed algorithm.
Furfaro, R.; Linares, R.; Gaylor, D.; Jah, M.; Walls, R.
2016-09-01
In this paper, we present an end-to-end approach that employs machine learning techniques and Ontology-based Bayesian Networks (BN) to characterize the behavior of resident space objects. State-of-the-art machine learning architectures (e.g. Extreme Learning Machines, Convolutional Deep Networks) are trained on physical models to learn the Resident Space Object (RSO) features in the vectorized energy and momentum states and parameters. The mapping from measurements to vectorized energy and momentum states and parameters enables behavior characterization via clustering in the feature space and subsequent RSO classification. Additionally, Space Object Behavioral Ontologies (SOBO) are employed to define and capture the domain knowledge-base (KB) and BNs are constructed from the SOBO in a semi-automatic fashion to execute probabilistic reasoning over conclusions drawn from trained classifiers and/or directly from processed data. Such an approach enables integrating machine learning classifiers and probabilistic reasoning to support higher-level decision making for space domain awareness applications. The innovation here is to use these methods (which have enjoyed great success in other domains) in synergy so that it enables a "from data to discovery" paradigm by facilitating the linkage and fusion of large and disparate sources of information via a Big Data Science and Analytics framework.
Directory of Open Access Journals (Sweden)
Katsinis Constantine
2006-10-01
Full Text Available Abstract Background: Tumor classification is inexact and largely dependent on the qualitative pathological examination of the images of the tumor tissue slides. In this study, our aim was to develop an automated computational method to classify Hematoxylin and Eosin (H&E) stained tissue sections based on cancer tissue texture features. Methods: Image processing of histology slide images was used to detect and identify adipose tissue, extracellular matrix, morphologically distinct cell nuclei types, and the tubular architecture. The texture parameters derived from image analysis were then applied to classify images in a supervised classification scheme using the histologic grade of a testing set as guidance. Results: The histologic grade assigned by pathologists to invasive breast carcinoma images strongly correlated with both the presence and extent of cell nuclei with dispersed chromatin and the architecture, specifically the extent of presence of tubular cross sections. The two parameters that differentiated tumor grade found in this study were (1) the number density of cell nuclei with dispersed chromatin and (2) the number density of tubular cross sections identified through image processing as white blobs that were surrounded by a continuous string of cell nuclei. Classification based on subdivisions of a whole slide image containing a high concentration of cancer cell nuclei consistently agreed with the grade classification of the entire slide. Conclusion: The automated image analysis and classification presented in this study demonstrate the feasibility of developing clinically relevant classification of histology images based on micro-texture. This method provides pathologists an invaluable quantitative tool for evaluating the components of the Nottingham system for breast tumor grading and avoids intra-observer variability, thus increasing the consistency of the decision-making process.
Predicting the Consequences of MMOD Penetrations on the International Space Station
Hyde, James; Christiansen, E.; Lear, D.; Evans
2018-01-01
The threat from micrometeoroid and orbital debris (MMOD) impacts on space vehicles is often quantified in terms of the probability of no penetration (PNP). However, for large spacecraft, especially those with multiple compartments, a penetration may have a number of possible outcomes. The extent of the damage (diameter of hole, crack length or penetration depth), the location of the damage relative to critical equipment or crew, crew response, and even the time of day of the penetration are among the many factors that can affect the outcome. For the International Space Station (ISS), a Monte-Carlo style software code called Manned Spacecraft Crew Survivability (MSCSurv) is used to predict the probability of several outcomes of an MMOD penetration-broadly classified as loss of crew (LOC), crew evacuation (Evac), loss of escape vehicle (LEV), and nominal end of mission (NEOM). By generating large numbers of MMOD impacts (typically in the billions) and tracking the consequences, MSCSurv allows for the inclusion of a large number of parameters and models as well as enabling the consideration of uncertainties in the models and parameters. MSCSurv builds upon the results from NASA's Bumper software (which provides the probability of penetration and critical input data to MSCSurv) to allow analysts to estimate the probability of LOC, Evac, LEV, and NEOM. This paper briefly describes the overall methodology used by NASA to quantify LOC, Evac, LEV, and NEOM with particular emphasis on describing in broad terms how MSCSurv works and its capabilities and most significant models.
Analyzing large data sets from XGC1 magnetic fusion simulations using apache spark
Energy Technology Data Exchange (ETDEWEB)
Churchill, R. Michael [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)
2016-11-21
Apache Spark is explored as a tool for analyzing large data sets from the magnetic fusion simulation code XGC1. Implementation details of Apache Spark on the NERSC Edison supercomputer are discussed, including binary file reading and parameter setup. Here, an unsupervised machine learning algorithm, k-means clustering, is applied to XGC1 particle distribution function data, showing that highly turbulent spatial regions do not have common coherent structures, but rather broad, ring-like structures in velocity space.
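For readers unfamiliar with the clustering step, a plain NumPy version of k-means (Lloyd's algorithm) conveys the idea; the paper runs the equivalent computation distributed over Apache Spark rather than on a single node. The blob data below is a synthetic stand-in for velocity-space structure, not XGC1 output:

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain NumPy k-means (Lloyd's algorithm) with deterministic
    farthest-point initialization."""
    # Initialize: first point, then repeatedly the point farthest from
    # all chosen centers
    centers = [X[0]]
    for _ in range(1, k):
        dist = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[dist.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

# Two well-separated synthetic blobs as a stand-in for distinct structures
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)), rng.normal(3.0, 0.1, (50, 2))])
centers, labels = kmeans(X, k=2)
print(centers.round(1))
```

In the Spark setting the assignment and mean-update steps become map and reduce operations over partitioned data, which is what makes the algorithm attractive for simulation outputs too large for one machine's memory.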
GESE: A Small UV Space Telescope to Conduct a Large Spectroscopic Survey of z ≈ 1 Galaxies
Heap, Sara R.; Gong, Qian; Hull, Tony; Kruk, Jeffrey; Purves, Lloyd
2013-01-01
One of the key goals of NASA's astrophysics program is to answer the question: How did galaxies evolve into the spirals and elliptical galaxies that we see today? We describe a space mission concept called Galaxy Evolution Spectroscopic Explorer (GESE) to address this question by making a large spectroscopic survey of galaxies at a redshift z ≈ 1 (look-back time of approximately 8 billion years). GESE is a 1.5-meter space telescope with an ultraviolet (UV) multi-object slit spectrograph that can obtain spectra of hundreds of galaxies per exposure. The spectrograph covers the spectral range 0.2-0.4 micrometers at a spectral resolving power R ≈ 500. This observed spectral range corresponds to 0.1-0.2 micrometers as emitted by a galaxy at a redshift z = 1. The mission concept takes advantage of two new technological advances: (1) light-weighted, wide-field telescope mirrors, and (2) the Next-Generation MicroShutter Array (NG-MSA) to be used as a slit generator in the multi-object slit spectrograph.
Benson, Robert F.; Fainberg, Joseph; Osherovich, Vladimir A.; Truhlik, Vladimir; Wang, Yongli; Bilitza, Dieter; Fung, Shing F.
2015-01-01
Large magnetic-storm induced changes have been detected in high-latitude topside vertical electron-density profiles Ne(h). The investigation was based on the large database of topside Ne(h) profiles and digital topside ionograms from the International Satellites for Ionospheric Studies (ISIS) program available from the NASA Space Physics Data Facility (SPDF) at http://spdf.gsfc.nasa.gov/isis/isis-status.html. This large database enabled Ne(h) profiles to be obtained when an ISIS satellite passed through nearly the same region of space before, during, and after a major magnetic storm. A major goal was to relate the magnetic-storm induced high-latitude Ne(h) profile changes to solar-wind parameters. Thus an additional data constraint was to consider only storms where solar-wind data were available from the NASA/SPDF OMNIWeb database. Ten large magnetic storms (with Dst less than -100 nT) were identified that satisfied both the Ne(h) profile and the solar-wind data constraints. During five of these storms topside ionospheric Ne(h) profiles were available in the high-latitude northern hemisphere and during the other five storms similar ionospheric data were available in the southern hemisphere. Large Ne(h) changes were observed during each one of these storms. Our concentration in this paper is on the northern hemisphere. The data coverage was best for the northern-hemisphere winter. Here Ne(h) profile enhancements were always observed when the magnetic local time (MLT) was between 00 and 03 and Ne(h) profile depletions were always observed between 08 and 10 MLT. The observed Ne(h) deviations were compared with solar-wind parameters, with appropriate time shifts, for four storms.
A Large Underestimate of Formic Acid from Tropical Fires: Constraints from Space-Borne Measurements.
Chaliyakunnel, S; Millet, D B; Wells, K C; Cady-Pereira, K E; Shephard, M W
2016-06-07
Formic acid (HCOOH) is one of the most abundant carboxylic acids and a dominant source of atmospheric acidity. Recent work indicates a major gap in the HCOOH budget, with atmospheric concentrations much larger than expected from known sources. Here, we employ recent space-based observations from the Tropospheric Emission Spectrometer with the GEOS-Chem atmospheric model to better quantify the HCOOH source from biomass burning, and assess whether fire emissions can help close the large budget gap for this species. The space-based data reveal a severe model HCOOH underestimate most prominent over tropical burning regions, suggesting a major missing source of organic acids from fires. We develop an approach for inferring the fractional fire contribution to ambient HCOOH and find, based on measurements over Africa, that pyrogenic HCOOH:CO enhancement ratios are much higher than expected from direct emissions alone, revealing substantial secondary organic acid production in fire plumes. Current models strongly underestimate (by 10 ± 5 times) the total primary and secondary HCOOH source from African fires. If a 10-fold bias were to extend to fires in other regions, biomass burning could produce 14 Tg/a of HCOOH in the tropics or 16 Tg/a worldwide. However, even such an increase would only represent 15-20% of the total required HCOOH source, implying the existence of other larger missing sources.
Space-Time Convolutional Codes over Finite Fields and Rings for Systems with Large Diversity Order
Directory of Open Access Journals (Sweden)
B. F. Uchôa-Filho
2008-06-01
Full Text Available We propose a convolutional encoder over the finite ring of integers modulo p^k, ℤ_{p^k}, where p is a prime number and k is any positive integer, to generate a space-time convolutional code (STCC). Under this structure, we prove three properties related to the generator matrix of the convolutional code that can be used to simplify the code search procedure for STCCs over ℤ_{p^k}. Some STCCs of large diversity order (≥ 4) designed under the trace criterion for n = 2, 3, and 4 transmit antennas are presented for various PSK signal constellations.
Welter, David E.; White, Jeremy T.; Hunt, Randall J.; Doherty, John E.
2015-09-18
The PEST++ Version 1 object-oriented parameter estimation code is here extended to Version 3 to incorporate additional algorithms and tools to further improve support for large and complex environmental modeling problems. PEST++ Version 3 includes the Gauss-Marquardt-Levenberg (GML) algorithm for nonlinear parameter estimation, Tikhonov regularization, integrated linear-based uncertainty quantification, options of integrated TCP/IP based parallel run management or external independent run management by use of a Version 2 update of the GENIE Version 1 software code, and utilities for global sensitivity analyses. The Version 3 code design is consistent with PEST++ Version 1 and continues to be designed to lower the barriers of entry for users as well as developers while providing efficient and optimized algorithms capable of accommodating large, highly parameterized inverse problems. As such, this effort continues the original focus of (1) implementing the most popular and powerful features of the PEST software suite in a fashion that is easy for novice or experienced modelers to use and (2) developing a software framework that is easy to extend.
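The GML update named above can be sketched in a few lines. The following toy least-squares fit is illustrative only (the function names and test problem are assumptions, not PEST++ code); it shows one damped Gauss-Newton step with optional Tikhonov regularization toward a prior parameter set:

```python
import numpy as np

def gml_step(residual, jacobian, theta, lam, tikhonov=0.0, theta_prior=None):
    """One Gauss-Marquardt-Levenberg step with optional Tikhonov
    regularization toward a prior (a minimal sketch of the algorithm
    family PEST++ implements, not its actual code)."""
    r = residual(theta)
    J = jacobian(theta)
    if theta_prior is None:
        theta_prior = np.zeros_like(theta)
    # Damped, regularized normal equations:
    # (J^T J + (lam + tik) I) dtheta = -(J^T r + tik (theta - prior))
    A = J.T @ J + (lam + tikhonov) * np.eye(len(theta))
    g = J.T @ r + tikhonov * (theta - theta_prior)
    return theta - np.linalg.solve(A, g)

# Toy inverse problem: fit y = theta0 * exp(theta1 * t) to synthetic data
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.0 * t)
res = lambda th: th[0] * np.exp(th[1] * t) - y
jac = lambda th: np.column_stack([np.exp(th[1] * t),
                                  th[0] * t * np.exp(th[1] * t)])
theta = np.array([1.0, 0.0])
for _ in range(50):
    theta = gml_step(res, jac, theta, lam=1e-3)
print(theta.round(3))
```

In PEST++ itself the Jacobian comes from model runs (hence the parallel run management), and the Tikhonov term is what keeps highly parameterized inverse problems well posed.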
Directory of Open Access Journals (Sweden)
Tang Xiaofeng
2014-01-01
Full Text Available The paper presents three time warning distances for the safe driving of a large-scale system of multiple groups of vehicles in a highway tunnel environment, based on a distributed model predictive control approach. Generally speaking, the system includes two parts. First, the vehicles are divided into multiple groups, and the distributed model predictive control approach is used to calculate the information framework of each group. Each group's optimization considers both its local performance and the characteristics of neighboring subgroups, which ensures global optimization performance. Second, the three time warning distances are studied based on the basic principles used for highway intelligent space (HIS), and the information framework concept is proposed for the multiple groups of vehicles. A mathematical model is built to support chain collision avoidance between vehicles. The results demonstrate that the proposed highway intelligent space method can effectively ensure the driving safety of multiple groups of vehicles in fog, rain, or snow.
Climate and energy use in glazed spaces
Energy Technology Data Exchange (ETDEWEB)
Wall, M.
1996-11-01
One objective of the thesis has been to elucidate the relationship between building design and the climate, thermal comfort and energy requirements in different types of glazed spaces. Another objective has been to study the effect of the glazed spaces on energy requirements in adjacent buildings. A further objective has been to develop a simple calculation method for the assessment of temperatures and energy requirements in glazed spaces. The research work has mainly comprised case studies of existing buildings with glazed spaces and energy balance calculations using both the developed steady-state method and a dynamic building energy simulation program. Parameters such as the geometry of the building, type of glazing, orientation, thermal inertia, airtightness, ventilation system and sunshades have been studied. These parameters are of different importance for each specific type of glazed space, and the significance of each parameter varies between the different types. The developed calculation method estimates the minimum and mean temperature in glazed spaces and the energy requirements for heating and cooling. The effect of the glazed space on the energy requirement of the surrounding buildings can also be estimated. It is intended that the method should be applied during the preliminary design stage so that the effect which the design of the building will have on climate and energy requirements may be determined. The method may provide insight into how glazed spaces behave with regard to climate and energy. 99 refs
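The kind of steady-state balance such a simple method evaluates can be sketched as a single heat balance on the glazed-space air node. All coefficients below are illustrative assumptions, not values from the thesis:

```python
def glazed_space_temperature(T_out, solar_gain, UA_glazing, UA_adjacent, T_adjacent):
    """Steady-state air temperature of an unheated glazed space from the
    balance  Q_solar = UA_glazing*(T - T_out) + UA_adjacent*(T - T_adjacent).
    This is a deliberately simplified, illustrative balance; solar_gain in W,
    UA terms in W/K, temperatures in degrees C."""
    return (solar_gain + UA_glazing * T_out + UA_adjacent * T_adjacent) / (
        UA_glazing + UA_adjacent)

# Example: a sunny winter day, 0 C outside, 20 C in the adjacent building
print(round(glazed_space_temperature(0.0, 800.0, 50.0, 30.0, 20.0), 1))  # 17.5
```

Even this one-line balance shows the trade-off discussed above: a better-insulated glazed envelope (smaller UA_glazing) raises the space temperature and reduces the heat drawn from the adjacent building.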
Full parameter scan of the Zee model: exploring Higgs lepton flavor violation
Energy Technology Data Exchange (ETDEWEB)
Herrero-García, Juan [ARC Center of Excellence for Particle Physics at the Terascale, University of Adelaide,Adelaide, SA 5005 (Australia); Department of Physics, School of Engineering Sciences, KTH Royal Institute of Technology,AlbaNova University Center, Roslagstullsbacken 21, 106 91 Stockholm (Sweden); Ohlsson, Tommy; Riad, Stella; Wirén, Jens [Department of Physics, School of Engineering Sciences, KTH Royal Institute of Technology,AlbaNova University Center, Roslagstullsbacken 21, 106 91 Stockholm (Sweden)
2017-04-21
We study the general Zee model, which includes an extra Higgs scalar doublet and a new singly-charged scalar singlet. Neutrino masses are generated at one-loop level, and in order to describe leptonic mixing, both the Standard Model and the extra Higgs scalar doublets need to couple to leptons (in a type-III two-Higgs-doublet model), which necessarily generates large lepton flavor violating signals, also in Higgs decays. Imposing all relevant phenomenological constraints and performing a full numerical scan of the parameter space, we find that both normal and inverted neutrino mass orderings can be fitted, although the latter is disfavored with respect to the former. In fact, inverted ordering can only be accommodated if θ_23 turns out to be in the first octant. A branching ratio for h→τμ of up to 10^(-2) is allowed, but it could be as low as 10^(-6). In addition, if future expected sensitivities of τ→μγ are achieved, normal ordering can be almost completely tested. Also, μe conversion is expected to probe large parts of the parameter space, completely excluding inverted ordering if no signal is observed. Furthermore, non-standard neutrino interactions are found to be smaller than 10^(-6), which is well below future experimental sensitivity. Finally, the results of our scan indicate that the masses of the additional scalars have to be below 2.5 TeV, and typically they are lower than that and therefore within the reach of the LHC and future colliders.
Interrelated experiments in laboratory and space plasmas
International Nuclear Information System (INIS)
Koepke, M. E.
2005-01-01
Many advances in understanding space plasma phenomena have been linked to insight derived from theoretical modelling and/or laboratory experiments. Here are discussed advances for which laboratory experiments played an important role. How the interpretation of the space plasma data was influenced by one or more laboratory experiments is described. The space motivation of laboratory investigations and the scaling of laboratory plasma parameters to space plasma conditions are discussed. Examples demonstrating how laboratory experiments develop physical insight, benchmark theoretical models, discover unexpected behaviour, establish observational signatures, and pioneer diagnostic methods for the space community are presented. The various device configurations found in space-related laboratory investigations are outlined. A primary objective of this review is to articulate the overlapping scientific issues that are addressable in space and laboratory experiments. A secondary objective is to convey the wide range of laboratory and space plasma experiments involved in this interdisciplinary alliance. The interrelationship between plasma experiments in the laboratory and in space has a long history, with numerous demonstrations of the benefits afforded the space community by laboratory results. An experiment's suitability and limitations for investigating space processes can be quantitatively established using dimensionless parameters. Even with a partial match of these parameters, aspects of waves, instabilities, nonlinearities, particle transport, reconnection, and hydrodynamics are addressable in a way useful to observers and modelers of space phenomena. Because diagnostic access to space plasmas, laboratory experimentalists' awareness of space phenomena, and efforts by theorists and funding agencies to help scientists bridge the gap between the space and laboratory communities are all increasing, the range of laboratory and space plasma experiments with overlapping scientific issues continues to grow.
Space Shuttle and Space Station Radio Frequency (RF) Exposure Analysis
Hwu, Shian U.; Loh, Yin-Chung; Sham, Catherine C.; Kroll, Quin D.
2005-01-01
This paper outlines the modeling techniques and important parameters to define a rigorous but practical procedure that can verify the compliance of RF exposure with the NASA standards for astronauts and electronic equipment. The electromagnetic modeling techniques are applied to analyze RF exposure in Space Shuttle and Space Station environments with reasonable computing time and resources. The modeling techniques are capable of taking into account the field interactions with Space Shuttle and Space Station structures. The obtained results illustrate the multipath effects due to the presence of the space vehicle structures; it is necessary to include the field interactions with the space vehicle in the analysis for an accurate assessment of the RF exposure. Based on the obtained results, RF keep-out zones are identified for appropriate operational scenarios, flight rules and necessary RF transmitter constraints to ensure a safe operating environment and mission success.
2014-10-01
Fiscal constraints and growing threats to space systems have led DOD to consider alternatives for acquiring space-based capabilities. According to Air Force Space Command, U.S. space systems face intentional and unintentional threats, which have increased. Smaller, less complex satellites may reduce life cycle costs, and demand for more satellites may stimulate new entrants and competition to lower acquisition costs.
Systematic Biases in Parameter Estimation of Binary Black-Hole Mergers
Littenberg, Tyson B.; Baker, John G.; Buonanno, Alessandra; Kelly, Bernard J.
2012-01-01
Parameter estimation of binary-black-hole merger events in gravitational-wave data relies on matched filtering techniques, which, in turn, depend on accurate model waveforms. Here we characterize the systematic biases introduced in measuring astrophysical parameters of binary black holes by applying the currently most accurate effective-one-body templates to simulated data containing non-spinning numerical-relativity waveforms. For advanced ground-based detectors, we find that the systematic biases are well within the statistical error for realistic signal-to-noise ratios (SNR). These biases grow to be comparable to the statistical errors at high signal-to-noise ratios for ground-based instruments (SNR approximately 50) but never dominate the error budget. At the much larger signal-to-noise ratios expected for space-based detectors, these biases will become large compared to the statistical errors but are small enough (at most a few percent in the black-hole masses) that we expect they should not affect broad astrophysical conclusions that may be drawn from the data.
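Matched filtering, the technique this parameter estimation rests on, correlates the data against a model waveform (template). A minimal sketch follows; the damped-sinusoid "chirp" and all numbers are illustrative stand-ins, not relativity waveforms:

```python
import numpy as np

rng = np.random.default_rng(0)

def matched_filter_snr(data, template, noise_sigma=1.0):
    """SNR of a known template filtered against data in white Gaussian noise."""
    norm = np.sqrt(np.dot(template, template)) * noise_sigma
    return float(np.dot(data, template) / norm)

# toy "chirp": a damped sinusoid whose frequency sweeps upward
t = np.linspace(0.0, 1.0, 1000)
template = np.exp(-3.0 * t) * np.sin(2.0 * np.pi * 30.0 * t**2)
data = 5.0 * template + rng.normal(0.0, 1.0, t.size)  # signal buried in noise

snr = matched_filter_snr(data, template)
print(round(snr, 1))  # well above the unit noise level
```

A systematic bias of the kind the paper measures would appear in such a setup as a shift in the best-fit template parameters whenever the template family differs slightly from the true waveform buried in the data.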
Transport regimes spanning magnetization-coupling phase space
Baalrud, Scott D.; Daligault, Jérôme
2017-10-01
The manner in which transport properties vary over the entire parameter-space of coupling and magnetization strength is explored. Four regimes are identified based on the relative size of the gyroradius compared to other fundamental length scales: the collision mean free path, Debye length, distance of closest approach, and interparticle spacing. Molecular dynamics simulations of self-diffusion and temperature anisotropy relaxation spanning the parameter space are found to agree well with the predicted boundaries. Comparison with existing theories reveals regimes where they succeed, where they fail, and where no theory has yet been developed.
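The fundamental length scales named above have standard closed forms. The sketch below compares two of them for one illustrative parameter set; the density, temperature and field values are assumptions, not taken from the paper:

```python
import math

# SI constants
EPS0 = 8.8541878128e-12   # vacuum permittivity [F/m]
E = 1.602176634e-19       # elementary charge [C]
KB = 1.380649e-23         # Boltzmann constant [J/K]
ME = 9.1093837015e-31     # electron mass [kg]

def debye_length(n_e, t_e):
    """Electron Debye length [m] for density n_e [m^-3], temperature t_e [K]."""
    return math.sqrt(EPS0 * KB * t_e / (n_e * E * E))

def electron_gyroradius(t_e, b):
    """Thermal electron gyroradius [m] in a magnetic field b [T]."""
    v_th = math.sqrt(KB * t_e / ME)
    return ME * v_th / (E * b)

# a rough ionospheric-like case (illustrative numbers only)
n_e, t_e, b = 1e11, 1500.0, 5e-5
print(debye_length(n_e, t_e))        # ~ millimetres
print(electron_gyroradius(t_e, b))   # ~ centimetres
```

The ordering of the gyroradius relative to scales like these is exactly what delimits the four regimes the paper identifies.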
Radiation risk in space exploration
International Nuclear Information System (INIS)
Schimmerling, W.; Wilson, J.W.; Cucinotta, F.; Kim, M.H.Y.
1997-01-01
Humans living and working in space are exposed to energetic charged particle radiation due to galactic cosmic rays and solar particle emissions. In order to keep the risk due to radiation exposure of astronauts below acceptable levels, the physical interaction of these particles with space structures and the biological consequences for crew members need to be understood. Such knowledge is, to a large extent, very sparse when it is available at all. Radiation limits established for space radiation protection purposes are based on extrapolation of risk from Japanese survivor data, and have been found to have large uncertainties. In space, attempting to account for large uncertainties by worst-case design results in excessive costs, so accurate risk prediction is essential. Such prediction is best developed at ground-based laboratories, using particle accelerator beams to simulate individual components of space radiation. Development of mechanistic models of the action of space radiation is expected to lead to the required improvements in the accuracy of predictions, to optimization of space structures for radiation protection and, eventually, to the development of biological methods of prevention and intervention against radiation injury. (author)
Directory of Open Access Journals (Sweden)
Mehmet PENPECİOĞLU
2013-04-01
Full Text Available With the rise of neo-liberalism, large-scale development projects (LDPs) have become a powerful mechanism of urban policy. Creating spaces of neo-liberal urbanization such as central business districts, tourism centers, gated residences and shopping malls, LDPs play a role not only in the reproduction of capital accumulation relations but also in the shift of urban political priorities towards the construction of neo-liberal hegemony. The construction of neo-liberal hegemony, and the role played by LDPs in this process, cannot be investigated solely through the analysis of capital accumulation. For such an investigation, the roles of state and civil society actors in LDPs and their collaborative and conflictual relationships should be researched, and their functions in hegemony should be revealed. In the case of Izmir’s two LDPs, namely the New City Center (NCC) and Inciraltı Tourism Center (ITC) projects, this study analyzes the relationship between the production of space and neo-liberal hegemony. In the NCC project, local governments, investors, local capital organizations and professional chambers collaborated and disseminated hegemonic discourse, which provided social support for the project. Through these relationships and discourses, the NCC project has become a hegemonic project for producing space and constructed neo-liberal hegemony over urban political priorities. In contrast to the NCC project, the ITC project saw no collaboration between state and organized civil society actors. The social opposition against the ITC project, initiated by professional chambers, has brought legal action against the ITC development plans in order to prevent their implementation. As a result, the ITC project did not acquire the consent of organized social groups and failed to become a hegemonic project for producing space.
Winands, G.J.J.; Liu, Zhen; Pemen, A.J.M.; Heesch, van E.J.M.; Yan, K.; Veldhuizen, van E.M.
2006-01-01
In this paper a large-scale pulsed corona system is described in which pulse parameters such as pulse rise-time, peak voltage, pulse width and energy per pulse can be varied. The chemical efficiency of the system is determined by measuring ozone production. The temporal and spatial development of
Analysis of Higher Order Modes in Large Superconducting Radio Frequency Accelerating Structures
Galek, Tomasz; Brackebusch, Korinna; Van Rienen, Ursula
2015-01-01
Superconducting radio frequency cavities used for accelerating charged particle beams are commonly employed in accelerator facilities around the world. The design and optimization of modern superconducting RF cavities requires intensive numerical simulations, and a vast number of operational parameters must be calculated to ensure appropriate functioning of the accelerating structures. In this study, we primarily focus on the estimation and behavior of higher order modes in superconducting RF cavities connected in chains. To calculate large RF models, the state-space concatenation scheme, an efficient hybrid method, is employed.
A Very Large Area Network (VLAN) knowledge-base applied to space communication problems
Zander, Carol S.
1988-01-01
This paper first describes a hierarchical model for very large area networks (VLAN). Space communication problems whose solution could profit by the model are discussed, and then an enhanced version of this model incorporating the knowledge needed for the missile detection-destruction problem is presented. A satellite network or VLAN is a network which includes at least one satellite. Due to the complexity of such networks, a compromise between fully centralized and fully distributed network management has been adopted. Network nodes are assigned to a physically localized group, called a partition. Partitions consist of groups of cell nodes, with one cell node acting as the organizer or master, called the Group Master (GM). Coordinating the group masters is a Partition Master (PM). Knowledge is also distributed hierarchically, existing in at least two nodes; each satellite node has a back-up earth node. Knowledge must be distributed in such a way as to minimize information loss when a node fails. Thus the model is hierarchical both physically and informationally.
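The partition/group hierarchy and the knowledge-replication rule described above can be sketched as data structures. The class and node names here are this sketch's own inventions, not identifiers from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    is_satellite: bool = False
    backup: "Node | None" = None           # each satellite node has an earth backup
    knowledge: dict = field(default_factory=dict)

    def store(self, key, value):
        """Replicate knowledge so a single node failure loses no information."""
        self.knowledge[key] = value
        if self.backup is not None:
            self.backup.knowledge[key] = value

@dataclass
class Group:
    master: Node                           # Group Master (GM) organizing its cells
    cells: list = field(default_factory=list)

@dataclass
class Partition:
    master: Node                           # Partition Master (PM) coordinating GMs
    groups: list = field(default_factory=list)

earth = Node("ground-station-1")
sat = Node("sat-1", is_satellite=True, backup=earth)
partition = Partition(master=earth, groups=[Group(master=sat, cells=[sat])])
sat.store("track", "object-42")
print(earth.knowledge["track"])  # → object-42
```

Storing every item in at least two nodes is what lets the hierarchy survive the loss of any single satellite, as the model requires.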
Wilkie, William Keats; Williams, R. Brett; Agnes, Gregory S.; Wilcox, Brian H.
2007-01-01
This paper presents a feasibility study of robotically constructing a very large aperture optical space telescope on-orbit. Since the largest engineering challenges are likely to reside in the design and assembly of the 150-m diameter primary reflector, this preliminary study focuses on that component. The same technology developed for construction of the primary would then be readily used for the smaller optical structures (secondary, tertiary, etc.). A reasonable set of ground and on-orbit loading scenarios is compiled from the literature and used to define the structural performance requirements and size the primary reflector. A surface precision analysis shows that active adjustment of the primary structure is required in order to meet stringent optical surface requirements. Two potential actuation strategies are discussed, along with potential actuation devices at the current state of the art. The findings of this research effort indicate that successful technology development combined with further analysis will likely enable such a telescope to be built in the future.
Janzen, Kathryn Louise
Largely because of their resistance to magnetic fields, silicon photomultipliers (SiPMs) are being considered as the readout for the GlueX Barrel Calorimeter (BCAL), a key component of the GlueX detector located immediately inside a 2.2 T superconducting solenoid. SiPMs with an active area of 1x1 mm2 have been investigated for use in other experiments, but detectors with larger active areas are required for the GlueX BCAL. This puts the GlueX collaboration in the unique position of pioneering this detection technology by driving the development of larger-area sensors. SensL, a photonics research and development company in Ireland, has been collaborating with the University of Regina GlueX group to develop prototype large-area SiPMs comprising sixteen 3x3 mm2 cells assembled in a close-packed matrix. Performance parameters of individual SensL 1x1 mm2 and 3x3 mm2 SiPMs, along with prototype SensL SiPM arrays, are tested, including current versus voltage characteristics, photon detection efficiency, and gain uniformity, in an effort to determine the suitability of these detectors for the GlueX BCAL readout.
Yano, H.; Hirai, T.; Arai, K.; Fujii, M.
2017-12-01
PVDF thin films have long been space-proven instruments for hypervelocity impact detection in diverse regions of the Solar System, from the orbit of Venus by IKAROS to that of Pluto by New Horizons. In particular, the lightweight but large-area membrane of a solar sail spacecraft is an ideal location for such detectors, allowing detection of statistically sufficient numbers of micrometeoroids large enough to be sensitive to mean motion resonances and other gravitational effects of flux enhancements and voids associated with planets. The IKAROS spacecraft first detected in situ dust flux enhancements and a gap region within the Earth's circumsolar dust ring, as well as those of Venus, using the 0.54 m^2 detection area of the ALADDIN sensors on the solar sail membrane. Advancing this heritage, the Solar Power Sail membrane will carry 0.4+ m^2 ALADDIN-II PVDF sensors with improved impact signal processing units to detect both hypervelocity dust impacts during the interplanetary cruising phase and slow dust impacts bound to the Jupiter Trojan region during its rendezvous phase.
Regulation of NF-κB oscillation by spatial parameters in true intracellular space (TiCS)
Ohshima, Daisuke; Sagara, Hiroshi; Ichikawa, Kazuhisa
2013-10-01
Transcription factor NF-κB is activated by cytokine stimulation, viral infection, or a hypoxic environment, leading to its translocation to the nucleus. Nuclear NF-κB is then exported from the nucleus back to the cytoplasm, and through repetitive import and export, NF-κB shows damped oscillation with a period of 1.5-2.0 h. The oscillation pattern of NF-κB is thought to determine the gene expression profile. We published a report on a computational simulation of the oscillation of nuclear NF-κB in a 3D spherical cell, and showed the importance of spatial parameters such as the diffusion coefficient and the locus of translation in determining the oscillation pattern. Although the value of the diffusion coefficient is inherent to a protein species, its effective value can be modified by organelle crowding in the intracellular space. Here we tested this possibility by computer simulation. The results indicate that the effective value of the diffusion coefficient is significantly changed by organelle crowding, and that this alters the oscillation pattern of nuclear NF-κB.
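The damped oscillation itself can be illustrated with a toy model. Here a linear damped oscillator stands in for the nuclear NF-κB level; the period and damping ratio are assumptions chosen to fall in the reported 1.5-2.0 h range, whereas real models couple NF-κB to IκB feedback and spatial transport:

```python
import numpy as np

period_h = 1.8                      # assumed period, within the 1.5-2.0 h range
omega = 2.0 * np.pi / period_h
zeta = 0.1                          # assumed damping ratio -> damped oscillation

dt, t_end = 0.001, 10.0             # time step and duration [h]
n = int(t_end / dt)
x, v = 1.0, 0.0                     # initial nuclear NF-κB level (arbitrary units)
trace = np.empty(n)
for i in range(n):
    a = -2.0 * zeta * omega * v - omega**2 * x   # damped oscillator acceleration
    x, v = x + v * dt, v + a * dt                # forward Euler step
    trace[i] = x

# successive maxima shrink: the hallmark of damped oscillation
peaks = [trace[i] for i in range(1, n - 1)
         if trace[i] > trace[i - 1] and trace[i] > trace[i + 1]]
print(all(a > b > 0 for a, b in zip(peaks, peaks[1:])))
```

In the spatial simulations the paper describes, the effective diffusion coefficient shifts the timing of nuclear import and export and thereby changes both the period and the decay of such a trace.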
Energy Technology Data Exchange (ETDEWEB)
Wang, Xinchang, E-mail: wangxinchangz@163.com; Shen, Xiaotian; Sun, Fanghong; Shen, Bin
2016-12-01
Highlights: • A verified simulation model using a novel filament arrangement is constructed. • Influences of filament parameters are clarified. • A coefficient between simulated and experimental results is proposed. • Orthogonal simulations are adopted to optimize filament parameters. • A general filament arrangement suitable for different conditions is determined. - Abstract: Chemical vapor deposition (CVD) diamond films have been widely applied as protective coatings on a variety of anti-frictional and wear-resistant components, owing to their excellent mechanical and tribological properties, close to those of natural diamond. In some applications, the inner hole surface of a component serves as the working surface and suffers severe frictional or erosive wear. It is difficult to realize uniform deposition of diamond films on the surfaces of inner holes, especially ultra-large inner holes. Taking a SiC compact die with an aperture of 80 mm as an example, a novel filament arrangement with a certain number of filaments evenly distributed on a circle is designed, and the specific effects of the filament parameters, including the filament number, arrangement direction, filament temperature, filament diameter, circumradius and the downward translation, on the substrate temperature distribution are studied by computational fluid dynamics (CFD) simulations based on the finite volume method (FVM), adopting a modified computational model consistent with the actual deposition environment. Corresponding temperature measurement experiments are also conducted to verify the rationality of the computational model. With a view to depositing a uniform boron-doped micro-crystalline, undoped micro-crystalline and undoped fine-grained composite diamond (BDM-UMC-UFGCD) film on such an inner hole surface, the filament parameters mentioned above are accurately optimized and compensated by orthogonal simulations. Moreover, deposition experiments adopting compensated optimized
Adaptive Parameter Estimation of Person Recognition Model in a Stochastic Human Tracking Process
Nakanishi, W.; Fuse, T.; Ishikawa, T.
2015-05-01
This paper aims at the estimation of the parameters of person recognition models using a sequential Bayesian filtering method. In many human tracking methods, the parameters of the models used to recognize the same person in successive frames are set in advance of the tracking process. In real situations, these parameters may change according to the observation conditions and the difficulty of human position prediction. Thus, in this paper we formulate an adaptive parameter estimation using a general state space model. Firstly, we explain how to formulate human tracking in a general state space model with its components. Then, referring to previous research, we use the Bhattacharyya coefficient to formulate the observation model of the general state space model, which corresponds to the person recognition model. The observation model in this paper is a function of the Bhattacharyya coefficient with one unknown parameter. At last we sequentially estimate this parameter.
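The Bhattacharyya coefficient the observation model builds on is simple to compute for two appearance histograms. A minimal sketch follows; the exponential weighting and its `lam` parameter are this sketch's own assumptions, standing in for the paper's one-parameter observation model:

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient of two histograms (1 = identical, 0 = disjoint)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()                  # normalize to probability distributions
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

def observation_likelihood(p, q, lam=20.0):
    """Hypothetical observation model: higher similarity -> higher particle weight."""
    return float(np.exp(-lam * (1.0 - bhattacharyya(p, q))))

# identical color histograms give coefficient 1, disjoint-leaning ones less
a = [4, 2, 2, 0]
b = [0, 0, 2, 6]
print(round(bhattacharyya(a, a), 3), round(bhattacharyya(a, b), 3))  # → 1.0 0.25
```

In a particle-filter tracker, a likelihood of this shape would weight each predicted position by how well its local histogram matches the tracked person's reference histogram, and the unknown parameter itself can then be estimated sequentially alongside the state.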