International Nuclear Information System (INIS)
Geis, J.W.
1992-01-01
This paper discusses the Space Power Subsystem Sizing (SPSS) program developed by the Aerospace Power Division of Wright Laboratory, Wright-Patterson Air Force Base, Ohio. SPSS contains the equations and algorithms needed to calculate photovoltaic array power performance, including end-of-life (EOL) and beginning-of-life (BOL) specific power (W/kg) and areal power density (W/m²). Additional equations and algorithms are included in the spreadsheet for determining maximum eclipse time as a function of orbital altitude and inclination. SPSS has been used to determine the performance of several candidate power subsystems for potential Air Force and SDIO applications. Trade-offs have been made between subsystem weight and areal power density (W/m²) as influenced by orbital high-energy particle flux and time in orbit.
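The eclipse relation the abstract mentions can be sketched with standard circular-orbit geometry. SPSS itself is not public, so the cylindrical-shadow, worst-case (sun in the orbit plane) assumptions below are illustrative, not the program's actual algorithm:

```python
import math

MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.378137e6  # Earth's equatorial radius, m

def max_eclipse_time(altitude_m):
    """Maximum eclipse duration (s) for a circular orbit, assuming a
    cylindrical Earth shadow and the sun vector in the orbit plane."""
    r = R_EARTH + altitude_m
    period = 2.0 * math.pi * math.sqrt(r**3 / MU)  # orbital period, s
    half_shadow = math.asin(R_EARTH / r)           # half-angle of shadow arc, rad
    return period * half_shadow / math.pi          # shadow fraction times period

# A 400 km LEO spends roughly 36 minutes per orbit in eclipse.
t = max_eclipse_time(400e3)
print(t / 60.0)
```

For higher orbits both the shadow half-angle and the shadow fraction shrink, which is why eclipse time matters most for battery sizing in LEO.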
Large size space construction for space exploitation
Kondyurin, Alexey
2016-07-01
Space exploitation is impossible without large space structures. We need sufficiently large volumes of pressurized protective frames for crew, passengers, and space processing equipment; we should not be limited in space. At present, the size and mass of space constructions are limited by the capacity of the launch vehicle, which limits the future of human space exploitation and the development of space industry. Large-size space constructions can be made using curing technology for fiber-filled composites with a reactive matrix, applied directly in free space. For curing, a fabric impregnated with a liquid matrix (prepreg) is prepared under terrestrial conditions and shipped in a container to orbit. In due time the prepreg is unfolded by inflating. After the polymerization reaction, the durable construction can be fitted out with air, apparatus, and life support systems. Our experimental studies of the curing processes in a simulated free space environment showed that curing of composites in free space is possible, and large-size space constructions can be developed. Projects for a space station, Moon base, Mars base, mining station, interplanetary spaceship, telecommunication station, space observatory, space factory, antenna dish, radiation shield, and solar sail are proposed and reviewed. The study was supported by the Humboldt Foundation, ESA (contract 17083/03/NL/SFe), the NASA stratospheric balloon program, and RFBR grants (05-08-18277, 12-08-00970 and 14-08-96011).
Directory of Open Access Journals (Sweden)
J Rasmus Nielsen
Full Text Available Trawl survey data with high spatial and seasonal coverage were analysed using a variant of the Log Gaussian Cox Process (LGCP) statistical model to estimate unbiased relative fish densities. The model estimates correlations between observations according to time, space, and fish size and includes zero observations and over-dispersion. It exploits the fact that the correlation between numbers of fish caught increases as the distance in space and time between the fish decreases, and that the correlation between size groups in a haul increases as the difference in size decreases. Here the model is extended in two ways. Instead of assuming a natural-scale size correlation, the model is further developed to allow for a transformed length scale. Furthermore, in the present application, the spatial- and size-dependent correlation between species was included. For cod (Gadus morhua) and whiting (Merlangius merlangus), a common structured size correlation was fitted, and a separable structure between the time and space-size correlations was found for each species, whereas more complex structures were required to describe the correlation between species (and space-size). The within-species time correlation is strong, whereas the correlations between the species are weaker over time but strong within the year.
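The separability idea can be illustrated with a toy correlation function: the correlation is a product of terms that each decay as one of the three separations grows. The exponential form and all parameter values below are hypothetical; the study's actual covariances are more elaborate and estimated from data:

```python
import math

def separable_correlation(dt, ds, dz, phi_t, phi_s, phi_z):
    """Toy separable time-space-size correlation: a product of exponential
    decays in time lag dt, spatial distance ds and size difference dz.
    Correlation rises as any of the three separations shrinks."""
    return math.exp(-(dt / phi_t + ds / phi_s + dz / phi_z))

# Nearby hauls of similar-sized fish correlate strongly; distant, dissimilar ones weakly.
near = separable_correlation(1.0, 5.0, 2.0, phi_t=4.0, phi_s=50.0, phi_z=10.0)
far = separable_correlation(12.0, 200.0, 30.0, phi_t=4.0, phi_s=50.0, phi_z=10.0)
print(near, far)
```

A non-separable structure, as fitted between species in the paper, is exactly one that cannot be factored into such a product.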
DEFF Research Database (Denmark)
Nielsen, J. Rasmus; Kristensen, Kasper; Lewy, Peter
2014-01-01
Trawl survey data with high spatial and seasonal coverage were analysed using a variant of the Log Gaussian Cox Process (LGCP) statistical model to estimate unbiased relative fish densities. The model estimates correlations between observations according to time, space, and fish size and includes...
National Oceanic and Atmospheric Administration, Department of Commerce — Collection includes presentation materials and outputs from operational space environment models produced by the NOAA Space Weather Prediction Center (SWPC) and...
Казыдуб, Надежда
2013-01-01
Discourse space is a complex structure that incorporates different levels and dimensions. The paper focuses on developing a multidisciplinary approach that is congruent to the complex character of the modern discourse. Two models of discourse space are proposed here. The Integrated Model reveals the interaction of different categorical mechanisms in the construction of the discourse space. The Evolutionary Model describes the historical roots of the modern discourse. It also reveals historica...
Space debris: modeling and detectability
Wiedemann, C.; Lorenz, J.; Radtke, J.; Kebschull, C.; Horstmann, A.; Stoll, E.
2017-01-01
High-precision orbit determination is required for the detection and removal of space debris, and knowledge of the distribution of debris objects in orbit is necessary for orbit determination by active or passive sensors. The results can be used to investigate on which orbits objects of a given size can be found, and how often. The knowledge of the orbital distribution of the objects, together with their properties and sensor performance models, provides the basis for estimating expected detection rates. Comprehensive modeling of the space debris environment is required for this. This paper provides an overview of the current state of knowledge about the space debris environment; in particular, non-cataloged small objects are evaluated. Furthermore, improvements concerning the update of the current space debris model are addressed. The model of the space debris environment is based on the simulation of historical events, such as fragmentations due to explosions and collisions that actually occurred in Earth orbits. The orbital distribution of debris is simulated by propagating the orbits, considering all perturbing forces, up to a reference epoch. The modeled object population is compared with measured data and validated. The model provides a statistical distribution of space objects according to their size and number, based on a correct treatment of orbital mechanics, which allows for a realistic description of the space debris environment. Subsequently, a realistic prediction can be made of how many pieces of debris can be expected on certain orbits. To validate the model, a software tool has been developed which allows the simulation of the observation behavior of ground-based or space-based sensors; thus, it is possible to compare published measurement data with simulated detections. This tool can also be used for the simulation of sensor measurement campaigns. It is
International Nuclear Information System (INIS)
Wald, H.B.
1990-01-01
The 'PATH' codes are used to design magnetic optics subsystems for neutral particle beam systems. They include a 2-1/2-D and three 3-D space charge models, two of which have recently been added. This paper describes the 3-D models and reports on preliminary benchmark studies in which these models are checked for stability as the cloud size is varied and for consistency with each other. Differences between the models are investigated, and the computer time requirements for running them are established.
DEFF Research Database (Denmark)
Ravn-Jonsen, Lars
Ecosystem Management requires models that can link the ecosystem level to the operation level. This link can be created by an ecosystem production model. Because the function of the individual fish in the marine ecosystem, seen in trophic context, is closely related to its size, the model groups...... fish according to size. The model summarises individual predation events into ecosystem level properties, and thereby uses the law of conservation of mass as a framework. This paper provides the background, the conceptual model, basic assumptions, integration of fishing activities, mathematical...... the predator--prey interaction, (ii) mass balance in the predator--prey allocation, and (iii) mortality and somatic growth as a consequence of the predator--prey allocation. By incorporating additional assumptions, the model can be extended to other dimensions of the ecosystem, for example, space or species...
Modeling and Sizing of Supercapacitors
Directory of Open Access Journals (Sweden)
PETREUS, D.
2008-06-01
Full Text Available Faced with the numerous challenges raised by the requirements of modern industries for higher power and higher energy, the study of supercapacitors has come to play an important role in offering viable solutions to some of these requirements. This paper presents modeling based on surface redox reactions in order to study the origin of the high capacity of the EDLC (electrical double-layer capacitor) and to better understand the working principles of supercapacitors. Some application-dependent sizing methods are also presented, since proper sizing can increase the efficiency and the life cycle of supercapacitor-based systems.
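A minimal example of application-dependent sizing is the generic energy-balance rule (this is a textbook starting point, not the paper's redox-reaction model; the voltage window and energy target below are assumed):

```python
def required_capacitance(energy_j, v_max, v_min):
    """Capacitance (F) needed to deliver energy_j (J) while the bank
    discharges from v_max to v_min, from E = C * (v_max^2 - v_min^2) / 2.
    Ignores ESR losses and capacitance's voltage dependence."""
    return 2.0 * energy_j / (v_max**2 - v_min**2)

# Deliver 5 kJ while discharging a bank from 48 V down to 24 V:
c = required_capacitance(5000.0, 48.0, 24.0)
print(c)  # ≈ 5.79 F
```

Real sizing also has to account for equivalent series resistance and aging, which is why the paper ties sizing to the application profile.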
Michael N. Gooseff; Justin K. Anderson; Steven M. Wondzell; Justin LaNier; Roy. Haggerty
2005-01-01
Studies of hyporheic exchange flows have identified physical features of channels that control exchange flow at the channel unit scale, namely slope breaks in the longitudinal profile of streams that generate subsurface head distributions. We recently completed a field study that suggested channel unit spacing in stream longitudinal profiles can be used to predict the...
Modelling of Size Effect with Regularised Continua
Directory of Open Access Journals (Sweden)
H. Askes
2004-01-01
Full Text Available A nonlocal damage continuum and a viscoplastic damage continuum are used to model size effects. Three-point bending specimens are analysed, whereby a distinction is made between unnotched specimens, specimens with a constant notch and specimens with a proportionally scaled notch. Numerical finite element simulations have been performed for specimen sizes in a range of 1:64. Size effects are established in terms of nominal strength and compared to existing size effect models from the literature.
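One of the "existing size effect models from the literature" that such nominal-strength results are typically compared against is Bazant's size effect law for notched quasi-brittle specimens. A minimal sketch, with the material parameters B*f_t and D0 assumed rather than taken from the paper:

```python
import math

def bazant_nominal_strength(D, B_ft, D0):
    """Bazant's size effect law: nominal strength
    sigma_N = B*f_t / sqrt(1 + D/D0), where D is the specimen size,
    B*f_t a strength parameter and D0 the transitional size."""
    return B_ft / math.sqrt(1.0 + D / D0)

# Over the paper's 1:64 size range the nominal strength drops markedly:
s1 = bazant_nominal_strength(1.0, B_ft=4.0, D0=10.0)
s64 = bazant_nominal_strength(64.0, B_ft=4.0, D0=10.0)
print(s1, s64)
```

Small specimens (D << D0) behave plastically with near-constant strength, while large ones (D >> D0) approach the LEFM slope of -1/2 in a log-log strength-size plot.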
Cost Modeling for Space Telescope
Stahl, H. Philip
2011-01-01
Parametric cost models are an important tool for planning missions, comparing concepts, and justifying technology investments. This paper presents on-going efforts to develop single-variable and multi-variable cost models for the space telescope optical telescope assembly (OTA). These models are based on data collected from historical space telescope missions. Standard statistical methods are used to derive cost estimating relationships (CERs) for OTA cost versus aperture diameter and mass. The results are compared with previously published models.
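A single-variable CER of the common power-law form cost = a * D^b can be derived by ordinary least squares in log-log space. The data below are synthetic, generated to follow an exact power law, not the paper's historical mission data:

```python
import math

def fit_power_law_cer(diameters, costs):
    """Fit cost = a * D^b by least squares on (log D, log cost)."""
    n = len(diameters)
    xs = [math.log(d) for d in diameters]
    ys = [math.log(c) for c in costs]
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = math.exp(ybar - b * xbar)
    return a, b

# Synthetic data following cost = 10 * D^1.7:
ds = [0.5, 1.0, 2.0, 4.0]
cs = [10.0 * d ** 1.7 for d in ds]
a, b = fit_power_law_cer(ds, cs)
print(a, b)  # ≈ 10, 1.7
```

With real mission data the fit is noisy, and the exponent b is the headline result: it says how steeply cost grows with aperture.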
State Space Modeling Using SAS
Directory of Open Access Journals (Sweden)
Rajesh Selukar
2011-05-01
Full Text Available This article provides a brief introduction to the state space modeling capabilities in SAS, a well-known statistical software system. SAS provides state space modeling in a few different settings. SAS/ETS, the econometric and time series analysis module of the SAS system, contains many procedures that use state space models to analyze univariate and multivariate time series data. In addition, SAS/IML, an interactive matrix language in the SAS system, provides Kalman filtering and smoothing routines for stationary and nonstationary state space models. SAS/IML also provides support for linear algebra and nonlinear function optimization, which makes it a convenient environment for general-purpose state space modeling.
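The Kalman filtering that the SAS/IML routines provide can be illustrated on the simplest state space model, the scalar local level model. This is a generic language-agnostic sketch, not SAS code:

```python
def kalman_filter(ys, q, r, x0=0.0, p0=1e6):
    """Scalar Kalman filter for the local level model
        x_t = x_{t-1} + w_t,  w_t ~ N(0, q)   (state equation)
        y_t = x_t + v_t,      v_t ~ N(0, r)   (observation equation)
    Returns the filtered state means."""
    x, p = x0, p0
    out = []
    for y in ys:
        p = p + q                # time update: state variance grows
        k = p / (p + r)          # Kalman gain
        x = x + k * (y - x)      # measurement update
        p = (1.0 - k) * p
        out.append(x)
    return out

ys = [1.1, 0.9, 1.2, 1.0, 5.0, 1.05]
est = kalman_filter(ys, q=0.01, r=0.5)
print(est)
```

Note how the filtered estimate damps the outlier at t = 5 instead of jumping to it: the gain k trades off state noise q against observation noise r.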
Estimating Functions of Distributions Defined over Spaces of Unknown Size
Directory of Open Access Journals (Sweden)
David H. Wolpert
2013-10-01
Full Text Available We consider Bayesian estimation of information-theoretic quantities from data, using a Dirichlet prior. Acknowledging the uncertainty of the event space size m and the Dirichlet prior's concentration parameter c, we treat both as random variables set by a hyperprior. We show that the associated hyperprior, P(c, m), obeys a simple “Irrelevance of Unseen Variables” (IUV) desideratum iff P(c, m) = P(c)P(m). Thus, requiring IUV greatly reduces the number of degrees of freedom of the hyperprior. Some information-theoretic quantities can be expressed multiple ways, in terms of different event spaces, e.g., mutual information. With all hyperpriors (implicitly) used in earlier work, different choices of this event space lead to different posterior expected values of these information-theoretic quantities. We show that there is no such dependence on the choice of event space for a hyperprior that obeys IUV. We also derive a result that allows us to exploit IUV to greatly simplify calculations of quantities like the posterior expected mutual information or posterior expected multi-information. We also use computer experiments to favorably compare an IUV-based estimator of entropy to three alternative methods in common use. We end by discussing how seemingly innocuous changes to the formalization of an estimation problem can substantially affect the resultant estimates of posterior expectations.
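The fixed-m building block of such estimators is the posterior expected entropy under a symmetric Dirichlet prior; the paper's contribution, averaging over unknown m and c, is layered on top of this and is not reproduced here. The digamma implementation below is a stdlib-only approximation:

```python
import math

def digamma(x):
    """Digamma via the recurrence psi(x) = psi(x+1) - 1/x plus an
    asymptotic expansion for large arguments (stdlib-only sketch)."""
    result = 0.0
    while x < 6.0:
        result -= 1.0 / x
        x += 1.0
    return (result + math.log(x) - 1.0 / (2 * x)
            - 1.0 / (12 * x**2) + 1.0 / (120 * x**4))

def posterior_expected_entropy(counts, alpha):
    """Posterior mean Shannon entropy (nats) for a KNOWN event-space
    size m = len(counts), under a symmetric Dirichlet(alpha) prior:
    E[H] = psi(A + 1) - sum_i (a_i / A) * psi(a_i + 1), a_i = n_i + alpha."""
    a = [n + alpha for n in counts]
    A = sum(a)
    return digamma(A + 1) - sum(ai / A * digamma(ai + 1) for ai in a)

h = posterior_expected_entropy([500, 500], alpha=1.0)
print(h)  # ≈ ln 2
```

With heavy symmetric counts the estimate approaches ln m, as expected; the interesting regime for the paper is sparse counts with m itself uncertain.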
Two-level method with coarse space size independent convergence
Energy Technology Data Exchange (ETDEWEB)
Vanek, P.; Brezina, M. [Univ. of Colorado, Denver, CO (United States); Tezaur, R.; Krizkova, J. [UWB, Plzen (Czech Republic)
1996-12-31
The basic disadvantage of the standard two-level method is the strong dependence of its convergence rate on the size of the coarse-level problem. In order to obtain the optimal convergence result, one is limited to using a coarse space which is only a few times smaller than the size of the fine-level one. Consequently, the asymptotic cost of the resulting method is the same as in the case of using a coarse-level solver for the original problem. Today's two-level domain decomposition methods typically offer an improvement by yielding a rate of convergence which depends on the ratio of fine and coarse level only polylogarithmically. However, these methods require the use of local subdomain solvers for which straightforward application of iterative methods is problematic, while the usual application of direct solvers is expensive. We suggest a method diminishing significantly these difficulties.
IASM: Individualized activity space modeler
Hasanzadeh, Kamyar
2018-01-01
Researchers from various disciplines have long been interested in analyzing and describing human mobility patterns. Activity space (AS), defined as an area encapsulating daily human mobility and activities, has been at the center of this interest. However, given the applied nature of research in this field and the complexity that advanced geographical modeling can pose to its users, the proposed models remain simplistic and inaccurate in many cases. Individualized Activity Space Modeler (IASM) is a geographic information system (GIS) toolbox, written in Python programming language using ESRI's Arcpy module, comprising four tools aiming to facilitate the use of advanced activity space models in empirical research. IASM provides individual-based and context-sensitive tools to estimate home range distances, delineate activity spaces, and model place exposures using individualized geographical data. In this paper, we describe the design and functionality of IASM, and provide an example of how it performs on a spatial dataset collected through an online map-based survey.
Simplicial models for trace spaces
DEFF Research Database (Denmark)
Raussen, Martin
Directed Algebraic Topology studies topological spaces in which certain directed paths (d-paths) - in general irreversible - are singled out. The main interest concerns the spaces of directed paths between given end points - and how those vary under variation of the end points. The original...... motivation stems from certain models for concurrent computation. So far, spaces of d-paths and their topological invariants have only been determined in cases that were elementary to overlook. In this paper, we develop a systematic approach describing spaces of directed paths - up to homotopy equivalence...
Effect of crack size on gas leakage characteristics in a confined space
Energy Technology Data Exchange (ETDEWEB)
Sung, Kun Hyuk; Ryou, Hong Sun; Yoon, Kee Bong; Lee, Hy Uk; Bang, Joo Won [Chung-Ang University, Seoul (Korea, Republic of); Li, Longnan; Choi, Jin Wook; Kim, Dae Joong [Sogang University, Seoul (Korea, Republic of)
2016-07-15
We numerically investigated the influence of crack size on gas leakage characteristics in a confined space. A real-scale model of an underground combined cycle power plant (CCPP) was used to simulate gas leakage characteristics for different crack sizes of 10 mm, 15 mm and 20 mm. The commercial code Fluent (v.16.1) was used for three-dimensional simulation. In particular, a risk region indicating the probability of ignition was newly defined using the lower flammable limit (LFL) of the methane gas used in the present study, in order to characterize the gas propagation and the damage area in the space. From the results, the longitudinal and transverse leakage distances were estimated and analyzed for quantitative evaluation of the risk area. The crack size was found to have a great impact on the longitudinal leakage distance, which increases with the crack size. For the 20 mm crack, the longitudinal leakage distance increased suddenly after 180 s, whereas it remained constant after 2 s in the other cases. This is because a confinement effect, caused by circulation flows in the whole space, increased the gas concentration near the gas flow released from the crack. The confinement effect is thus closely associated with the released mass flow rate, which changes with the crack size. This result should be useful in designing gas detector systems for preventing accidents in confined spaces such as the CCPP.
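The LFL-based risk-region idea can be sketched independently of the CFD. Methane's LFL is about 5 % by volume; the concentration profile below is invented for illustration, not taken from the simulations:

```python
def risk_region(concentrations, lfl=0.05):
    """Flag cells whose methane volume fraction reaches the lower
    flammable limit (LFL ~ 5 vol% for methane)."""
    return [c >= lfl for c in concentrations]

# Hypothetical concentrations sampled along the longitudinal axis of the hall:
profile = [0.12, 0.08, 0.051, 0.03, 0.01]
flags = risk_region(profile)
# The longitudinal leakage distance is the extent of the contiguous risk region.
print(flags)  # [True, True, True, False, False]
```

In the paper this thresholding is applied to the full 3-D concentration field from Fluent, and the extent of the flagged region gives the longitudinal and transverse leakage distances.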
Estimating the size of the solution space of metabolic networks
Directory of Open Access Journals (Sweden)
Mulet Roberto
2008-05-01
Full Text Available Abstract Background Cellular metabolism is one of the most investigated systems of biological interactions. While the topological nature of individual reactions and pathways in the network is quite well understood, there is still a lack of comprehension regarding the global functional behavior of the system. In the last few years, flux-balance analysis (FBA) has been the most successful and widely used technique for studying metabolism at the system level. This method relies strongly on the hypothesis that the organism maximizes an objective function; however, only under very specific biological conditions (e.g. maximization of biomass for E. coli in rich nutrient medium) does the cell seem to obey such an optimization law. A more refined analysis not assuming extremization remains an elusive task for large metabolic systems due to algorithmic limitations. Results In this work we propose a novel algorithmic strategy that provides an efficient characterization of the whole set of stable fluxes compatible with the metabolic constraints. Using a technique derived from the fields of statistical physics and information theory, we designed a message-passing algorithm to estimate the size of the affine space containing all possible steady-state flux distributions of metabolic networks. The algorithm, based on the well-known Bethe approximation, can be used to approximately compute the volume of a non-full-dimensional convex polytope in high dimensions. We first compare the accuracy of the predictions with an exact algorithm on small random metabolic networks. We also verify that the predictions of the algorithm match closely those of Monte Carlo based methods in the case of the Red Blood Cell metabolic network. Then we test the effect of gene knock-outs on the size of the solution space in the case of E. coli central metabolism. Finally we analyze the statistical properties of the average fluxes of the reactions in the E. coli metabolic network. Conclusion We propose a
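For a toy network small enough for naive sampling, the quantity being estimated, the relative size of the bounded steady-state flux space, looks like this. The paper uses Bethe-approximation message passing precisely because such sampling is hopeless at genome scale; the network below is invented:

```python
import random

def solution_space_fraction(n_samples=100_000, seed=1):
    """Monte Carlo estimate of the relative size of the steady-state
    flux space of a toy network: one metabolite, stoichiometry
    v1 = v2 + v3, all fluxes bounded in [0, 1]. Sampling the free
    fluxes (v2, v3) uniformly, the feasible set {v2 + v3 <= 1}
    (which keeps v1 within its bound) has relative volume 1/2."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples)
               if rng.random() + rng.random() <= 1.0)
    return hits / n_samples

f = solution_space_fraction()
print(f)  # ≈ 0.5
```

The steady-state condition S·v = 0 confines fluxes to an affine subspace, and the bounds carve a convex polytope out of it; its volume is what the message-passing algorithm approximates.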
State-Space Modelling in Marine Science
DEFF Research Database (Denmark)
Albertsen, Christoffer Moesgaard
State-space models provide a natural framework for analysing time series that cannot be observed without error. This is the case for fisheries stock assessments and movement data from marine animals. In fisheries stock assessments, the aim is to estimate the stock size; however, the only data...... available is the number of fish removed from the population and samples on a small fraction of the population. In marine animal movement, accurate positioning systems such as GPS cannot be used. Instead, inaccurate alternatives must be used, yielding observations with large errors. Both assessment and individual...... animal movement models are important for management and conservation of marine animals. Consequently, models should be developed to be operational in a management context while adequately evaluating uncertainties in the models. This thesis develops state-space models using the Laplace approximation...
Toups, Larry; Simon, Matthew; Smitherman, David; Spexarth, Gary
2012-01-01
NASA's Human Space Flight Architecture Team (HAT) is a multi-disciplinary, cross-agency study team that conducts strategic analysis of integrated development approaches for human and robotic space exploration architectures. During each analysis cycle, HAT iterates and refines the definition of design reference missions (DRMs), which inform the definition of a set of integrated capabilities required to explore multiple destinations. An important capability identified in this capability-driven approach is habitation, which is necessary for crewmembers to live and work effectively during long duration transits to and operations at exploration destinations beyond Low Earth Orbit (LEO). This capability is captured by an element referred to as the Deep Space Habitat (DSH), which provides all equipment and resources for the functions required to support crew safety, health, and work including: life support, food preparation, waste management, sleep quarters, and housekeeping. The purpose of this paper is to describe the design of the DSH capable of supporting crew during exploration missions. First, the paper describes the functionality required in a DSH to support the HAT defined exploration missions, the parameters affecting its design, and the assumptions used in the sizing of the habitat. Then, the process used for arriving at parametric sizing estimates to support additional HAT analyses is detailed. Finally, results from the HAT Cycle C DSH sizing are presented followed by a brief description of the remaining design trades and technological advancements necessary to enable the exploration habitation capability.
Computational Modeling of Space Physiology
Lewandowski, Beth E.; Griffin, Devon W.
2016-01-01
The Digital Astronaut Project (DAP), within NASA's Human Research Program, develops and implements computational modeling for use in the mitigation of human health and performance risks associated with long duration spaceflight. Over the past decade, DAP developed models to provide insights into space flight related changes to the central nervous system, cardiovascular system and the musculoskeletal system. Examples of the models and their applications include biomechanical models applied to advanced exercise device development, bone fracture risk quantification for mission planning, accident investigation, bone health standards development, and occupant protection. The International Space Station (ISS), in its role as a testing ground for long duration spaceflight, has been an important platform for obtaining human spaceflight data. DAP has used preflight, in-flight and post-flight data from short and long duration astronauts for computational model development and validation. Examples include preflight and post-flight bone mineral density data, muscle cross-sectional area, and muscle strength measurements. Results from computational modeling supplement space physiology research by informing experimental design. Using these computational models, DAP personnel can easily identify both important factors associated with a phenomenon and areas where data are lacking. This presentation will provide examples of DAP computational models, the data used in model development and validation, and applications of the model.
Large-size space debris flyby in low earth orbits
Baranov, A. A.; Grishko, D. A.; Razoumny, Y. N.
2017-09-01
The analysis of the NORAD catalogue of space objects, executed with respect to the overall sizes of upper stages and last stages of carrier rockets, allows the classification of 5 groups of large-size space debris (LSSD). These groups are defined according to the proximity of the orbital inclinations of the involved objects. The orbits within a group have various values of deviations in the Right Ascension of the Ascending Node (RAAN). It is proposed to use the evolution portrait of RAAN deviations to clarify the relative spatial distribution of the orbital planes in a group, with the RAAN deviations calculated with respect to the precessing orbital plane of a particular object. In the case of the first three groups (inclinations i = 71°, i = 74°, i = 81°), the straight lines of relative RAAN deviations almost never intersect each other, so a simple, successive flyby of the group's elements is effective, but a significant total ΔV is required to form the drift orbits. In the case of the fifth group (Sun-synchronous orbits), these straight lines intersect each other chaotically many times due to the noticeable differences in the values of the semi-major axes and orbital inclinations. The existence of intersections makes it possible to create a flyby sequence for an LSSD group in which the orbit of one LSSD object simultaneously serves as the drift orbit to attain another LSSD object. This flyby scheme, requiring less ΔV, is called "diagonal." The RAAN deviations' evolution portrait built for the fourth group (studied in this paper) contains both types of lines, so a simultaneous combination of the diagonal and successive flyby schemes is possible. The total ΔV and time costs were calculated to cover all the elements of the 4th group. The article also presents results obtained for the flyby problem in all five LSSD groups. General recommendations are given concerning the required reserve of total
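The RAAN deviations in such a portrait evolve because the J2-driven nodal precession rate differs slightly between objects with different semi-major axes and inclinations. A standard sketch of that rate (textbook constants; the example orbit is illustrative of the Sun-synchronous group):

```python
import math

MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_E = 6.378137e6      # Earth's equatorial radius, m
J2 = 1.08263e-3       # Earth's second zonal harmonic

def raan_drift_deg_per_day(a_m, inc_deg, ecc=0.0):
    """Secular J2 nodal precession: dRAAN/dt = -1.5 * J2 * n * (Re/p)^2 * cos(i).
    Differences in this rate between objects open and close the relative
    RAAN deviations exploited by the flyby schemes."""
    n = math.sqrt(MU / a_m**3)          # mean motion, rad/s
    p = a_m * (1.0 - ecc**2)            # semi-latus rectum, m
    rate = -1.5 * J2 * n * (R_E / p)**2 * math.cos(math.radians(inc_deg))
    return math.degrees(rate) * 86400.0

# ~800 km altitude, i = 98.6 deg: close to the Sun-synchronous +0.9856 deg/day.
r = raan_drift_deg_per_day(7.178e6, 98.6)
print(r)
```

Objects with nearly equal rates produce near-parallel lines in the portrait (groups 1-3); spread in a and i makes the lines cross (group 5), enabling the "diagonal" scheme.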
Simplicial models of trace spaces
DEFF Research Database (Denmark)
Raussen, Martin
2010-01-01
variation of the end points. The original motivation stems from certain models for concurrent computation. So far, homotopy types of spaces of d-paths and their topological invariants have only been determined in cases that were elementary to overlook. In this paper, we develop a systematic approach...
Modeling and optimization of wet sizing process
International Nuclear Information System (INIS)
Thai Ba Cau; Vu Thanh Quang and Nguyen Ba Tien
2004-01-01
Mathematical simulation based on Stokes' law has been carried out for the wet sizing process in cylindrical equipment at laboratory and semi-industrial scale. The model consists of mathematical equations describing relations between variables, such as: - the residence time distribution function of emulsion particles in the separating zone of the equipment, depending on flow rate, height, diameter and structure of the equipment; - the size distribution function in the fine and coarse fractions, depending on the residence time distribution function of the emulsion particles, the characteristics of the material being processed (such as specific density and shape), and the characteristics of the classification medium (such as specific density and viscosity). An experimental model was developed from data collected on an experimental cylindrical apparatus with a sedimentation chamber of 50 cm diameter × 40 cm height, for an emulsion of zirconium silicate in water. Using this experimental model allows determination of the optimal flow rate to obtain a product with the desired grain size, in terms of average size or size distribution function. (author)
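The Stokes'-law settling relation underlying such sizing models can be sketched directly. The material properties below are assumed round numbers for zirconium silicate in water, not values from the paper:

```python
def stokes_settling_velocity(d_m, rho_p, rho_f, mu):
    """Terminal settling velocity (m/s) of a small sphere from Stokes' law:
    v = g * d^2 * (rho_p - rho_f) / (18 * mu).
    Valid for particle Reynolds number << 1, the regime relevant to
    wet sizing of fine powders."""
    g = 9.81
    return g * d_m**2 * (rho_p - rho_f) / (18.0 * mu)

# 10 um zirconium silicate (assumed rho ~ 4560 kg/m^3) settling in water:
v = stokes_settling_velocity(10e-6, 4560.0, 1000.0, 1.0e-3)
print(v)  # ~1.9e-4 m/s
```

Because v scales with d², matching the residence time in the separating zone to this velocity is what lets the flow rate select the cut size.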
Measurement of joint space width and erosion size
Sharp, JI; van der Heijde, D; Angwin, J; Duryea, J; Moens, HJB; Jacobs, JWG; Maillefert, JF; Strand, CV
2005-01-01
Measurement of radiographic abnormalities in metric units has been reported by several investigators during the last 15 years. Measurement of joint space in large joints has been employed in a few trials to evaluate therapy in osteoarthritis. Measurement of joint space width in small joints has been
Modeling volatility using state space models.
Timmer, J; Weigend, A S
1997-08-01
In time series problems, noise can be divided into two categories: dynamic noise which drives the process, and observational noise which is added in the measurement process, but does not influence future values of the system. In this framework, we show that empirical volatilities (the squared relative returns of prices) exhibit a significant amount of observational noise. To model and predict their time evolution adequately, we estimate state space models that explicitly include observational noise. We obtain relaxation times for shocks in the logarithm of volatility ranging from three weeks (for foreign exchange) to three to five months (for stock indices). In most cases, a two-dimensional hidden state is required to yield residuals that are consistent with white noise. We compare these results with ordinary autoregressive models (without a hidden state) and find that autoregressive models underestimate the relaxation times by about two orders of magnitude since they do not distinguish between observational and dynamic noise. This new interpretation of the dynamics of volatility in terms of relaxators in a state space model carries over to stochastic volatility models and to GARCH models, and is useful for several problems in finance, including risk management and the pricing of derivative securities. Data sets used: Olsen & Associates high frequency DEM/USD foreign exchange rates (8 years). Nikkei 225 index (40 years). Dow Jones Industrial Average (25 years).
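The mapping from an estimated autoregressive coefficient to a relaxation time can be sketched directly; shocks in an AR(1) decay geometrically, so small differences in the coefficient, such as those induced by ignoring observational noise, change the implied relaxation time dramatically. The coefficients below are illustrative, not the paper's estimates:

```python
import math

def relaxation_time(phi, dt_days=1.0):
    """Relaxation time of an AR(1) process (or the AR part of a state
    space model): shocks decay as phi**t, so tau = -dt / ln(phi)."""
    return -dt_days / math.log(phi)

# A daily AR coefficient of 0.99 implies a ~100-day relaxation time,
# while 0.95 implies ~19.5 days.
t99 = relaxation_time(0.99)
t95 = relaxation_time(0.95)
print(t99, t95)
```

This is the mechanism behind the paper's finding: autoregressive fits that lump observational noise into the dynamics bias phi downward and so underestimate tau by orders of magnitude.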
A Descriptive Evaluation of Software Sizing Models
1987-09-01
[Report contents residue: sections on SPQR Sizer/FP; QSM Size Planner: Function Points; feature characteristics; application of the SPQR SIZER/FP approach; an SPQR function point estimate for the CATSS sensitivity model; ASSET-R.]
The Space Laser Business Model
2005-01-01
Creating long-duration, high-powered lasers for satellites that can withstand the type of optical misalignment and damage dished out by the unforgiving environment of space is work that is unique to NASA. It is complicated, specific work, where each step forward is into uncharted territory. In the 1990s, as this technology was first being created, NASA gave free rein to a group of "laser jocks" to develop their own business model and supply the Space Agency with the technology it needed. It was still to be a part of NASA as a division of Goddard Space Flight Center, but would operate independently out of a remote office. The idea for this satellite laboratory was based on the Skunk Works concept at the Lockheed Corporation (now Lockheed Martin). In 1943, the aerospace firm, realizing that the type of advanced research it needed done could not be performed within the confines of a larger company, allowed a group of researchers and engineers to essentially run their own microbusiness without the corporate oversight. The Skunk Works project, in Burbank, California, produced America's first jet fighter, the world's most successful spy plane (U-2), the first 3-times-the-speed-of-sound surveillance aircraft, and the F-117A Nighthawk Stealth Fighter. Boeing followed suit with its Phantom Works, an advanced research and development branch of the company that operates independently of the larger unit and is responsible for a great deal of its most cutting-edge research. NASA's version of this advanced business model was the Space Lidar Technology Center (SLTC), just south of Goddard in College Park, Maryland. Established in 1998 under a Cooperative Agreement between Goddard and the University of Maryland's A. James Clark School of Engineering, it was a high-tech laser shop where a small group of specialists, never more than 20 employees, worked all hours of the day and night to create the cutting-edge technology the Agency required of them. Drs
Analyzing Damping Vibration Methods of Large-Size Space Vehicles in the Earth's Magnetic Field
Directory of Open Access Journals (Sweden)
G. A. Shcheglov
2016-01-01
It is known that most of today's space vehicles comprise large antennas that are bracket-attached to the vehicle body. Reflector antennas may measure 30-50 m across, and such structures can weigh approximately 200 kg. Since the antenna dimensions are significantly larger than the vehicle body and the bracket attachment points have low stiffness, conventional dampers may be inefficient. The paper proposes to damp the antenna through its interaction with the Earth's magnetic field. A simple dynamic model of a space vehicle equipped with a large-size structure is built: the vehicle is a parallelepiped to which the antenna is attached through a beam. To solve the model problems, a simplified model of the Earth's magnetic field was used: uniform, with field lines parallel to each other and perpendicular to the plane of the antenna. The paper considers two layouts of coils with respect to the antenna, namely a vertical one, in which the axis of the magnetic dipole is perpendicular to the antenna plane, and a horizontal one, in which the axis of the magnetic dipole lies in the antenna plane. It also explores two ways of magnetically damping the oscillations: through a controlled current supplied from the power supply system of the space vehicle, and through the self-induction current in the coil. Thus, four tasks were formulated. For each task an oscillation equation was formulated, and the ratio of oscillation amplitudes and the decay time were estimated. It was found that each task requires certain parameters, either of the antenna itself (its dimensions and moment of inertia) or of the coil and, respectively, the current supplied from the space vehicle. For these parameters, ranges were found in each task that allow efficient damping of vibrations. The conclusion that can be drawn from the analysis of these tasks is that a specialized control system
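Each of the damping tasks above reduces to a damped oscillation equation for the antenna angle. As a hedged illustration (the parameter values below are hypothetical placeholders, not taken from the paper), the generic underdamped solution and its decay time can be sketched as:

```python
import math

def damped_oscillation(omega, zeta, theta0, t):
    """Underdamped solution of theta'' + 2*zeta*omega*theta' + omega^2*theta = 0,
    the generic form of an antenna oscillation equation: omega comes from the
    structural stiffness, zeta from the (magnetic) damping torque."""
    wd = omega * math.sqrt(1.0 - zeta**2)   # damped natural frequency
    return theta0 * math.exp(-zeta * omega * t) * math.cos(wd * t)

# Hypothetical parameters: decay to ~5% amplitude takes roughly 3/(zeta*omega).
omega, zeta = 0.5, 0.02            # rad/s, dimensionless damping ratio
t_decay = 3.0 / (zeta * omega)
assert abs(damped_oscillation(omega, zeta, 1.0, t_decay)) < 0.05
```

The quantity 3/(zeta*omega) is the usual settling-time estimate; comparing it across coil layouts is one way to rank damping efficiency.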
My Life with State Space Models
DEFF Research Database (Denmark)
Lundbye-Christensen, Søren
2007-01-01
The conceptual idea behind the state space model is that the evolution over time in the object we are observing and the measurement process itself are modelled separately. My very first serious analysis of a data set was done using a state space model, and since then I seem to have been "haunted" by state space...
Regional climate model sensitivity to domain size
Energy Technology Data Exchange (ETDEWEB)
Leduc, Martin [Universite du Quebec a Montreal, Canadian Regional Climate Modelling and Diagnostics (CRCMD) Network, ESCER Centre, Montreal (Canada); UQAM/Ouranos, Montreal, QC (Canada); Laprise, Rene [Universite du Quebec a Montreal, Canadian Regional Climate Modelling and Diagnostics (CRCMD) Network, ESCER Centre, Montreal (Canada)
2009-05-15
Regional climate models are increasingly used to add small-scale features that are not present in their lateral boundary conditions (LBC). It is well known that the limited area over which a model is integrated must be large enough to allow the full development of small-scale features. On the other hand, integrations on very large domains have shown important departures from the driving data, unless large-scale nudging is applied. The issue of domain size is studied here by using the "perfect model" approach. This method consists first of generating a high-resolution climatic simulation, nicknamed big brother (BB), over a large domain of integration. The next step is to degrade this dataset with a low-pass filter emulating the usual coarse-resolution LBC. The filtered nesting data (FBB) are hence used to drive a set of four simulations (LBs, for Little Brothers), with the same model, but on progressively smaller domain sizes. The LB statistics for a climate sample of four winter months are compared with BB over a common region. The time-average (stationary) and transient-eddy standard deviation patterns of the LB atmospheric fields generally improve in terms of spatial correlation with the reference (BB) when the domain gets smaller. The extraction of the small-scale features by using a spectral filter allows detecting important underestimations of the transient-eddy variability in the vicinity of the inflow boundary, which can penalize the use of small domains (less than 100 x 100 grid points). The permanent "spatial spin-up" corresponds to the characteristic distance that the large-scale flow needs to travel before developing small-scale features. The spin-up distance tends to grow in size at higher levels in the atmosphere. (orig.)
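The filtering step of the big-brother protocol, degrading the high-resolution field into coarse driving data, can be sketched with a spectral low-pass filter; the grid size and cutoff below are illustrative placeholders, not the study's values:

```python
import numpy as np

def low_pass_filter(field, cutoff_fraction=0.2):
    """Emulate coarse-resolution driving data by removing small scales:
    keep only spectral components below `cutoff_fraction` of the Nyquist
    wavenumber, zeroing the rest."""
    f = np.fft.fft2(field)
    ny, nx = field.shape
    ky = np.fft.fftfreq(ny)[:, None]       # cycles per grid point
    kx = np.fft.fftfreq(nx)[None, :]
    k = np.sqrt(kx**2 + ky**2)
    f[k > 0.5 * cutoff_fraction] = 0.0     # Nyquist frequency is 0.5
    return np.real(np.fft.ifft2(f))

# A synthetic "big brother" field: one large-scale wave plus small-scale noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 128)
bb = np.sin(x)[None, :] * np.sin(x)[:, None] + 0.1 * rng.standard_normal((128, 128))
fbb = low_pass_filter(bb)

# The filtered field keeps the large-scale pattern but loses the noise variance.
assert fbb.var() < bb.var()
```

In the study the filtered field then drives the little-brother runs; here the point is only that the small-scale variance is removed while the resolved large scale survives.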
Sample sizes and model comparison metrics for species distribution models
B.B. Hanberry; H.S. He; D.C. Dey
2012-01-01
Species distribution models use small samples to produce continuous distribution maps. The question of how small a sample can be to produce an accurate model generally has been answered based on comparisons to maximum sample sizes of 200 observations or fewer. In addition, model comparisons often are made with the kappa statistic, which has become controversial....
A deployable mechanism concept for the collection of small-to-medium-size space debris
St-Onge, David; Sharf, Inna; Sagnières, Luc; Gosselin, Clément
2018-03-01
Current efforts in active debris removal strategies and mission planning focus on removing the largest, most massive debris. It can be argued, however, that small untrackable debris, specifically those smaller than 5 cm in size, also pose a serious threat. In this work, we propose and analyze a mission to sweep the most crowded Low Earth Orbit with a large cupola device to remove small-to-medium-size debris. The cupola consists of a deployable mechanism expanding to more than 25 times its storage size to extend a membrane covering its surface. The membrane is sufficiently stiff to capture most small debris and to slow down the medium-size objects, thus accelerating their fall. An overview of the design of a belt-driven rigid-link mechanism proposed to support the collecting cupola surface is presented, based on our previous work. Because of its large size, the cupola will be subject to significant aerodynamic drag; thus, orbit maintenance analysis is carried out using the DTM-2013 atmospheric density model, and it predicts feasible requirements. While in operation, the device will also be subject to numerous hyper-velocity impacts which may significantly perturb its orientation from the desired attitude for debris collection. Thus, another important feature of the proposed debris removal device is a distributed array of flywheels mounted on the cupola for reorienting and stabilizing its attitude during the mission. Analysis using a stochastic modeling framework for hyper-velocity impacts demonstrates that three-axis attitude stabilization is achievable with the flywheel array. MASTER-2009 software is employed to provide relevant data for all debris-related estimates, including the debris fluxes for the baseline mission design and for assessment of its expected performance. Space debris removal is a high priority for ensuring sustainability of space and continual launch and operation of man-made space assets. This manuscript presents the first analysis of a small
Modeling Space Radiation with Bleomycin
National Aeronautics and Space Administration — Space radiation is a mixed field of solar particle events (proton) and particles of Galactic Cosmic Rays (GCR) with different energy levels. These radiation events...
Modeling nonstationarity in space and time.
Shand, Lyndsay; Li, Bo
2017-09-01
We propose to model a spatio-temporal random field that has nonstationary covariance structure in both space and time domains by applying the concept of the dimension expansion method in Bornn et al. (2012). Simulations are conducted for both separable and nonseparable space-time covariance models, and the model is also illustrated with a streamflow dataset. Both simulation and data analyses show that modeling nonstationarity in both space and time can improve the predictive performance over stationary covariance models or models that are nonstationary in space but stationary in time. © 2017, The International Biometric Society.
Fundamental study on the size and inter-key spacing of numeric keys for touch screen.
Harada, H; Katsuura, T; Kikuchi, Y
1996-12-01
The purpose of this study was to reveal the optimum size and inter-key spacing of numeric square keys for touch screens. Six male students (22-25 years old) and three female students (21-24 years old) volunteered as subjects for this experiment. Each subject performed data entry tasks using numeric square keys on touch devices. The key sizes were 6, 12, 21, 30 and 39 mm, and the inter-key spacings were 0, 3, 6, 12 and 21 mm. Response times with key sizes of 6 and 12 mm were significantly slower than with key sizes of 21 and 30 mm. The results suggest that the optimum key size for touch screens is 21 mm or more and that the optimum inter-key spacing is 3 to 6 mm. Optimum key size, however, must be selected with regard to the limitation of screen size.
Size effects in foams : Experiments and modeling
Tekoglu, C.; Gibson, L. J.; Pardoen, T.; Onck, P. R.
Mechanical properties of cellular solids depend on the ratio of the sample size to the cell size at length scales where the two are of the same order of magnitude. Considering that the cell size of many cellular solids used in engineering applications is between 1 and 10 mm, it is not uncommon to
Pump Component Model in SPACE Code
International Nuclear Information System (INIS)
Kim, Byoung Jae; Kim, Kyoung Doo
2010-08-01
This technical report describes the pump component model in the SPACE code. A literature survey was made of pump models in existing system codes. The models embedded in the SPACE code were examined to check for conflicts with intellectual property rights. Design specifications, computer coding implementation, and test results are included in this report.
A Hybrid 3D Indoor Space Model
Directory of Open Access Journals (Sweden)
A. Jamali
2016-10-01
GIS integrates spatial information and spatial analysis. An important example of such integration is emergency response, which requires route planning inside and outside of a building. Route planning requires detailed information on the indoor and outdoor environment. Indoor navigation network models, including the Geometric Network Model (GNM), the Navigable Space Model, the sub-division model and the regular-grid model, lack indoor data sources and abstraction methods. In this paper, a hybrid indoor space model is proposed. In the proposed method, 3D modeling of the indoor navigation network is based on surveying control points and is less dependent on a 3D geometrical building model. This research proposes a method of indoor space modeling for buildings which do not have proper 2D/3D geometrical models or which lack semantic or topological information. The proposed hybrid model consists of topological, geometrical and semantic spaces.
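Route planning over a geometric network model of this kind reduces to shortest-path search on a weighted graph whose nodes are surveyed points. A minimal sketch using Dijkstra's algorithm; the building layout, node names and distances are invented for illustration:

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra over an indoor network: nodes are control points, edge
    weights are corridor/stair lengths in metres."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [goal], goal
    while node != start:                  # walk predecessors back to start
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

# Hypothetical indoor network: lobby -> stairs/lift -> corridor -> room.
g = {"lobby": [("stairs", 12.0), ("lift", 8.0)],
     "stairs": [("corridor", 5.0)],
     "lift": [("corridor", 11.0)],
     "corridor": [("room", 4.0)]}
path, length = shortest_route(g, "lobby", "room")
# path == ["lobby", "stairs", "corridor", "room"], length == 21.0
```

A hybrid model as proposed in the paper would attach semantics (room use, access restrictions) to these nodes and edges; the graph search itself is unchanged.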
Kotik, A.; Usyukin, V.; Vinogradov, I.; Arkhipov, M.
2017-11-01
The realization of astrophysical research requires the development of high-sensitivity centimeter-band parabolic space radio telescopes (SRT) with large-size mirrors. Structurally, an SRT with a mirror larger than 10 m can be realized as a deployable rigid structure. Mesh structures of such size do not provide the reflecting-surface accuracy necessary for centimeter-band observations. Such a telescope with a 10 m diameter mirror is now being developed in Russia within the "SPECTR-R" program. The external dimensions of the telescope exceed those of existing thermo-vacuum chambers used to verify the reflecting-surface accuracy of an SRT under the action of space environment factors. Numerical simulation therefore becomes the basis required to accept the chosen designs. Such modeling should build on experimental characterization of the basic structural materials and elements of the future reflector. This article considers computational modeling of the reflecting-surface deviations of a large-size deployable centimeter-band space reflector during its orbital operation. The factors that determine the deviations are analyzed, both deterministic (temperature fields) and non-deterministic (telescope manufacturing and installation faults; deformations caused by the behavior of composite materials in space). A finite-element model and a complex of methods are developed that allow computational modeling of the reflecting-surface deviations caused by all of these factors and account for deviation correction by the spacecraft orientation system. The results of modeling for two modes of operation (orientation toward the Sun) of the SRT are presented.
Preliminary Cost Model for Space Telescopes
Stahl, H. Philip; Prince, F. Andrew; Smart, Christian; Stephens, Kyle; Henrichs, Todd
2009-01-01
Parametric cost models are routinely used to plan missions, compare concepts and justify technology investments. However, great care is required. Some space telescope cost models, such as those based only on mass, lack sufficient detail to support such analysis and may lead to inaccurate conclusions. Similarly, using ground-based telescope models which include the dome cost will also lead to inaccurate conclusions. This paper reviews current and historical models. Then, based on data from 22 different NASA space telescopes, this paper tests those models and presents preliminary analysis of single- and multi-variable space telescope cost models.
Size and complexity in model financial systems
Arinaminpathy, Nimalan; Kapadia, Sujit; May, Robert M.
2012-01-01
The global financial crisis has precipitated an increasing appreciation of the need for a systemic perspective toward financial stability. For example: What role do large banks play in systemic risk? How should capital adequacy standards recognize this role? How is stability shaped by concentration and diversification in the financial system? We explore these questions using a deliberately simplified, dynamic model of a banking system that combines three different channels for direct transmission of contagion from one bank to another: liquidity hoarding, asset price contagion, and the propagation of defaults via counterparty credit risk. Importantly, we also introduce a mechanism for capturing how swings in “confidence” in the system may contribute to instability. Our results highlight that the importance of relatively large, well-connected banks in system stability scales more than proportionately with their size: the impact of their collapse arises not only from their connectivity, but also from their effect on confidence in the system. Imposing tougher capital requirements on larger banks than smaller ones can thus enhance the resilience of the system. Moreover, these effects are more pronounced in more concentrated systems, and continue to apply, even when allowing for potential diversification benefits that may be realized by larger banks. We discuss some tentative implications for policy, as well as conceptual analogies in ecosystem stability and in the control of infectious diseases. PMID:23091020
Near-Earth Space Radiation Models
Xapsos, Michael A.; O'Neill, Patrick M.; O'Brien, T. Paul
2012-01-01
A review of models of the near-Earth space radiation environment is presented, including recent developments in trapped proton and electron models, galactic cosmic ray models, and solar particle event models geared toward spacecraft electronics applications.
Zooplankton size selection relative to gill raker spacing in rainbow trout
Budy, P.; Haddix, T.; Schneidervin, R.
2005-01-01
Rainbow trout Oncorhynchus mykiss are one of the most widely stocked salmonids worldwide, often based on the assumption that they will effectively utilize abundant invertebrate food resources. We evaluated the potential for feeding morphology to affect prey selection by rainbow trout using a combination of laboratory feeding experiments and field observations in Flaming Gorge Reservoir, Utah-Wyoming. For rainbow trout collected from the reservoir, inter-gill raker spacing averaged 1.09 mm and there was low variation among fish overall (SD = 0.28). Ninety-seven percent of all zooplankton observed in the diets of rainbow trout collected in the reservoir were larger than the interraker spacing, while only 29% of the zooplankton found in the environment were larger than the interraker spacing. Over the size range of rainbow trout evaluated here (200-475 mm), interraker spacing increased moderately with increasing fish length; however, the size of zooplankton found in the diet did not increase with increasing fish length. In laboratory experiments, rainbow trout consumed the largest zooplankton available; the mean size of zooplankton observed in the diets was significantly larger than the mean size of zooplankton available. Electivity indices for both laboratory and field observations indicated strong selection for larger-sized zooplankton. The size threshold at which electivity switched from selection against smaller-sized zooplankton to selection for larger-sized zooplankton closely corresponded to the mean interraker spacing for both groups (≈1-1.2 mm). The combination of results observed here indicates that rainbow trout morphology limits the retention of different-sized zooplankton prey and reinforces the importance of understanding how effectively rainbow trout can utilize the type and sizes of different prey available in a given system. These considerations may improve our ability to predict the potential for growth and survival of rainbow trout within and
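Electivity indices of the kind used here compare the proportion of a prey class in the diet with its proportion in the environment. A minimal sketch using Ivlev's index (which specific index the study used is an assumption; the numbers echo the abstract's 97% vs. 29% figures):

```python
def ivlev_electivity(r, p):
    """Ivlev's electivity index E = (r - p) / (r + p), where r is the
    proportion of a prey class in the diet and p its proportion in the
    environment. E > 0 indicates selection for that class, E < 0
    selection against it."""
    return (r - p) / (r + p)

# 97% of diet items vs 29% of environment items above the ~1.1 mm
# inter-raker spacing implies strong positive selection for large prey.
e_large = ivlev_electivity(0.97, 0.29)   # ≈ 0.54
e_small = ivlev_electivity(0.03, 0.71)   # ≈ -0.92, selection against
```

The sign change of the index across the ~1-1.2 mm threshold is exactly the pattern the abstract links to the mean inter-raker spacing.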
Size Evolution and Stochastic Models: Explaining Ostracod Size through Probabilistic Distributions
Krawczyk, M.; Decker, S.; Heim, N. A.; Payne, J.
2014-12-01
The biovolume of animals has functioned as an important benchmark for measuring evolution throughout geologic time. In our project, we examined the observed average body size of ostracods over time in order to understand the mechanism of size evolution in these marine organisms. The body size of ostracods has varied since the beginning of the Ordovician, when the first true ostracods appeared. We created a stochastic branching model to generate possible evolutionary trees of ostracod size. Using stratigraphic ranges for ostracods compiled from over 750 genera in the Treatise on Invertebrate Paleontology, we calculated overall speciation and extinction rates for our model. At each time step in our model, new lineages can evolve or existing lineages can become extinct. Newly evolved lineages are assigned sizes based on their parent genera. We parameterized our model to generate neutral and directional changes in ostracod size to compare with the observed data. New sizes were chosen via a normal distribution: the neutral model selected size differentials centered on zero, allowing an equal chance of larger or smaller ostracods at each speciation, whereas the directional model centered the distribution on a negative value, giving a larger chance of smaller ostracods. Our data strongly suggest that ostracod evolution has followed a model that directionally pushes mean ostracod size down, rather than a neutral model. Our model matched the magnitude of the size decrease but produced a constant linear decline, whereas the observed data show a much more rapid initial decrease followed by a roughly constant size. This nuance in the observed trends ultimately suggests a more complex mechanism of size evolution. In conclusion, probabilistic methods can provide valuable insight into the possible evolutionary mechanisms determining size evolution in ostracods.
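The branching scheme described above can be sketched as follows; the speciation and extinction probabilities, drift, and spread are illustrative placeholders, not the rates calibrated from the Treatise data:

```python
import random
from statistics import mean

def simulate_sizes(n_steps=200, p_speciate=0.1, p_extinct=0.08,
                   drift=0.0, sd=0.1, seed=1):
    """Stochastic branching model of (log) body size.

    At each step every lineage may go extinct or speciate; a daughter
    lineage's size is drawn from a normal distribution centred on the
    parent's size plus `drift`. drift = 0 is the neutral model; a
    negative drift is the directional model favouring smaller taxa.
    """
    rng = random.Random(seed)
    sizes = [0.0]                      # log-size of the founding lineage
    for _ in range(n_steps):
        next_gen = []
        for s in sizes:
            if rng.random() < p_extinct:
                continue               # lineage dies out
            next_gen.append(s)
            if rng.random() < p_speciate:
                next_gen.append(rng.gauss(s + drift, sd))
        sizes = next_gen or [0.0]      # re-seed if the whole clade dies
    return sizes

neutral = simulate_sizes(drift=0.0)
directional = simulate_sizes(drift=-0.05)
# With the same seed the two runs share a tree topology, so the negative
# drift can only push the mean size down relative to the neutral run.
```

Comparing the simulated mean-size trajectory against the fossil record is then a matter of repeating this over many seeds and averaging.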
Simulation of finite size effects of the fiber bundle model
Hao, Da-Peng; Tang, Gang; Xun, Zhi-Peng; Xia, Hui; Han, Kui
2018-01-01
In theory, the macroscopic fracture of materials should correspond to the thermodynamic limit of the fiber bundle model. However, simulating a fiber bundle model of infinite size is unrealistic. To study the finite-size effects of the fiber bundle model, fiber bundle models of various sizes are simulated in detail. The effects of system size on the constitutive behavior, critical stress, maximum avalanche size, avalanche size distribution, and number of load-increase steps are explored. The simulation results imply that there is no characteristic or cutoff size for the macroscopic mechanical and statistical properties of the model. The constitutive curves near macroscopic failure for various system sizes collapse well under a simple scaling relationship. Simultaneously, the introduction of a simple extrapolation method facilitates more accurate simulation results in the large-size limit, which are better for comparison with theoretical results.
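The equal-load-sharing fiber bundle model with uniformly distributed failure thresholds is the standard reference case for such finite-size studies (the abstract does not state the threshold distribution, so this variant is an assumption). Its critical stress can be computed directly by sorting the thresholds:

```python
import numpy as np

def fbm_critical_stress(n_fibers, seed=0):
    """Equal-load-sharing fiber bundle model with uniform(0,1) thresholds.

    Returns the maximum stress per initial fiber that the bundle can
    sustain under a quasi-statically increasing external load."""
    rng = np.random.default_rng(seed)
    thresholds = np.sort(rng.uniform(0.0, 1.0, n_fibers))
    # After the k weakest fibers fail, the survivors share the load
    # equally, so the bundle-level stress at the k-th failure is
    # thresholds[k] * (n_fibers - k) / n_fibers.
    k = np.arange(n_fibers)
    stress = thresholds * (n_fibers - k) / n_fibers
    return stress.max()

# Finite-size effect: the critical stress approaches the thermodynamic
# limit sigma_c = 1/4 as the bundle size grows.
sigma_small = fbm_critical_stress(100)
sigma_large = fbm_critical_stress(100_000)
```

Running this for a geometric ladder of sizes and extrapolating against system size is precisely the kind of finite-size analysis the abstract describes.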
Modeling beams with elements in phase space
International Nuclear Information System (INIS)
Nelson, E.M.
1998-01-01
Conventional particle codes represent beams as a collection of macroparticles. An alternative is to represent the beam as a collection of current-carrying elements in phase space. While such a representation has limitations, it may be less noisy than a macroparticle model, and it may provide insights about the transport of space-charge-dominated beams which would otherwise be difficult to gain from macroparticle simulations. The phase space element model of a beam is described, and progress toward an implementation and difficulties with this implementation are discussed. A simulation of an axisymmetric beam using 1D elements in phase space is demonstrated.
Modelling the effect of size-asymmetric competition on size inequality
DEFF Research Database (Denmark)
Rasmussen, Camilla Ruø; Weiner, Jacob
2017-01-01
The concept of size asymmetry in resource competition among plants, in which larger individuals obtain a disproportionate share of contested resources, appears to be very straightforward, but the effects of size asymmetry on growth and size variation among individuals have proved to be controversial. It has often been assumed that competition among individual plants in a population has to be size-asymmetric to result in higher size inequality than in the absence of competition, but here we question this inference. Using very simple, individual-based models, we investigate how size symmetry of competition affects the development of size inequality between two competing plants and show that increased size inequality due to competition is not always strong evidence for size-asymmetric competition. Even absolutely symmetric competition, in which all plants receive the same amount of resources...
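A minimal two-plant sketch of this kind of individual-based model: the contested resource is split in proportion to size raised to an asymmetry exponent theta (the exponent form and the growth rule are illustrative assumptions, not the paper's exact model):

```python
def grow(sizes, theta, resource=1.0, steps=50, rate=0.1):
    """Plants competing for a shared resource.

    Each step the contested resource is divided in proportion to
    size**theta: theta = 1 gives perfectly size-proportional (relatively
    symmetric) shares; larger theta makes competition increasingly
    size-asymmetric. Growth is proportional to the resource obtained."""
    sizes = list(sizes)
    for _ in range(steps):
        total = sum(s**theta for s in sizes)
        sizes = [s + rate * resource * s**theta / total for s in sizes]
    return sizes

small_s, large_s = grow([1.0, 2.0], theta=1.0)   # size-symmetric shares
small_a, large_a = grow([1.0, 2.0], theta=3.0)   # strongly asymmetric
# Under size-proportional shares the 2:1 ratio is preserved exactly;
# the asymmetric exponent drives the ratio (one simple measure of size
# inequality) upward over time.
```

Varying theta between 0 (absolute symmetry) and values above 1 reproduces the qualitative spectrum of competition modes the abstract discusses.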
Forward modeling of space-borne gravitational wave detectors
International Nuclear Information System (INIS)
Rubbo, Louis J.; Cornish, Neil J.; Poujade, Olivier
2004-01-01
Planning is underway for several space-borne gravitational wave observatories to be built in the next 10 to 20 years. Realistic and efficient forward modeling will play a key role in the design and operation of these observatories. Space-borne interferometric gravitational wave detectors operate very differently from their ground-based counterparts. Complex orbital motion, virtual interferometry, and finite size effects complicate the description of space-based systems, while nonlinear control systems complicate the description of ground-based systems. Here we explore the forward modeling of space-based gravitational wave detectors and introduce an adiabatic approximation to the detector response that significantly extends the range of the standard low frequency approximation. The adiabatic approximation will aid in the development of data analysis techniques, and improve the modeling of astrophysical parameter extraction
Size of the lower third molar space in relation to age in Serbian population.
Zelić, Ksenija; Nedeljković, Nenad
2013-10-01
It is considered that the shortage of space is the major cause of third molar impaction. The aim of this study was to establish the frequency of insufficient lower third molar eruption space in the Serbian population, to examine differences in this frequency between subjects of different ages, to determine the influence of the size of the lower third molar (retromolar) space on third molar eruption, and to investigate a possible correlation between the size of the gonial angle and the space/third-molar-width ratio. Digital orthopantomograms were taken from 93 patients divided into two groups: early adult (16-18 years of age) and adult (18-26). Retromolar space, mesiodistal third molar crown width, gonial angle and eruption levels were measured. The space/third-molar-width ratio in early adult subjects was significantly smaller, and third molars erupted when there was enough space in both age groups; the space/third-molar-width ratio is more favorable in adult subjects. The gonial angle is not correlated with the retromolar space/third-molar-width ratio.
On discrete models of space-time
International Nuclear Information System (INIS)
Horzela, A.; Kempczynski, J.; Kapuscik, E.; Georgia Univ., Athens, GA; Uzes, Ch.
1992-02-01
Analyzing the Einstein radiolocation method we come to the conclusion that results of any measurement of space-time coordinates should be expressed in terms of rational numbers. We show that this property is Lorentz invariant and may be used in the construction of discrete models of space-time different from the models of the lattice type constructed in the process of discretization of continuous models. (author)
Efficient Neural Network Modeling for Flight and Space Dynamics Simulation
Directory of Open Access Journals (Sweden)
Ayman Hamdy Kassem
2011-01-01
This paper presents an efficient technique for neural network modeling of flight and space dynamics simulation. The technique frees the neural network designer from guessing the size and structure of the required neural network model and helps to minimize the number of neurons. For linear flight/space dynamics systems, the technique can find the network weights and biases directly by solving a system of linear equations, without the need for training. Nonlinear flight dynamics systems can be easily modeled by training their linearized models while keeping the same network structure. The training is fast, as it uses knowledge of the linear system to speed up the training process. The technique is tested on different flight/space dynamics models and showed promising results.
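For the linear case, the weights of a single linear layer can indeed be obtained by solving a least-squares system rather than by iterative training. A sketch with hypothetical dynamics matrices (the A and B values are invented for illustration):

```python
import numpy as np

# For linear dynamics x_next = A x + B u, a one-layer linear "network"
# can be fitted in closed form by least squares instead of training.
rng = np.random.default_rng(42)
A = np.array([[0.99, 0.05], [-0.05, 0.99]])   # assumed state matrix
B = np.array([[0.0], [0.1]])                  # assumed input matrix

# Input/output pairs: [state, control] -> next state.
X = rng.standard_normal((200, 3))             # columns: x1, x2, u
Y = X[:, :2] @ A.T + X[:, 2:3] @ B.T

# One linear solve recovers the network weights; no training loop.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
assert np.allclose(W.T, np.hstack([A, B]))
```

For a nonlinear system, as the abstract notes, the same structure can be retained and the solved weights used as the starting point for conventional training.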
Space Vehicle Reliability Modeling in DIORAMA
Energy Technology Data Exchange (ETDEWEB)
Tornga, Shawn Robert [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-07-12
When modeling system performance of space based detection systems it is important to consider spacecraft reliability. As space vehicles age the components become prone to failure for a variety of reasons such as radiation damage. Additionally, some vehicles may lose the ability to maneuver once they exhaust fuel supplies. Typically failure is divided into two categories: engineering mistakes and technology surprise. This document will report on a method of simulating space vehicle reliability in the DIORAMA framework.
Emulating a flexible space structure: Modeling
Waites, H. B.; Rice, S. C.; Jones, V. L.
1988-01-01
Control Dynamics, in conjunction with Marshall Space Flight Center, has participated in the modeling and testing of Flexible Space Structures. Through the series of configurations tested and the many techniques used for collecting, analyzing, and modeling the data, many valuable insights have been gained and important lessons learned. This paper discusses the background of the Large Space Structure program, Control Dynamics' involvement in testing and modeling of the configurations (especially the Active Control Technique Evaluation for Spacecraft (ACES) configuration), the results from these two processes, and insights gained from this work.
A lateral cephalometric study of the size of tongue and intermaxillary space in Korean
International Nuclear Information System (INIS)
Lee, Sang Rae
1977-01-01
A study was performed to investigate the size of the tongue area and the intermaxillary space area, and to compare sexual differences between normal Korean children and adults, using planimetric and linear analysis of lateral cephalograms. The cephalograms comprised 41 boys (mean age 10.8 years), 40 girls (10.5 years), 38 adult males (21.3 years) and 40 adult females (20.8 years). In order to measure the intermaxillary space area, the following reference items were selected: occlusal plane, anterior intermaxillary space height, posterior intermaxillary space height, and length of intermaxillary space. The anterior and posterior intermaxillary space heights were measured perpendicular to the maxillary plane. An index, ((anterior intermaxillary space height + posterior intermaxillary space height)/2) x length of intermaxillary space, was introduced for the calculation of the intermaxillary space area. The tongue area was plotted from the outline of the tongue shadow, above a line extending from the vallecula to the most anterior point on the hyoid body, and above a line from the most anterior point of the hyoid body to the menton. The obtained results were as follows: 1. In general, the measurements of males were larger than those of females in intermaxillary space area in both the childhood and adulthood groups, but the sexual difference was statistically significant only in the adulthood group. 2. In both groups, the tongue area of males was larger than that of females, and these sexual differences were statistically significant in both age groups. 3. Considerable growth changes between the childhood and adulthood groups were revealed in both intermaxillary space area and tongue area, and the tongue tended to become relatively smaller compared with the intermaxillary space in both sexes.
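The index is a trapezoidal area estimate: the mean of the two heights multiplied by the length. A worked example with hypothetical measurements (the millimetre values are illustrative, not from the study):

```python
def intermaxillary_space_area(anterior_h, posterior_h, length):
    """Trapezoidal estimate used by the study's index:
    area = ((anterior height + posterior height) / 2) * length."""
    return (anterior_h + posterior_h) / 2.0 * length

# Hypothetical measurements in millimetres:
area = intermaxillary_space_area(30.0, 40.0, 50.0)
# (30 + 40)/2 * 50 = 1750.0 mm^2
```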
A lateral cephalometric study of the size of tongue and intermaxillary space in Korean
Energy Technology Data Exchange (ETDEWEB)
Lee, Sang Rae [Department of Dental Radiology, College of Dentistry, Kung Hee University, Seoul (Korea, Republic of)
1977-11-15
A study was performed to investigate the size of the tongue area and the intermaxillary space area, and to compare sexual differences between normal Korean children and adults, using planimetric and linear analysis of lateral cephalograms. The cephalograms comprised 41 male children (mean age 10.8 years), 40 female children (mean age 10.5), 38 male adults (mean age 21.3), and 40 female adults (mean age 20.8). In order to measure the intermaxillary space area, the following reference items were selected: occlusal plane, anterior intermaxillary space height, posterior intermaxillary space height, and length of intermaxillary space. Among these, the anterior and posterior intermaxillary space heights were measured perpendicular to the maxillary plane. An index, [(anterior intermaxillary space height + posterior intermaxillary space height)/2] × length of intermaxillary space, was introduced for the calculation of the intermaxillary space area. The tongue area was plotted from the outline of the tongue shadow, above a line extending from the vallecula to the most anterior point on the hyoid body, and above a line from the most anterior point of the hyoid body to the menton. The results were as follows: 1. In general, male measurements of intermaxillary space area were larger than female measurements in both the childhood and adulthood groups, but the sexual difference was statistically significant only in the adulthood group. 2. In both groups, male tongue areas were larger than female tongue areas, and the sexual differences were statistically significant in both age groups. 3. Considerable growth changes between the childhood and adulthood groups were revealed in both intermaxillary space area and tongue area, and in both sexes the tongue tended to become relatively smaller compared with the intermaxillary space.
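The index above is effectively a trapezoid approximation: the mean of the two space heights multiplied by the length of the intermaxillary space. A minimal sketch of the calculation (the function name and the input values are hypothetical, not measurements from the study):

```python
def intermaxillary_space_area(anterior_height, posterior_height, length):
    """Trapezoid-style index: mean of the anterior and posterior
    intermaxillary space heights multiplied by the space length.
    Inputs in mm; returns an area index in mm^2."""
    return (anterior_height + posterior_height) / 2 * length

# Hypothetical cephalometric values (mm), for illustration only
print(intermaxillary_space_area(30.0, 40.0, 50.0))  # 1750.0
```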
3D space analysis of dental models
Chuah, Joon H.; Ong, Sim Heng; Kondo, Toshiaki; Foong, Kelvin W. C.; Yong, Than F.
2001-05-01
Space analysis is an important procedure performed by orthodontists to determine the amount of space available and required for teeth alignment during treatment planning. Traditional manual methods of space analysis are tedious and often inaccurate. Computer-based space analysis methods that work on 2D images have been reported. However, as space problems in the dental arch exist in all three planes of space, a full 3D analysis is necessary. This paper describes a visualization and measurement system that analyses 3D images of dental plaster models. Algorithms were developed to determine dental arches. The system is able to record the depths of the Curve of Spee, and to quantify space liabilities arising from a non-planar Curve of Spee, malalignment, and overjet. Furthermore, the difference between the total arch space available and the space required to arrange the teeth in ideal occlusion can be accurately computed. The system for 3D space analysis of the dental arch is an accurate, comprehensive, rapid and repeatable method of space analysis that facilitates proper orthodontic diagnosis and treatment planning.
The manifold model for space-time
International Nuclear Information System (INIS)
Heller, M.
1981-01-01
Physical processes happen on a space-time arena. It turns out that all contemporary macroscopic physical theories presuppose a common mathematical model for this arena, the so-called manifold model of space-time. The first part of the study is a heuristic introduction to the concept of a smooth manifold, starting with the intuitively clearer concepts of a curve and a surface in Euclidean space. In the second part the definitions of the C∞ manifold and of certain structures that arise naturally from the manifold concept are given. The role of the enveloping Euclidean space (i.e. the Euclidean space appearing in the manifold definition) in these definitions is stressed. The Euclidean character of the enveloping space induces local Euclidean (topological and differential) properties on the manifold. A suggestion is made that replacing the enveloping Euclidean space by a discrete non-Euclidean space would be a correct way towards the quantization of space-time. (author)
Lag space estimation in time series modelling
DEFF Research Database (Denmark)
Goutte, Cyril
1997-01-01
The purpose of this article is to investigate some techniques for finding the relevant lag-space, i.e. input information, for time series modelling. This is an important aspect of time series modelling, as it conditions the design of the model through the regressor vector a.k.a. the input layer...
Parametric Cost Models for Space Telescopes
Stahl, H. Philip; Henrichs, Todd; Dollinger, Courtney
2010-01-01
Multivariable parametric cost models for space telescopes provide several benefits to designers and space system project managers. They identify major architectural cost drivers and allow high-level design trades. They enable cost-benefit analysis for technology development investment. And, they provide a basis for estimating total project cost. A survey of historical models found that there is no definitive space telescope cost model. In fact, published models vary greatly [1]. Thus, there is a need for parametric space telescopes cost models. An effort is underway to develop single variable [2] and multi-variable [3] parametric space telescope cost models based on the latest available data and applying rigorous analytical techniques. Specific cost estimating relationships (CERs) have been developed which show that aperture diameter is the primary cost driver for large space telescopes; technology development as a function of time reduces cost at the rate of 50% per 17 years; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and increasing mass reduces cost.
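The cost-driver findings listed above suggest a simple functional form for a CER. The sketch below is a toy illustration only: the coefficient `k` and the diameter exponent are placeholders, not fitted values from the published models; only the 50%-per-17-years technology factor is taken from the abstract.

```python
def telescope_cost(aperture_m, years_since_ref, k=1.0, diameter_exp=1.7):
    """Toy cost estimating relationship (CER) sketch.

    Cost scales as a power of aperture diameter, discounted by 50%
    for every 17 years of technology development (per the abstract).
    k and diameter_exp are illustrative placeholders.
    """
    technology_factor = 0.5 ** (years_since_ref / 17.0)
    return k * aperture_m ** diameter_exp * technology_factor

# All else equal, a telescope built 17 years later costs half as much
print(telescope_cost(2.4, 17) / telescope_cost(2.4, 0))  # 0.5
```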
Parametric cost models for space telescopes
Stahl, H. Philip; Henrichs, Todd; Dollinger, Courtnay
2017-11-01
Multivariable parametric cost models for space telescopes provide several benefits to designers and space system project managers. They identify major architectural cost drivers and allow high-level design trades. They enable cost-benefit analysis for technology development investment. And, they provide a basis for estimating total project cost. A survey of historical models found that there is no definitive space telescope cost model. In fact, published models vary greatly [1]. Thus, there is a need for parametric space telescopes cost models. An effort is underway to develop single variable [2] and multi-variable [3] parametric space telescope cost models based on the latest available data and applying rigorous analytical techniques. Specific cost estimating relationships (CERs) have been developed which show that aperture diameter is the primary cost driver for large space telescopes; technology development as a function of time reduces cost at the rate of 50% per 17 years; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and increasing mass reduces cost.
Transforming community access to space science models
MacNeice, Peter; Hesse, Michael; Kuznetsova, Maria; Maddox, Marlo; Rastaetter, Lutz; Berrios, David; Pulkkinen, Antti
2012-04-01
Researching and forecasting the ever changing space environment (often referred to as space weather) and its influence on humans and their activities are model-intensive disciplines. This is true because the physical processes involved are complex, but, in contrast to terrestrial weather, the supporting observations are typically sparse. Models play a vital role in establishing a physically meaningful context for interpreting limited observations, testing theory, and producing both nowcasts and forecasts. For example, with accurate forecasting of hazardous space weather conditions, spacecraft operators can place sensitive systems in safe modes, and power utilities can protect critical network components from damage caused by large currents induced in transmission lines by geomagnetic storms.
Observational modeling of topological spaces
International Nuclear Information System (INIS)
Molaei, M.R.
2009-01-01
In this paper a model for a multi-dimensional observer using fuzzy theory is presented. A relative form of the Tychonoff theorem is proved. The notion of topological entropy is extended, and the persistence of relative topological entropy under the relative conjugate relation is proved.
An introduction to Space Weather Integrated Modeling
Zhong, D.; Feng, X.
2012-12-01
The need for a software toolkit that integrates space weather models and data is one of many challenges we face when applying the models to space weather forecasting. To meet this challenge, we have developed Space Weather Integrated Modeling (SWIM), which is capable of analysis and visualization of the results from a diverse set of space weather models. SWIM has a modular design and is written in Python, using NumPy, matplotlib, and the Visualization ToolKit (VTK). SWIM provides a data management module to read a variety of spacecraft data products and the specific data format of the Solar-Interplanetary Conservation Element/Solution Element MHD model (SIP-CESE MHD model) for the study of solar-terrestrial phenomena. Data analysis, visualization, and graphical user interface modules are also presented in a user-friendly way to run the integrated models and visualize 2-D and 3-D data sets interactively. With these tools we can analyze model results rapidly, locally or remotely: extracting data at specific locations in time-sequence data sets, plotting interplanetary magnetic field lines, multi-slicing solar wind speed, volume rendering solar wind density, animating time-sequence data sets, and comparing model results against observational data. To speed up the analysis, an in-situ visualization interface is used to support visualizing the data 'on-the-fly'. We have also accelerated some critical time-consuming analysis and visualization methods with the aid of GPUs and multi-core CPUs. We have used this tool to visualize the data of the SIP-CESE MHD model in real time, and integrated the Database Model of shock arrival, the Shock Propagation Model, the Dst forecasting model and the SIP-CESE MHD model developed by the SIGMA Weather Group at the State Key Laboratory of Space Weather/CAS.
A general model for the scaling of offspring size and adult size.
Falster, Daniel S; Moles, Angela T; Westoby, Mark
2008-09-01
Understanding evolutionary coordination among different life-history traits is a key challenge for ecology and evolution. Here we develop a general quantitative model predicting how offspring size should scale with adult size by combining a simple model for life-history evolution with a frequency-dependent survivorship model. The key innovation is that larger offspring are afforded three different advantages during ontogeny: higher survivorship per time, a shortened juvenile phase, and advantage during size-competitive growth. In this model, it turns out that size-asymmetric advantage during competition is the factor driving evolution toward larger offspring sizes. For simplified and limiting cases, the model is shown to produce the same predictions as the previously existing theory on which it is founded. The explicit treatment of different survival advantages has biologically important new effects, mainly through an interaction between total maternal investment in reproduction and the duration of competitive growth. This goes on to explain alternative allometries between log offspring size and log adult size, as observed in mammals (slope = 0.95) and plants (slope = 0.54). Further, it suggests how these differences relate quantitatively to specific biological processes during recruitment. In these ways, the model generalizes across previous theory and provides explanations for some differences between major taxa.
Perceived size and perceived direction: The interplay of the two descriptors of visual space
Czech Academy of Sciences Publication Activity Database
Šikl, Radovan; Šimeček, Michal
2011-01-01
Vol. 40, No. 8 (2011), pp. 953-961 ISSN 0301-0066 R&D Projects: GA ČR GPP407/10/P566 Institutional research plan: CEZ:AV0Z70250504 Keywords: visual space * spatial descriptors * size judgments * direction judgments * parameterization Subject RIV: AN - Psychology Impact factor: 1.313, year: 2011
Influence of sett size and spacing on yield and multiplication ratio of ...
African Journals Online (AJOL)
Ghana Journal of Agricultural Science ... and three spacings 12 cm × 12 cm, 15 cm × 15 cm, and 15 cm × 23 cm) were studied for their ... greenhouse conditions was highest for the 10 g sett class and decreased with reduction in sett size.
Automated thinning increases uniformity of in-row spacing and plant size in romaine lettuce
Low availability and high cost of farm hand labor make automated thinners a faster and cheaper alternative to hand thinning in lettuce (Lactuca sativa L.). However, the effects of this new technology on uniformity of plant spacing and size as well as crop yield are not proven. Three experiments wer...
A size-structured model of bacterial growth and reproduction.
Ellermeyer, S F; Pilyugin, S S
2012-01-01
We consider a size-structured bacterial population model in which the rate of cell growth is both size- and time-dependent and the average per capita reproduction rate is specified as a model parameter. It is shown that the model admits classical solutions. The population-level and distribution-level behaviours of these solutions are then determined in terms of the model parameters. The distribution-level behaviour is found to be different from that found in similar models of bacterial population dynamics. Rather than convergence to a stable size distribution, we find that size distributions repeat in cycles. This phenomenon is observed in similar models only under special assumptions on the functional form of the size-dependent growth rate factor. Our main results are illustrated with examples, and we also provide an introductory study of the bacterial growth in a chemostat within the framework of our model.
Modeling of space environment impact on nanostructured materials. General principles
Voronina, Ekaterina; Novikov, Lev
2016-07-01
In accordance with the resolution of the ISO TC20/SC14 WG4/WG6 joint meeting, the Technical Specification (TS) 'Modeling of space environment impact on nanostructured materials. General principles', which describes computer simulation methods for space environment impact on nanostructured materials, is being prepared. Nanomaterials surpass traditional materials for space applications in many aspects due to their unique properties associated with the nanoscale size of their constituents. This superiority in mechanical, thermal, electrical and optical properties will evidently inspire a wide range of applications in the next generation of spacecraft intended for long-term (~15-20 years) operation in near-Earth orbits and for automatic and manned interplanetary missions. Currently, ISO activity on developing standards concerning different issues of nanomaterials manufacturing and applications is quite high. Most such standards are related to the production and characterization of nanostructures; however, there are no ISO documents concerning nanomaterials behavior in different environmental conditions, including the space environment. The given TS deals with the peculiarities of the space environment impact on nanostructured materials (i.e. materials with structured objects whose size in at least one dimension lies within 1-100 nm). The basic purpose of the document is the general description of the methodology of applying computer simulation methods, which relate to different space and time scales, to the modeling of processes occurring in nanostructured materials under space environment impact. This document will emphasize the necessity of applying a multiscale simulation approach and present recommendations for the choice of the most appropriate methods (or a group of methods) for computer modeling of various processes that can occur in nanostructured materials under the influence of different space environment components. In addition, TS includes the description of possible
In-situ detection of micron-sized dust particles in near-Earth space
Gruen, E.; Zook, H. A.
1985-01-01
In situ detectors for micron sized dust particles based on the measurement of impact ionization have been flown on several space missions (Pioneer 8/9, HEOS-2 and Helios 1/2). Previous measurements of small dust particles in near-Earth space are reviewed. An instrument is proposed for the measurement of micron sized meteoroids and space debris such as solid rocket exhaust particles from on board an Earth orbiting satellite. The instrument will measure the mass, speed, flight direction and electrical charge of individually impacting debris and meteoritic particles. It is a multicoincidence detector of 1000 sq cm sensitive area and measures particle masses in the range from 10^-14 g to 10^-8 g at an impact speed of 10 km/s. The instrument is lightweight (5 kg), consumes little power (4 watts), and requires a data sampling rate of about 100 bits per second.
Preliminary Multivariable Cost Model for Space Telescopes
Stahl, H. Philip
2010-01-01
Parametric cost models are routinely used to plan missions, compare concepts and justify technology investments. Previously, the authors published two single variable cost models based on 19 flight missions. The current paper presents the development of a multi-variable space telescope cost model. The validity of previously published models is tested, cost estimating relationships which are and are not significant cost drivers are identified, and interrelationships between variables are explored.
Effective hamiltonian calculations using incomplete model spaces
International Nuclear Information System (INIS)
Koch, S.; Mukherjee, D.
1987-01-01
It appears that the danger of encountering ''intruder states'' is substantially reduced if an effective Hamiltonian formalism is developed for incomplete model spaces (IMS). In a Fock-space approach, the proof of a ''connected diagram theorem'' is fairly straightforward with exponential-type ansätze for the wave-operator W, provided the normalization chosen for W is separable. Operationally, one just needs a suitable categorization of the Fock-space operators into ''diagonal'' and ''non-diagonal'' parts, which is a generalization of the corresponding procedure for the complete model space. The formalism is applied to prototypical 2-electron systems. The calculations have been performed on the Cyber 205 super-computer. The authors paid special attention to an efficient vectorization for the construction and solution of the resulting coupled non-linear equations.
Developing Viable Financing Models for Space Tourism
Eilingsfeld, F.; Schaetzler, D.
2002-01-01
Increasing commercialization of space services and the impending release of government's control of space access promise to make space ventures more attractive. Still, many investors shy away from going into the space tourism market as long as they do not feel secure that their return expectations will be met. First and foremost, attracting investors from the capital markets requires qualifying financing models. Based on earlier research on the cost of capital for space tourism, this paper gives a brief run-through of commercial, technical and financial due diligence aspects. After that, a closer look is taken at different valuation techniques as well as alternative ways of streamlining financials. Experience from earlier ventures has shown that the high cost of capital represents a significant challenge. Thus, the sophistication and professionalism of business plans and financial models needs to be very high. Special emphasis is given to the optimization of the debt-to-equity ratio over time. The different roles of equity and debt over a venture's life cycle are explained. Based on the latter, guidelines for the design of an optimized loan structure are given. These are then applied to simulating the financial performance of a typical space tourism venture over time, including the calculation of Weighted Average Cost of Capital (WACC) and Net Present Value (NPV). Based on a concluding sensitivity analysis, the lessons learned are presented. If applied properly, these will help to make space tourism economically viable.
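The two valuation quantities named in the abstract, WACC and NPV, can be computed as follows. The capital structure, rates and cash-flow figures are illustrative assumptions, not data from any actual venture:

```python
def wacc(equity, debt, cost_equity, cost_debt, tax_rate):
    """Weighted Average Cost of Capital: the blend of equity and
    after-tax debt costs, weighted by the capital structure."""
    total = equity + debt
    return (equity / total) * cost_equity + (debt / total) * cost_debt * (1 - tax_rate)

def npv(rate, cash_flows):
    """Net Present Value; cash_flows[0] occurs at t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical venture: 40/60 equity-to-debt split, illustrative rates
r = wacc(equity=40e6, debt=60e6, cost_equity=0.18, cost_debt=0.08, tax_rate=0.30)
print(round(r, 4))  # 0.1056

# Initial outlay followed by four years of operating cash flows
print(npv(r, [-100e6, 30e6, 40e6, 50e6, 60e6]) > 0)  # True
```

Because debt is cheaper than equity after tax, shifting the mix toward debt lowers the WACC in this sketch, which is one reason the paper emphasizes optimizing the debt-to-equity ratio over the venture's life cycle.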
Sample Size Determination for Rasch Model Tests
Draxler, Clemens
2010-01-01
This paper is concerned with supplementing statistical tests for the Rasch model so that additionally to the probability of the error of the first kind (Type I probability) the probability of the error of the second kind (Type II probability) can be controlled at a predetermined level by basing the test on the appropriate number of observations.…
Hierarchical modeling of cluster size in wildlife surveys
Royle, J. Andrew
2008-01-01
Clusters or groups of individuals are the fundamental unit of observation in many wildlife sampling problems, including aerial surveys of waterfowl, marine mammals, and ungulates. Explicit accounting of cluster size in models for estimating abundance is necessary because detection of individuals within clusters is not independent and detectability of clusters is likely to increase with cluster size. This induces a cluster size bias in which the average cluster size in the sample is larger than in the population at large. Thus, failure to account for the relationship between detectability and cluster size will tend to yield a positive bias in estimates of abundance or density. I describe a hierarchical modeling framework for accounting for cluster-size bias in animal sampling. The hierarchical model consists of models for the observation process conditional on the cluster size distribution and the cluster size distribution conditional on the total number of clusters. Optionally, a spatial model can be specified that describes variation in the total number of clusters per sample unit. Parameter estimation, model selection, and criticism may be carried out using conventional likelihood-based methods. An extension of the model is described for the situation where measurable covariates at the level of the sample unit are available. Several candidate models within the proposed class are evaluated for aerial survey data on mallard ducks (Anas platyrhynchos).
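The cluster-size bias described above is easy to demonstrate by simulation. The detection model below (detection probability increasing with cluster size) is an illustrative assumption, not the model fitted in the paper:

```python
import random

random.seed(1)

def detect_prob(cluster_size, p0=0.3):
    # Illustrative assumption: each member of a cluster is independently
    # spotted with probability p0, so larger clusters are more detectable
    return 1 - (1 - p0) ** cluster_size

# Simulated population of clusters with sizes 1..5
population = [random.randint(1, 5) for _ in range(10000)]
detected = [s for s in population if random.random() < detect_prob(s)]

mean_pop = sum(population) / len(population)
mean_det = sum(detected) / len(detected)
# The average cluster size in the sample exceeds that of the population
print(mean_det > mean_pop)  # True
```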
A random energy model for size dependence : recurrence vs. transience
Külske, Christof
1998-01-01
We investigate the size dependence of disordered spin models having an infinite number of Gibbs measures in the framework of a simplified 'random energy model for size dependence'. We introduce two versions (involving either independent random walks or branching processes), that can be seen as
Qualitative models for space system engineering
Forbus, Kenneth D.
1990-01-01
The objectives of this project were: (1) to investigate the implications of qualitative modeling techniques for problems arising in the monitoring, diagnosis, and design of Space Station subsystems and procedures; (2) to identify the issues involved in using qualitative models to enhance and automate engineering functions. These issues include representing operational criteria, fault models, alternate ontologies, and modeling continuous signals at a functional level of description; and (3) to develop a prototype collection of qualitative models for fluid and thermal systems commonly found in Space Station subsystems. Potential applications of qualitative modeling to space-systems engineering, including the notion of intelligent computer-aided engineering are summarized. Emphasis is given to determining which systems of the proposed Space Station provide the most leverage for study, given the current state of the art. Progress on using qualitative models, including development of the molecular collection ontology for reasoning about fluids, the interaction of qualitative and quantitative knowledge in analyzing thermodynamic cycles, and an experiment on building a natural language interface to qualitative reasoning is reported. Finally, some recommendations are made for future research.
Directory of Open Access Journals (Sweden)
Abdeldjallil Naceri
2015-09-01
In our daily life experience, the angular size of an object correlates with its distance from the observer, provided that the physical size of the object remains constant. In this work, we investigated depth perception in action space (i.e., beyond arm's reach), while keeping the angular size of the target object constant. This was achieved by increasing the physical size of the target object as its distance to the observer increased. To the best of our knowledge, this is the first time that such a protocol has been tested in action space, for distances to the observer ranging from 1.4 to 2.4 m. We replicated the task in virtual and real environments and found that performance differed significantly between the two. In the real environment, all participants perceived the depth of the target object precisely, whereas in virtual reality the responses were significantly less precise, although still above chance level in 16 of the 20 observers. The difference in the discriminability of the stimuli was likely due to different contributions of the convergence and accommodation cues in the two environments. The Weber fractions estimated in our study were compared to those reported in previous studies in peripersonal and action space.
Mathematical model of parking space unit for triangular parking area
Syahrini, Intan; Sundari, Teti; Iskandar, Taufiq; Halfiani, Vera; Munzir, Said; Ramli, Marwan
2018-01-01
A parking space unit (PSU) is an effective measure of the area required by a vehicle, including the free space and the width of the door opening of the vehicle (car). This article discusses a mathematical model for parking spaces for vehicles in a triangular area. An optimization model for a triangular parking lot is developed, and the Integer Linear Programming (ILP) method is used to determine the maximum number of PSUs. The parking lots considered are isosceles and equilateral triangles, with four possible rows and five possible angles for each layout. The vehicles considered are cars and motorcycles. The results show that the isosceles triangular parking area has 218 units of optimal PSU, comprising 84 PSUs for cars and 134 for motorcycles. The equilateral triangular parking area has 688 units of optimal PSU, comprising 175 PSUs for cars and 513 for motorcycles.
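A stripped-down version of the optimization can be sketched as a one-variable integer search. The unit areas below are illustrative placeholders, and the geometric row/angle constraints of the paper are omitted:

```python
def max_psu(total_area, car_area=12.5, moto_area=1.5):
    """Brute-force sketch of the integer program: maximize the number
    of parking space units (cars + motorcycles) fitting in total_area.
    Unit areas (m^2) are illustrative, not the paper's PSU geometry.
    """
    best = (0, 0, 0)  # (total units, cars, motorcycles)
    for cars in range(int(total_area // car_area) + 1):
        remaining = total_area - cars * car_area
        motos = int(remaining // moto_area)
        if cars + motos > best[0]:
            best = (cars + motos, cars, motos)
    return best

print(max_psu(100.0))  # (66, 0, 66)
```

Without a minimum-car constraint the optimum is all motorcycles, since they are the smaller unit; the paper's ILP adds row and angle constraints, which is why it reports mixed car/motorcycle counts.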
Multimedia Mapping using Continuous State Space Models
DEFF Research Database (Denmark)
Lehn-Schiøler, Tue
2004-01-01
In this paper a system that transforms speech waveforms into animated faces is proposed. The system relies on continuous state space models to perform the mapping; this makes it possible to ensure video with no sudden jumps and allows continuous control of the parameters in 'face space'. Simulations are performed on recordings of 3-5 sec. video sequences with sentences from the Timit database. The model is able to construct an image sequence from an unknown noisy speech sequence fairly well even though the number of training examples is limited.
Table-sized matrix model in fractional learning
Soebagyo, J.; Wahyudin; Mulyaning, E. C.
2018-05-01
This article explains a fractional-learning model, the Table-Sized Matrix model, in which fractional representation and its operations are symbolized by a matrix. The Table-Sized Matrix is employed to develop problem-solving capabilities, as is the area model. The Table-Sized Matrix model referred to in this article is used to develop elementary school students' understanding of the fraction concept, which can then be generalized into procedural fluency (an algorithm) in solving fraction problems and their operations.
The theoretical foundations for size spectrum models of fish communities
DEFF Research Database (Denmark)
Andersen, Ken Haste; Jacobsen, Nis Sand; Farnsworth, K.D.
2016-01-01
Size spectrum models have emerged from 40 years of basic research on how body size determines individual physiology and structures marine communities. They are based on commonly accepted assumptions and have a small parameter set, which makes them easy to deploy for strategic, ecosystem-oriented impact assessment of fisheries. We describe the fundamental concepts in size-based models about food encounter and the bioenergetics budget of individuals. Within the general framework, three model types have emerged that differ in their degree of complexity: the food-web, the trait-based and the community model.
Physical models on discrete space and time
International Nuclear Information System (INIS)
Lorente, M.
1986-01-01
The idea of space and time quantum operators with a discrete spectrum has been proposed frequently since the discovery that some physical quantities exhibit measured values that are multiples of fundamental units. This paper first reviews a number of these physical models. They are: the method of finite elements proposed by Bender et al; the quantum field theory model on discrete space-time proposed by Yamamoto; the finite dimensional quantum mechanics approach proposed by Santhanam et al; the idea of space-time as lattices of n-simplices proposed by Kaplunovsky et al; and the theory of elementary processes proposed by Weizsaecker and his colleagues. The paper then presents a model proposed by the authors and based on the (n+1)-dimensional space-time lattice where fundamental entities interact among themselves 1 to 2n in order to build up a n-dimensional cubic lattice as a ground field where the physical interactions take place. The space-time coordinates are nothing more than the labelling of the ground field and take only discrete values. 11 references
Mouse infection models for space flight immunology
Chapes, Stephen Keith; Ganta, Roman Reddy; Chapes, S. K. (Principal Investigator)
2005-01-01
Several immunological processes can be affected by space flight. However, there is little evidence to suggest that flight-induced immunological deficits lead to illness. Therefore, one of our goals has been to define models to examine host resistance during space flight. Our working hypothesis is that space flight crews will come from a heterogeneous population; the immune response gene make-up will be quite varied. It is unknown how much the immune response gene variation contributes to the potential threat from infectious organisms, allergic responses or other long term health problems (e.g. cancer). This article details recent efforts of the Kansas State University gravitational immunology group to assess how population heterogeneity impacts host health, either in laboratory experimental situations and/or using the skeletal unloading model of space-flight stress. This paper details our use of several mouse strains with several different genotypes. In particular, mice with varying MHCII allotypes and mice on the C57BL background with different genetic defects have been particularly useful tools with which to study infections by Staphylococcus aureus, Salmonella typhimurium, Pasteurella pneumotropica and Ehrlichia chaffeensis. We propose that some of these experimental challenge models will be useful to assess the effects of space flight on host resistance to infection.
Assumptions behind size-based ecosystem models are realistic
DEFF Research Database (Denmark)
Andersen, Ken Haste; Blanchard, Julia L.; Fulton, Elizabeth A.
2016-01-01
A recent publication about balanced harvesting (Froese et al., ICES Journal of Marine Science; doi:10.1093/icesjms/fsv122) contains several erroneous statements about size-spectrum models. We refute the statements by showing that the assumptions pertaining to size-spectrum models discussed by Froese et al. are realistic, and conclude that there is indeed a constructive role for a wide suite of ecosystem models to evaluate fishing strategies in an ecosystem context.
Space-time modeling of timber prices
Mo Zhou; Joseph Buongiorno
2006-01-01
A space-time econometric model was developed for pine sawtimber prices in 21 geographically contiguous regions in the southern United States. The correlations between prices in neighboring regions helped predict future prices. The impulse response analysis showed that although southern pine sawtimber markets were not globally integrated, local supply and demand...
New Skeletal-Space-Filling Models
Clarke, Frank H.
1977-01-01
Describes plastic, skeletal molecular models that are color-coded and can illustrate both the conformation and overall shape of small molecules. They can also be converted to space-filling counterparts by the additions of color-coded polystyrene spheres. (MLH)
Reliability models for Space Station power system
Singh, C.; Patton, A. D.; Kim, Y.; Wagner, H.
1987-01-01
This paper presents a methodology for the reliability evaluation of the Space Station power system. The two options considered are the photovoltaic system and the solar dynamic system. Reliability models for both options are described, along with the methodology for calculating the reliability indices.
Electron Emitter for small-size Electrodynamic Space Tether using MEMS Technology
DEFF Research Database (Denmark)
Fleron, René A. W.; Blanke, Mogens
2004-01-01
Adjustment of the orbit of a spacecraft using the forces created by an electrodynamic space tether has been shown to be a theoretical possibility in recent literature. Practical implementation is being pursued for larger scale missions where a hot filament device controls electron emission...... and the current flowing in the electrodynamic space tether. Applications to small spacecraft, or space debris in the 1–10 kg range, pose difficulties for electron emission technology, as low power emitting devices are needed. This paper addresses the system concepts of a small spacecraft electrodynamic tether...... system, with focus on electron emitter design and manufacture using micro-electro-mechanical-system (MEMS) technology, and shows a novel electron emitter for the 1-2 mA range where altitude can be effectively affected...
Habitability Concept Models for Living in Space
Ferrino, M.
2002-01-01
As growing trends show, living in "space" has acquired new meanings, especially considering the utilization of the International Space Station (ISS) with regard to group interaction as well as individual needs in terms of time, space and crew accommodations. In fact, for the crew, the Space Station is a combined Laboratory-Office/Home and embodies ethical, social, and cultural aspects as additional parameters to be assessed to achieve a user-centered architectural design of crew workspace. Habitability Concept Models can improve the methods and techniques used to support the interior design and layout of space architectures and at the same time guarantee a human-focused approach. This paper discusses and illustrates some of the results obtained for the interior design of a Habitation Module for the ISS. In this work, two different but complementary approaches are followed. The first is "object oriented" and based on Video Data (American and Russian) supported by Proxemic methods (Edward T. Hall, 1963 and Francesca Pregnolato, 1998). This approach offers flexible and adaptive design solutions. The second is "subject oriented" and based on a Virtual Reality environment. With this approach human perception and cognitive aspects related to a specific crew task are considered. Data obtained from these two approaches are used to verify requirements and advance the design of the Habitation Module for aspects related to man-machine interfaces (MMI), ergonomics, work and free-time. It is expected that the results achieved can be applied to future space related projects.
Hauschildt, Verena; Gerken, Martina
2016-03-01
This study aims to assess plot-size-related changes in spacing and behavioural synchronization in a herd of 14 German Blackface ewes kept on three different pasture sizes: S (126 m²), M (1100 m²), and L (11,200 m²). In direct field observations, behaviour and nearest neighbour distance were recorded individually. Additionally, interindividual and nearest neighbour distances were derived from aerial photographs of the herd taken on plot sizes S and M. Nearest neighbour distances varied with behaviour, and the intraindividual repeatability of the derived distances was highly significant (Kendall's W between 0.32 and 0.58). Behavioural synchronization might be mainly attributed to the motivation for close proximity to any conspecific.
Modeling motoneuron firing properties: dependency on size and calcium dynamics
van der Heyden, M. J.; Hilgevoord, A. A.; Bour, L. J.; Ongerboer de Visser, B. W.
1994-01-01
The origin of functional differences between motoneurons of varying size was investigated by employing a one-compartmental motoneuron model containing a slow K+ conductance dependent on the intracellular calcium concentration. The size of the cell was included as an explicit parameter. Simulations
Multipartite geometric entanglement in finite size XY model
Energy Technology Data Exchange (ETDEWEB)
Blasone, Massimo; Dell' Anno, Fabio; De Siena, Silvio; Giampaolo, Salvatore Marco; Illuminati, Fabrizio, E-mail: blasone@sa.infn.i [Dipartimento di Matematica e Informatica, Universita degli Studi di Salerno, Via Ponte don Melillo, I-84084 Fisciano (Italy)
2009-06-01
We investigate the behavior of the multipartite entanglement in the finite size XY model by means of the hierarchical geometric measure of entanglement. By selecting specific components of the hierarchy, we study both global entanglement and genuinely multipartite entanglement.
Optimum workforce-size model using dynamic programming approach
African Journals Online (AJOL)
This paper presents an optimum workforce-size model which determines the minimum number of excess workers (overstaffing) as well as the minimum total recruitment cost during a specified planning horizon. The model is an extension of other existing dynamic programming models for manpower planning in the sense ...
Modeling and Analysis of Space Based Transceivers
Moore, Michael S.; Price, Jeremy C.; Abbott, Ben; Liebetreu, John; Reinhart, Richard C.; Kacpura, Thomas J.
2007-01-01
This paper presents the tool chain, methodology, and initial results of a study to provide a thorough, objective, and quantitative analysis of the design alternatives for space Software Defined Radio (SDR) transceivers. The approach taken was to develop a set of models and tools for describing communications requirements, the algorithm resource requirements, the available hardware, and the alternative software architectures, and generate analysis data necessary to compare alternative designs. The Space Transceiver Analysis Tool (STAT) was developed to help users identify and select representative designs, calculate the analysis data, and perform a comparative analysis of the representative designs. The tool allows the design space to be searched quickly while permitting incremental refinement in regions of higher payoff.
A simple shear limited, single size, time dependent flocculation model
Kuprenas, R.; Tran, D. A.; Strom, K.
2017-12-01
This research focuses on the modeling of flocculation of cohesive sediment due to turbulent shear, specifically investigating the dependency of flocculation on the concentration of cohesive sediment. Flocculation is important in larger sediment transport models, as cohesive particles can create aggregates which are orders of magnitude larger than their unflocculated state. As the settling velocity of each particle is determined by the sediment size, density, and shape, accounting for this aggregation is important in determining where the sediment is deposited. This study provides a new formulation for flocculation of cohesive sediment by modifying the Winterwerp (1998) flocculation model (W98) so that it limits floc size to the Kolmogorov micro length scale. The W98 model is a simple approach that calculates the average floc size as a function of time. Because of its simplicity, the W98 model is ideal for implementation in larger sediment transport models; however, it tends to overpredict the dependency of the floc size on concentration. It was found that modifying the coefficients within the original model did not allow the model to capture the dependency on concentration. Therefore, a new term was added within the breakup kernel of the W98 formulation. The result is a single-size, shear-limited, time-dependent flocculation model that effectively captures the dependency of the equilibrium floc size on suspended sediment concentration as well as the time to equilibrium. The overall behavior of the new model is explored and shown to align well with other studies of flocculation. Winterwerp, J. C. (1998). A simple model for turbulence induced flocculation of cohesive sediment. Journal of Hydraulic Research, 36(3):309-326.
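The size-limiting behaviour described above can be sketched as a toy growth-breakup rate equation with the floc diameter capped at the Kolmogorov micro length scale. This is a schematic illustration only: the coefficients `k_a` and `k_b` and the functional forms are illustrative assumptions, not the actual W98 kernels or the paper's modified term.

```python
import math

def step_floc(D, G, c, dt, D_p=4e-6, k_a=0.5, k_b=1.6e6, nu=1e-6):
    """One explicit Euler step of a schematic growth-breakup model for
    the average floc diameter D (m), in the spirit of W98 but with the
    diameter additionally capped at the Kolmogorov micro length scale.
    G is the shear rate (1/s), c the concentration (kg/m^3); all
    coefficients are illustrative placeholders, not the paper's values."""
    eta = math.sqrt(nu / G)                       # Kolmogorov length scale
    growth = k_a * c * G * D                      # shear-driven aggregation
    breakup = k_b * G**1.5 * (D - D_p) * D**2     # shear-induced breakup
    D_new = D + dt * (growth - breakup)
    return min(eta, max(D_p, D_new))              # limit to [D_p, eta]

# Iterating from the primary-particle size D_p approaches an
# equilibrium floc size that depends on both c and G.
D = 4e-6
for _ in range(5000):
    D = step_floc(D, G=10.0, c=0.1, dt=0.01)
```

In this sketch, raising `c` raises the equilibrium size (the concentration dependency the paper tunes), while the `min(eta, ...)` cap supplies the shear-limited behaviour the abstract describes.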
Adaptive Numerical Algorithms in Space Weather Modeling
Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.;
2010-01-01
Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical schemes.
Crane cabins' interior space multivariate anthropometric modeling.
Essdai, Ahmed; Spasojević Brkić, Vesna K; Golubović, Tamara; Brkić, Aleksandar; Popović, Vladimir
2018-01-01
Previous research has shown that today's crane cabins fail to meet the needs of a large proportion of operators. Performance and financial losses and effects on safety should not be overlooked as well. The first aim of this survey is to model the crane cabin interior space using up-to-date crane operator anthropometric data and to compare multivariate and univariate anthropometric models. The second aim of the paper is to define the crane cabin interior space dimensions that enable anthropometric convenience. To facilitate the cabin design, the anthropometric dimensions of 64 crane operators in the first sample and 19 more in the second sample were collected in Serbia. Multivariate anthropometric models, spanning 95% of the population on the basis of a set of 8 anthropometric dimensions, have been developed. The percentile method was also used on the same set of data. The dimensions of the interior space necessary for the accommodation of the crane operator are 1174×1080×1865 mm. The results for the 5th and 95th percentile models lie within the obtained dimensions. The results of this study may prove useful to crane cabin designers in eliminating anthropometric inconsistencies and improving the health of operators, but can also aid in improving the safety, performance and financial results of the companies where crane cabins operate.
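The univariate percentile method the paper compares against can be illustrated on synthetic data. The dimension names, distributions and sample below are assumptions, not the study's measurements; the point of the sketch is why per-dimension 5th-95th bands jointly accommodate less than 90% of a population, which is the gap multivariate models address.

```python
import numpy as np

# Synthetic stand-in for measured anthropometric data (mm); the study
# itself used 8 dimensions from 83 Serbian crane operators, and both
# dimension names and distributions here are illustrative assumptions.
rng = np.random.default_rng(0)
sitting_height = rng.normal(910, 35, 2000)
shoulder_breadth = rng.normal(480, 25, 2000)

def band(dim, lo=5, hi=95):
    """Univariate percentile accommodation band for one dimension."""
    return np.percentile(dim, [lo, hi])

h_lo, h_hi = band(sitting_height)
b_lo, b_hi = band(shoulder_breadth)

# Each band covers 90% of operators on its own dimension, but the
# fraction accommodated on BOTH dimensions jointly is smaller; this
# shortfall is what multivariate accommodation models correct for.
ok_h = (sitting_height >= h_lo) & (sitting_height <= h_hi)
ok_b = (shoulder_breadth >= b_lo) & (shoulder_breadth <= b_hi)
joint = float((ok_h & ok_b).mean())
```

With more dimensions the joint coverage of stacked univariate bands drops further, which is why the paper builds 8-dimensional multivariate models spanning 95% of the population.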
A BRDF statistical model applying to space target materials modeling
Liu, Chenghao; Li, Zhi; Xu, Can; Tian, Qichen
2017-10-01
In order to solve the problem of poor performance in modeling high-density measured BRDF data with the five-parameter semi-empirical model, a refined statistical BRDF model suitable for modeling multiple classes of space target materials is proposed. The refined model improves on the Torrance-Sparrow model while retaining the modeling advantages of the five-parameter model. Compared with the existing empirical model, it contains six simple parameters that can approximate the roughness distribution of the material surface, the strength of the Fresnel reflectance phenomenon, and the attenuation of the reflected light's brightness as the azimuth angle changes. The model achieves parameter inversion quickly with no extra loss of accuracy. A genetic algorithm was used to invert the parameters of 11 samples of materials commonly used on space targets; the fitting errors of all materials were below 6%, much lower than those of the five-parameter model. The refined model is further verified by comparing the fitting results of three samples at different incident zenith angles at 0° azimuth. Finally, three-dimensional visualizations of these samples over the upper hemisphere are given, clearly showing the optical scattering strength of the different materials and demonstrating the refined model's ability to characterize materials.
Multi-Criteria Model for Determining Order Size
Directory of Open Access Journals (Sweden)
Katarzyna Jakowska-Suwalska
2013-01-01
A multi-criteria model for determining the order size for materials used in production is presented. It was assumed that the consumption rate of each material is a random variable with a known probability distribution. Using such a model, in which the purchase cost of the materials ordered is limited, three criteria were considered: order size, probability of a lack of materials in the production process, and deviations of the order size from the consumption rate in past periods. Based on an example, it is shown how to use the model to determine the order sizes for polyurethane adhesive and wood in a hard-coal mine. (original abstract)
Modelling of Patterns in Space and Time
Murray, James
1984-01-01
This volume contains a selection of papers presented at the workshop "Modelling of Patterns in Space and Time", organized by the Sonderforschungsbereich 123, "Stochastische Mathematische Modelle", in Heidelberg, July 4-8, 1983. The main aim of this workshop was to bring together physicists, chemists, biologists and mathematicians for an exchange of ideas and results in modelling patterns. Since the mathematical problems arising depend only partially on the particular field of application, the interdisciplinary cooperation proved very useful. The workshop mainly treated phenomena showing spatial structures. The special areas covered were morphogenesis, growth in cell cultures, competition systems, structured populations, chemotaxis, chemical precipitation, space-time oscillations in chemical reactors, patterns in flames and fluids, and mathematical methods. The discussions between experimentalists and theoreticians were especially interesting and effective. The editors hope that these proceedings reflect ...
Data Model Management for Space Information Systems
Hughes, J. Steven; Crichton, Daniel J.; Ramirez, Paul; Mattmann, Chris
2006-01-01
The Reference Architecture for Space Information Management (RASIM) suggests the separation of the data model from software components to promote the development of flexible information management systems. RASIM allows the data model to evolve independently from the software components and results in a robust implementation that remains viable as the domain changes. However, the development and management of data models within RASIM are difficult and time consuming tasks involving the choice of a notation, the capture of the model, its validation for consistency, and the export of the model for implementation. Current limitations to this approach include the lack of ability to capture comprehensive domain knowledge, the loss of significant modeling information during implementation, the lack of model visualization and documentation capabilities, and exports being limited to one or two schema types. The advent of the Semantic Web and its demand for sophisticated data models has addressed this situation by providing a new level of data model management in the form of ontology tools. In this paper we describe the use of a representative ontology tool to capture and manage a data model for a space information system. The resulting ontology is implementation independent. Novel on-line visualization and documentation capabilities are available automatically, and the ability to export to various schemas can be added through tool plug-ins. In addition, the ingestion of data instances into the ontology allows validation of the ontology and results in a domain knowledge base. Semantic browsers are easily configured for the knowledge base. For example the export of the knowledge base to RDF/XML and RDFS/XML and the use of open source metadata browsers provide ready-made user interfaces that support both text- and facet-based search. This paper will present the Planetary Data System (PDS) data model as a use case and describe the import of the data model into an ontology tool
Dynamic multibody modeling for tethered space elevators
Williams, Paul
2009-08-01
This paper presents a fundamental modeling strategy for dealing with powered and propelled bodies moving along space tethers. The tether is divided into a large number of discrete masses, which are connected by viscoelastic springs. The tether is subject to the full range of forces expected in Earth orbit in a relatively simple manner. Two different models of the elevator dynamics are presented. In order to capture the effect of the elevator moving along the tether, the elevator dynamics are included as a separate body in both models. One model treats the elevator's motion dynamically, where propulsive and friction forces are applied to the elevator body. The second model treats the elevator's motion kinematically, where the distance along the tether is determined by adjusting the lengths of tether on either side of the elevator. The tether model is used to determine optimal configurations for the space elevator. A modal analysis of two different configurations is presented which shows that the fundamental mode of oscillation is a pendular one around the anchor point, with a period on the order of 160 h for the in-plane motion and 24 h for the out-of-plane motion. Numerical simulation results of the effects of the elevator moving along the cable are presented for different travel velocities and different elevator masses.
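The lumped-mass discretization described above connects point masses with viscoelastic springs. A minimal sketch of one such segment force (a Kelvin-Voigt element that, like a real tether, transmits tension only) might look as follows; the stiffness, damping and rest-length values are placeholders, not the paper's parameters.

```python
import numpy as np

def segment_force(r_i, r_j, v_i, v_j, k=1e4, c=50.0, l0=100.0):
    """Force on node i from the viscoelastic (Kelvin-Voigt) segment
    joining nodes i and j in a lumped-mass tether model. A tether
    cannot push, so a slack (compressed) segment transmits no force.
    k, c, l0 are placeholder stiffness, damping and rest length."""
    d = r_j - r_i
    L = np.linalg.norm(d)
    u = d / L                              # unit vector from i toward j
    stretch = L - l0
    if stretch <= 0.0:
        return np.zeros(3)                 # slack: no compression force
    rate = np.dot(v_j - v_i, u)            # rate of change of segment length
    return (k * stretch + c * rate) * u    # pulls node i toward node j
```

Summing these forces over the two segments adjacent to each discrete mass, plus gravity and any elevator contact forces, gives the acceleration of that mass; the elevator itself enters as an extra body, dynamically or kinematically, as the abstract explains.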
Body size mediated coexistence of consumers competing for resources in space
Basset, A.; Angelis, D.L.
2007-01-01
Body size is a major phenotypic trait of individuals that commonly differentiates co-occurring species. We analyzed inter-specific competitive interactions between a large consumer and smaller competitors, whose energetics, selection and giving-up behaviour on identical resource patches scaled with individual body size. The aim was to investigate whether pure metabolic constraints on patch behaviour of vagile species can determine coexistence conditions consistent with existing theoretical and experimental evidence. We used an individual-based spatially explicit simulation model at a spatial scale defined by the home range of the large consumer, which was assumed to be parthenogenic and semelparous. Under exploitative conditions, competitive coexistence occurred in a range of body size ratios between 2 and 10. Asymmetrical competition and the mechanism underlying asymmetry, determined by the scaling of energetics and patch behaviour with consumer body size, were the proximate determinant of inter-specific coexistence. The small consumer exploited patches more efficiently, but searched for profitable patches less effectively than the larger competitor. Therefore, body-size related constraints induced niche partitioning, allowing competitive coexistence within a set of conditions where the large consumer maintained control over the small consumer and resource dynamics. The model summarises and extends the existing evidence of species coexistence on a limiting resource, and provides a mechanistic explanation for decoding the size-abundance distribution patterns commonly observed at guild and community levels. © Oikos.
Space-time modeling of soil moisture
Chen, Zijuan; Mohanty, Binayak P.; Rodriguez-Iturbe, Ignacio
2017-11-01
A physically derived space-time mathematical representation of the soil moisture field is carried out via the soil moisture balance equation driven by stochastic rainfall forcing. The model incorporates spatial diffusion and, in its original version, is shown to be unable to reproduce the relatively fast decay in the spatial correlation functions observed in empirical data. This decay, resulting from variations in local topography as well as in local soil and vegetation conditions, is well reproduced via a jitter process acting multiplicatively over the space-time soil moisture field. The jitter is a multiplicative noise acting on the soil moisture dynamics with the objective of deflating its correlation structure at small spatial scales, which are not embedded in the probabilistic structure of the rainfall process that drives the dynamics. These scales, of the order of several meters to several hundred meters, are of great importance in ecohydrologic dynamics. Properties of space-time correlation functions and spectral densities of the model with jitter are explored analytically, and the influence of the jitter parameters, reflecting variabilities of soil moisture at different spatial and temporal scales, is investigated. A case study fitting the derived model to a soil moisture dataset is presented in detail.
Directory of Open Access Journals (Sweden)
D. F. AL RIZA
2015-07-01
This paper presents a sizing optimization methodology for panel and battery capacity in a standalone photovoltaic system with a lighting load. Performance of the system is quantified via a Loss of Power Supply Probability (LPSP) calculation. Input data used for the calculation are the daily weather data and the system component parameters. Capital cost and Life Cycle Cost (LCC) are calculated as optimization parameters. The design space for the optimum system configuration is identified based on a given LPSP value, capital cost and life cycle cost. The excess energy value is used as an over-design indicator in the design space. An economic analysis, including cost of energy and payback period, for selected configurations is also presented.
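An LPSP calculation of the kind the methodology relies on can be sketched in a few lines. This is an energy-based variant (unserved energy divided by total demand over an hourly series); the simple battery model, the initial state of charge and the charging efficiency are simplifying assumptions, not the paper's system model.

```python
def lpsp(pv_gen, load, batt_capacity, batt_init=0.0, eff=0.9):
    """Energy-based Loss of Power Supply Probability: unmet energy
    divided by total demand over an hourly series (one common variant;
    the battery model and efficiency here are simplifying assumptions).
    pv_gen, load: hourly PV generation and lighting load (Wh)."""
    soc = batt_init                      # battery state of charge (Wh)
    deficit = 0.0
    for g, d in zip(pv_gen, load):
        net = g - d
        if net >= 0.0:
            soc = min(batt_capacity, soc + eff * net)  # store the surplus
        else:
            draw = min(soc, -net)        # discharge to cover the shortfall
            soc -= draw
            deficit += (-net) - draw     # energy the system failed to supply
    return deficit / sum(load)
```

Sweeping panel size (scaling `pv_gen`) and `batt_capacity` over a grid and keeping the configurations whose LPSP falls below the target value traces out a design space like the one in the paper; capital cost and LCC then select the optimum within it, with excess energy flagging over-design.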
Use of swivel desks and aisle space to promote interaction in mid-sized college classrooms.
Directory of Open Access Journals (Sweden)
Robert G. Henshaw
2011-12-01
Traditional designs for most mid-sized college classrooms discourage (1) face-to-face interaction among students, (2) instructor movement in the classroom, and (3) efficient transitions between different kinds of learning activities. An experimental classroom piloted during Spring Semester 2011 at the University of North Carolina at Chapel Hill uses clusters of stationary desks that swivel 360 degrees, together with aisle space, to address these challenges. The findings from a study involving ten courses taught in the room suggest that there is a need for designs that not only promote quality interactions but also facilitate movement between small group work, class discussion, and lecture.
Space use of wintering waterbirds in India: Influence of trophic ecology on home-range size
Namgail, Tsewang; Takekawa, John Y.; Balachandran, Sivananinthaperumal; Sathiyaselvam, Ponnusamy; Mundkur, Taej; Newman, Scott H.
2014-01-01
The relationship between species' home ranges and their other biological traits remains poorly understood, especially in migratory birds, due to the difficulty associated with tracking them. Advances in satellite telemetry and remote sensing techniques have proved instrumental in overcoming such challenges. We studied the space use of migratory ducks through satellite telemetry with the objective of understanding the influence of body mass and feeding habits on their home-range sizes. We marked 26 individuals, representing five species of migratory ducks, with satellite transmitters during two consecutive winters in three Indian states. We used kernel methods to estimate home ranges and core use areas of these waterfowl, and assessed the influence of body mass and feeding habits on home-range size. Feeding habits influenced the home-range size of the migratory ducks. Carnivorous ducks had the largest home ranges, herbivorous ducks the smallest, while omnivorous species had intermediate home ranges. Body mass did not explain variation in home-range size. To our knowledge, this is the first study of its kind on migratory ducks, and it has important implications for their conservation and management.
Parabolic Free Boundary Price Formation Models Under Market Size Fluctuations
Markowich, Peter A.; Teichmann, Josef; Wolfram, Marie Therese
2016-01-01
In this paper we propose an extension of the Lasry-Lions price formation model which includes fluctuations of the numbers of buyers and vendors. We analyze the model in the case of deterministic and stochastic market size fluctuations and present
Modeling and control of flexible space structures
Wie, B.; Bryson, A. E., Jr.
1981-01-01
The effects of actuator and sensor locations on transfer function zeros are investigated, using uniform bars and beams as generic models of flexible space structures. It is shown how finite element codes may be used directly to calculate transfer function zeros. The impulse response predicted by finite-dimensional models is compared with the exact impulse response predicted by the infinite-dimensional models. It is shown that some flexible structures behave as if there were a direct transmission between actuator and sensor (equal numbers of zeros and poles in the transfer function). Finally, natural damping models for a vibrating beam are investigated, since natural damping has a strong influence on the appropriate active control logic for a flexible structure.
On population size estimators in the Poisson mixture model.
Mao, Chang Xuan; Yang, Nan; Zhong, Jinhua
2013-09-01
Estimating population sizes via capture-recapture experiments has numerous applications. The Poisson mixture model can be adopted for those applications with a single list in which individuals appear one or more times. We compare several nonparametric estimators, including the Chao estimator, the Zelterman estimator, two jackknife estimators and the bootstrap estimator. The target parameter of the Chao estimator is a lower bound on the population size. Those of the other four estimators are not lower bounds, and they may produce lower confidence limits for the population size with poor coverage probabilities. A simulation study is reported and two examples are investigated. © 2013, The International Biometric Society.
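For concreteness, the Chao lower bound mentioned above has a simple closed form, N_hat = S_obs + f1^2 / (2 f2), where S_obs is the number of distinct individuals observed and f_k is the number of individuals observed exactly k times. A minimal sketch:

```python
from collections import Counter

def chao_lower_bound(counts):
    """Chao1-type lower bound for the population size from a single
    list in which each *observed* individual appears one or more times.
    counts: list of appearance counts, one entry per observed individual."""
    freq = Counter(counts)
    s_obs = len(counts)                  # number of distinct individuals seen
    f1, f2 = freq[1], freq[2]            # singletons and doubletons
    if f2 == 0:
        # bias-corrected form, used when no individual appears exactly twice
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 * f1 / (2.0 * f2)
```

The estimate never falls below the observed count, reflecting its role as a lower bound rather than an unbiased estimator, which is exactly the distinction the abstract draws against the other four estimators.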
[The effect of disinfectant soaking on dental gypsum model size].
Zhu, Cao-yun; Xu, Yun-wen; Xu, Kan
2012-12-01
To study the influence of disinfectant soaking on the dimensional stability of three kinds of dental gypsum model, three types of gypsum commonly used in the clinic (types III, IV and V) were used to make 24 specimens of size 50 mm×15 mm×10 mm. One hour after release from the mold, the specimens were placed for 24 h. A digital caliper was used to measure the size of each gypsum model. Distilled water immersion was used as the control; glutaraldehyde disinfectant and Metrix CaviCide disinfectant soaking were used for the experimental groups. After soaking for 0.5 h, the gypsum models were removed and placed for 0.5 h, 1 h, 2 h and 24 h. The size of the models was then measured again using the same method. The data were analyzed with the SPSS 10.0 software package. The initial gypsum model lengths were (50.07±0.017) mm, (50.048±0.015) mm and (50.027±0.015) mm. After soaking for different times, the size of the models changed little, with dimensional changes of less than 0.01%. The results show that disinfectant soaking has no significant effect on dental model dimensions.
A probability space for quantum models
Lemmens, L. F.
2017-06-01
A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes, and probabilities defined for all events. A reformulation in terms of propositions makes it possible to use the maximum entropy method to assign the probabilities, taking some constraints into account. The construction of a probability space for quantum models is determined by the choice of propositions, the choice of constraints, and the probability assignment by the maximum entropy method. This approach shows how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein are partly related to well-known classical distributions. The relation between the conditional probability density, given some averages as constraints, and the appropriate ensemble is elucidated.
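The maximum entropy assignment under an average constraint can be sketched directly: maximizing entropy subject to a fixed mean energy gives p_i ∝ exp(-beta·E_i), with the Lagrange multiplier beta solved numerically. The energy levels below are arbitrary illustrative values, not tied to any particular quantum model in the paper.

```python
import math

def maxent_distribution(energies, mean_energy, lo=-50.0, hi=50.0):
    """Maximum-entropy probabilities p_i ∝ exp(-beta * E_i) whose
    expectation of E equals mean_energy; the Lagrange multiplier beta
    is found by bisection (energy levels are user-supplied)."""
    def avg(beta):
        w = [math.exp(-beta * e) for e in energies]
        z = sum(w)
        return sum(wi * e for wi, e in zip(w, energies)) / z
    # avg(beta) decreases monotonically in beta, so bisect on it
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if avg(mid) > mean_energy:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return [wi / z for wi in w], beta

# With equally spaced levels and the mean pinned at the middle level,
# maximum entropy gives the uniform distribution (beta = 0).
p, beta = maxent_distribution([0.0, 1.0, 2.0], 1.0)
```

Tightening or relaxing the average-energy constraint moves beta away from zero, recovering Boltzmann-type weights; the quantum statistics discussed in the abstract arise from different choices of propositions and constraints.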
A model for size- and rotation-invariant pattern processing in the visual system.
Reitboeck, H J; Altmann, J
1984-01-01
The mapping of retinal space onto the striate cortex of some mammals can be approximated by a log-polar function. It has been proposed that this mapping is of functional importance for scale- and rotation-invariant pattern recognition in the visual system. An exact log-polar transform converts centered scaling and rotation into translations. A subsequent translation-invariant transform, such as the absolute value of the Fourier transform, thus generates overall size- and rotation-invariance. In our model, the translation-invariance is realized via the R-transform. This transform can be executed by simple neural networks, and it does not require the complex computations of the Fourier transform used in Mellin-transform size-invariance models. The logarithmic space distortion and differentiation in the first processing stage of the model is realized via "Mexican hat" filters whose diameter increases linearly with eccentricity, similar to the characteristics of the receptive fields of retinal ganglion cells. Except for some special cases, the model can explain object recognition independent of size, orientation and position. Some general problems of Mellin-type size-invariance models (which also apply to our model) are discussed.
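The first stage of such models, the log-polar mapping, can be sketched with simple nearest-neighbour resampling. This is not the authors' implementation; the grid sizes and the disk test pattern are arbitrary choices that just demonstrate how centered rotation and scaling become translations along the two axes.

```python
import numpy as np

def log_polar(img, n_rho=64, n_theta=64):
    """Resample a square image onto a log-polar grid centred on the
    image centre (nearest-neighbour sampling; a rough sketch)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    rhos = np.exp(np.linspace(0.0, np.log(r_max), n_rho))   # radii 1 .. r_max
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    out = np.empty((n_rho, n_theta))
    for i, r in enumerate(rhos):
        ys = np.clip(np.round(cy + r * np.sin(thetas)), 0, h - 1).astype(int)
        xs = np.clip(np.round(cx + r * np.cos(thetas)), 0, w - 1).astype(int)
        out[i] = img[ys, xs]
    return out

# Centred rotation shifts the theta axis; centred scaling shifts the
# (logarithmic) rho axis, so a translation-invariant follow-up
# transform (the R-transform in the model) yields rotation- and
# scale-invariant features. A centred disk makes this visible: every
# log-polar row is constant in theta, and resizing the disk only
# moves the inside/outside boundary along the rho axis.
yy, xx = np.mgrid[0:64, 0:64]
disk = (((yy - 31.5) ** 2 + (xx - 31.5) ** 2) <= 10.0 ** 2).astype(float)
lp = log_polar(disk)
```

A production version would use proper interpolation, and, as the abstract notes, the biological analogue replaces this explicit resampling with "Mexican hat" filters whose size grows with eccentricity.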
Integrated Space Asset Management Database and Modeling
MacLeod, Todd; Gagliano, Larry; Percy, Thomas; Mason, Shane
2015-01-01
Effective Space Asset Management is one key to addressing the ever-growing issue of space congestion. It is imperative that agencies around the world have access to data regarding the numerous active assets and pieces of space junk currently tracked in orbit around the Earth. At the center of this issue is the effective management of data of many types related to orbiting objects. As the population of tracked objects grows, so too should the data management structure used to catalog technical specifications, orbital information, and metadata related to those populations. Marshall Space Flight Center's Space Asset Management Database (SAM-D) was implemented in order to effectively catalog a broad set of data related to known objects in space by ingesting information from a variety of databases and processing that data into useful technical information. Using the universal NORAD number as a unique identifier, the SAM-D processes two-line element data into orbital characteristics and cross-references this technical data with metadata related to functional status, country of ownership, and application category. The SAM-D began as an Excel spreadsheet and was later upgraded to an Access database. While SAM-D performs its task very well, it is limited by its current platform and is not available outside of the local user base. Further, while modeling and simulation can be powerful tools to exploit the information contained in SAM-D, the current system does not allow proper integration options for combining the data with both legacy and new M&S tools. This paper provides a summary of SAM-D development efforts to date and outlines a proposed data management infrastructure that extends SAM-D to support the larger data sets to be generated. A service-oriented architecture model using an information sharing platform named SIMON will allow it to easily expand to incorporate new capabilities, including advanced analytics, M&S tools, fusion techniques and user interface for
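As an illustration of the kind of processing described (turning two-line element data into orbital characteristics keyed by NORAD number), the sketch below parses a TLE line 2 by its standard fixed columns and derives the semi-major axis from the mean motion. This is a generic TLE calculation, not SAM-D's actual implementation; the sample line is an ISS-like illustration.

```python
import math

MU_EARTH = 398600.4418  # Earth gravitational parameter, km^3/s^2
R_EARTH = 6378.137      # Earth equatorial radius, km

def orbit_from_tle_line2(line2):
    """Extract orbital characteristics from a TLE line 2 (fixed columns)."""
    incl = float(line2[8:16])                  # inclination, degrees
    raan = float(line2[17:25])                 # RA of ascending node, degrees
    ecc = float("0." + line2[26:33].strip())   # eccentricity, implied decimal
    mean_motion = float(line2[52:63])          # revolutions per day
    n = mean_motion * 2 * math.pi / 86400.0    # rad/s
    a = (MU_EARTH / n ** 2) ** (1.0 / 3.0)     # semi-major axis via Kepler
    return {
        "inclination_deg": incl,
        "raan_deg": raan,
        "eccentricity": ecc,
        "semi_major_axis_km": a,
        "apogee_alt_km": a * (1 + ecc) - R_EARTH,
        "perigee_alt_km": a * (1 - ecc) - R_EARTH,
    }
```

A real system would also propagate the elements (e.g. with SGP4) to a requested epoch; the slice positions follow the published TLE column layout.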
Cloud Computing Adoption Business Model Factors: Does Enterprise Size Matter?
Bogataj Habjan, Kristina; Pucihar, Andreja
2017-01-01
This paper presents the results of research investigating the impact of business model factors on cloud computing adoption. The introduced research model consists of 40 cloud computing business model factors, grouped into eight factor groups. Their impact and importance for cloud computing adoption were investigated among enterprises in Slovenia. Furthermore, differences in opinion according to enterprise size were investigated. Research results show no statistically significant impacts of in...
Size, Albedo, and Taxonomy of the Don Quijote Space Mission Target
Harris, Alan; Mueller, Michael; Fitzsimmons, Alan
2006-03-01
Rendezvous and lander missions are a very effective but very expensive way of investigating Solar-System bodies. The planning, optimization and success of space missions depends crucially on prior remotely-sensed knowledge of target bodies. Near-Earth asteroids (NEAs), which are mainly fragments of main-belt asteroids, are seen as important goals for investigation by space missions, mainly due to the role their forebears played in planet formation and the evolution of the Solar System, but also for the pragmatic reason that these objects can collide with the Earth with potentially devastating consequences. The European Space Agency is currently planning the Don Quijote mission to a NEA, which includes a rendezvous (and perhaps a lander) spacecraft and an impactor vehicle. The aim is to study the physical properties of the target asteroid and the effects of the impact on its dynamical state, as a first step in considering realistic mitigation measures against an eventual hazardous NEA. Two potential targets have been selected for the mission, the preferred one being (10302) 1989 ML, which is energetically easier to reach and is possibly a scientifically interesting primitive asteroid. However, due to the ambiguity of available spectral data, it is currently not possible to confidently determine the taxonomic type and mineralogy of this object. Crucially, the albedo is uncertain by a factor of 10, which leads to large uncertainties in the size and mass and hence the planned near-surface operations of Don Quijote. Thermal-infrared observations are urgently required for accurate size and albedo determination. These observations, which can only be carried out by Spitzer and would require only a modest amount of observing time, would enable an accurate diameter to be derived for the first time and the resulting albedo would remove the taxonomic ambiguity. The proposed Spitzer observations are critical for effective mission planning and would greatly increase our
Multivariable Wind Modeling in State Space
DEFF Research Database (Denmark)
Sichani, Mahdi Teimouri; Pedersen, B. J.
2011-01-01
Turbulence of the incoming wind field is of paramount importance to the dynamic response of wind turbines. Hence reliable stochastic models of the turbulence should be available from which time series can be generated for dynamic response and structural safety analysis. In the paper an empirical [...] Since the succeeding state space and ARMA modeling of the turbulence rely on the positive definiteness of the cross-spectral density matrix, the problem with the non-positive definiteness of such matrices is at first addressed and suitable treatments regarding it are proposed. From the adjusted positive definite cross-spectral density matrix, [...] for the vector turbulence process incorporating its phase spectrum in one stage, and its results are compared with a conventional ARMA modeling method.
A probabilistic model of RNA conformational space
DEFF Research Database (Denmark)
Frellsen, Jes; Moltke, Ida; Thiim, Martin
2009-01-01
The increasing importance of non-coding RNA in biology and medicine has led to a growing interest in the problem of RNA 3-D structure prediction. As is the case for proteins, RNA 3-D structure prediction methods require two key ingredients: an accurate energy function and a conformational sampling [...] , the discrete nature of the fragments necessitates the use of carefully tuned, unphysical energy functions, and their non-probabilistic nature impairs unbiased sampling. We offer a solution to the sampling problem that removes these important limitations: a probabilistic model of RNA structure that allows [...] efficient sampling of RNA conformations in continuous space, and with associated probabilities. We show that the model captures several key features of RNA structure, such as its rotameric nature and the distribution of the helix lengths. Furthermore, the model readily generates native-like 3-D [...]
Numerical Modeling of Ophthalmic Response to Space
Nelson, E. S.; Myers, J. G.; Mulugeta, L.; Vera, J.; Raykin, J.; Feola, A.; Gleason, R.; Samuels, B.; Ethier, C. R.
2015-01-01
To investigate ophthalmic changes in spaceflight, we would like to predict the impact of blood dysregulation and elevated intracranial pressure (ICP) on Intraocular Pressure (IOP). Unlike other physiological systems, there are very few lumped parameter models of the eye. The eye model described here is novel in its inclusion of the human choroid and retrobulbar subarachnoid space (rSAS), which are key elements in investigating the impact of increased ICP and ocular blood volume. Some ingenuity was required in modeling the blood and rSAS compartments due to the lack of quantitative data on essential hydrodynamic quantities, such as net choroidal volume and blood flowrate, inlet and exit pressures, and material properties, such as compliances between compartments.
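As a flavor of the lumped-parameter approach described, the sketch below implements only the classical single-compartment aqueous humor balance (a Goldmann-style model), not the authors' multi-compartment model with choroid and rSAS; the parameter values are textbook-typical assumptions, not theirs.

```python
def steady_state_iop(aqueous_inflow_ul_min, outflow_facility_ul_min_mmhg,
                     episcleral_venous_pressure_mmhg):
    """Goldmann-style steady state of a one-compartment eye model:
    aqueous inflow balances pressure-driven outflow."""
    return episcleral_venous_pressure_mmhg + \
        aqueous_inflow_ul_min / outflow_facility_ul_min_mmhg

def simulate_iop(iop0, inflow, facility, venous_p, compliance_ul_mmhg,
                 dt_min=0.1, t_end_min=300.0):
    """Explicit-Euler relaxation of dV/dt = inflow - facility*(IOP - Pv),
    with dIOP = dV / compliance (a single lumped compliance)."""
    iop = iop0
    t = 0.0
    while t < t_end_min:
        dv = (inflow - facility * (iop - venous_p)) * dt_min
        iop += dv / compliance_ul_mmhg
        t += dt_min
    return iop
```

Raising the downstream (venous) pressure shifts the steady state upward one-for-one, which is the kind of coupling the full model uses to study elevated ICP acting through the rSAS.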
Right-sizing statistical models for longitudinal data.
Wood, Phillip K; Steinley, Douglas; Jackson, Kristina M
2015-12-01
Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to "right-size" the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting, overly parsimonious models to more complex, better-fitting alternatives and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically underidentified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A 3-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation-covariation patterns. The orthogonal free curve slope intercept (FCSI) growth model is considered a general model that includes, as special cases, many models, including the factor mean (FM) model (McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, hierarchical linear models (HLMs), repeated-measures multivariate analysis of variance (MANOVA), and the linear slope intercept (linearSI) growth model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparing several candidate parametric growth and chronometric models in a Monte Carlo study.
A discrete-space urban model with environmental amenities
Liaila Tajibaeva; Robert G. Haight; Stephen Polasky
2008-01-01
This paper analyzes the effects of providing environmental amenities associated with open space in a discrete-space urban model and characterizes optimal provision of open space across a metropolitan area. The discrete-space model assumes distinct neighborhoods in which developable land is homogeneous within a neighborhood but heterogeneous across neighborhoods. Open...
Model catalysis by size-selected cluster deposition
Energy Technology Data Exchange (ETDEWEB)
Anderson, Scott [Univ. of Utah, Salt Lake City, UT (United States)
2015-11-20
This report summarizes the accomplishments during the last four years of the subject grant. Results are presented for experiments in which size-selected model catalysts were studied under surface science and aqueous electrochemical conditions. Strong effects of cluster size were found, and by correlating the size effects with size-dependent physical properties of the samples measured by surface science methods, it was possible to deduce mechanistic insights, such as the factors that control the rate-limiting step in the reactions. Results are presented for CO oxidation, CO binding energetics and geometries, and electronic effects under surface science conditions, and for the electrochemical oxygen reduction reaction, ethanol oxidation reaction, and for oxidation of carbon by water.
Sato, K. Y.; Tomko, D. L.; Levine, H. G.; Quincy, C. D.; Rayl, N. A.; Sowa, M. B.; Taylor, E. M.; Sun, S. C.; Kundrot, C. E.
2018-02-01
Model organisms are foundational for conducting physiological and systems biology research to define how life responds to the deep space environment. The organisms, areas of research, and Deep Space Gateway capabilities needed will be presented.
Integrated Space Asset Management Database and Modeling
Gagliano, L.; MacLeod, T.; Mason, S.; Percy, T.; Prescott, J.
The Space Asset Management Database (SAM-D) was implemented in order to effectively track known objects in space by ingesting information from a variety of databases and performing calculations to determine the expected position of each object at a specified time. While SAM-D performs this task very well, it is limited by technology and is not available outside of the local user base. Modeling and simulation can be powerful tools to exploit the information contained in SAM-D. However, the current system does not allow proper integration options for combining the data with both legacy and new M&S tools. A more capable data management infrastructure would extend SAM-D to support the larger data sets to be generated by the COI. A service-oriented architecture model will allow it to easily expand to incorporate new capabilities, including advanced analytics, M&S tools, fusion techniques and user interfaces for visualizations. Based on a web-centric approach, the entire COI will be able to access the data and related analytics. In addition, tight control of information sharing policy will increase confidence in the system, which would encourage industry partners to provide commercial data. SIMON is a Government off the Shelf information sharing platform in use throughout DoD and DHS information sharing and situation awareness communities. SIMON provides fine-grained control to data owners, allowing them to determine exactly how and when their data is shared. SIMON supports a micro-service approach to system development, meaning M&S and analytic services can be easily built or adapted. It is uniquely positioned to fill this need as an information-sharing platform with a proven track record of successful situational awareness system deployments. Combined with the integration of new and legacy M&S tools, a SIMON-based architecture will provide a robust SA environment for the NASA SA COI that can be extended and expanded indefinitely.
Large-size deployable construction heated by solar irradiation in free space
Pestrenina, Irena; Kondyurin, Alexey; Pestrenin, Valery; Kashin, Nickolay; Naymushin, Alexey
Large-size deployable construction in free space with subsequent direct curing was invented more than fifteen years ago (Briskman et al., 1997; Kondyurin, 1998). It raised a number of scientific problems, one of which is the possibility of using solar energy to initiate the curing reaction. This paper investigates the curing process under sun irradiation during a space flight in Earth orbits. A rotation of the construction is considered; this motion can provide the optimal temperature distribution in the construction that is required for the polymerization reaction. The cylindrical construction of 80 m length with two hemispherical ends of 10 m radius is considered. The wall of the construction, a 10 mm carbon fiber/epoxy matrix composite, is irradiated by heat flux from the sun and radiates heat from the external surface according to the Stefan-Boltzmann law. The stage of the polymerization reaction is calculated as a function of temperature and time, based on laboratory experiments with certified composite materials for space exploitation. The curing kinetics of the composite is calculated for Low Earth Orbits of different inclination (300 km altitude) and Geostationary Earth Orbit (40000 km altitude). The results show that: • the curing process depends strongly on the Earth orbit and the rotation of the construction; • an optimal flight orbit and rotation can be found that provide a thermal regime sufficient for the complete curing of the considered construction. The study is supported by RFBR grant No. 12-08-00970-a. 1. Briskman V., A. Kondyurin, K. Kostarev, V. Leontyev, M. Levkovich, A. Mashinsky, G. Nechitailo, T. Yudina, Polymerization in microgravity as a new process in space technology, Paper No IAA-97-IAA.12.1.07, 48th International Astronautical Congress, October 6-10, 1997, Turin, Italy. 2. Kondyurin A.V., Building the shells of large space stations by the polymerisation of epoxy composites in open space, Int. Polymer Sci. and Technol., v.25, N4
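A back-of-envelope radiative balance suggests why rotation matters for the thermal regime: a spinning cylinder absorbs sunlight over its projected area but radiates over its full circumference, so its equilibrium temperature is lower, by a factor of pi**0.25, than that of a sun-facing plate. The sketch below is a crude steady-state estimate that ignores Earth albedo and IR loads, internal conduction and reaction exotherm; the surface properties are assumed values, not the paper's.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR_FLUX = 1366.0     # approximate solar constant near Earth, W/m^2

def equilibrium_temperature(absorptivity, emissivity, area_ratio):
    """Steady-state temperature of a sunlit element in free space.

    Balances absorbed solar power (absorptivity * S * A_projected) against
    thermal radiation (emissivity * sigma * T^4 * A_radiating).
    area_ratio = A_projected / A_radiating, e.g.:
      1.0    flat plate radiating from the sunlit side only
      0.25   sphere (pi r^2 over 4 pi r^2)
      1/pi   spinning cylinder side wall (2 r L over 2 pi r L)
    """
    t4 = absorptivity * SOLAR_FLUX * area_ratio / (emissivity * SIGMA)
    return t4 ** 0.25

# Spinning cylinder with a gray surface (absorptivity = emissivity):
t_cyl = equilibrium_temperature(0.9, 0.9, 1.0 / 3.141592653589793)
```

With a gray surface the cylinder settles near room temperature (roughly 296 K), while a fixed sun-facing plate would run near 394 K; orbit and spin rate then modulate this balance, which is the effect the paper computes in detail.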
Linear Model for Optimal Distributed Generation Size Predication
Directory of Open Access Journals (Sweden)
Ahmed Al Ameri
2017-01-01
This article presents a linear model predicting the optimal size of Distributed Generation (DG) that minimizes power loss. The method is based fundamentally on the strong coupling between active power and voltage angle, as well as between reactive power and voltage magnitude. The paper proposes a simplified method to calculate the total power losses in an electrical grid for different distributed generation sizes and locations. The method has been implemented and tested on several IEEE bus test systems. The results show that the proposed method is capable of predicting the approximate optimal size of DG when compared with precision calculations. The method, which linearizes a complex model, showed good results and can actually reduce the processing time required. The acceptable accuracy with less time and memory required can help the grid operator to assess power systems integrating large-scale distributed generation.
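The idea of predicting the loss-minimizing DG size from a simplified model can be sketched on a toy single-line feeder: I^2 R losses are quadratic in the injected DG power, so the vertex of a parabola fitted through three loss samples recovers the optimum. The feeder parameters and unity power factor below are assumptions for illustration, not the paper's IEEE test systems or its linearized power-flow formulation.

```python
def feeder_losses(p_load_kw, p_dg_kw, r_ohm=0.5, v_kv=11.0):
    """I^2 R losses (kW) on a single radial line feeding a load, with a DG
    unit injecting power at the load bus (unity power factor assumed)."""
    p_line_kw = p_load_kw - p_dg_kw              # power still drawn over the line
    i_amp = p_line_kw * 1000.0 / (v_kv * 1000.0)
    return i_amp ** 2 * r_ohm / 1000.0

def predict_optimal_dg(losses_at, p1, p2, p3):
    """Vertex of the parabola through three equally spaced (size, loss)
    samples -- a simple quadratic predictor of the loss-minimizing size."""
    l1, l2, l3 = losses_at(p1), losses_at(p2), losses_at(p3)
    h = p2 - p1                                  # uniform spacing assumed
    a = (l3 - 2 * l2 + l1) / (2 * h * h)         # second-difference curvature
    b = (l3 - l1) / (2 * h) - 2 * a * p2         # central-difference slope
    return -b / (2 * a)
```

On this toy feeder the optimum is simply DG covering the whole local load, so no power flows over the line at all; on a meshed network the same quadratic structure holds only approximately, which is why the paper benchmarks against precise calculations.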
Analyzing ROC curves using the effective set-size model
Samuelson, Frank W.; Abbey, Craig K.; He, Xin
2018-03-01
The Effective Set-Size model has been used to describe uncertainty in various signal detection experiments. The model regards images as if they were an effective number (M*) of searchable locations, where the observer treats each location as a location-known-exactly detection task with signals having average detectability d'. The model assumes a rational observer behaves as if he searches an effective number of independent locations and follows signal detection theory at each location. Thus the location-known-exactly detectability (d') and the effective number of independent locations M* fully characterize search performance. In this model the image rating in a single-response task is assumed to be the maximum response that the observer would assign to these many locations. The model has been used by a number of other researchers, and is well corroborated. We examine this model as a way of differentiating imaging tasks that radiologists perform. Tasks involving more searching or location uncertainty may have higher estimated M* values. In this work we applied the Effective Set-Size model to a number of medical imaging data sets. The data sets include radiologists reading screening and diagnostic mammography with and without computer-aided diagnosis (CAD), and breast tomosynthesis. We developed an algorithm to fit the model parameters using two-sample maximum-likelihood ordinal regression, similar to the classic bi-normal model. The resulting model ROC curves are rational and fit the observed data well. We find that the distributions of M* and d' differ significantly among these data sets, and differ between pairs of imaging systems within studies. For example, on average tomosynthesis increased readers' d' values, while CAD reduced the M* parameters. We demonstrate that the model parameters M* and d' are correlated. We conclude that the Effective Set-Size model may be a useful way of differentiating location uncertainty from the diagnostic uncertainty in medical
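The generative assumption of the Effective Set-Size model, that a single-image rating is the maximum of M* independent location responses, one of which carries the signal with detectability d', can be sketched with a Monte Carlo simulation. This illustrates how larger M* (more location uncertainty) degrades the ROC at fixed d'; it is not the maximum-likelihood ordinal-regression fit the authors describe.

```python
import random

def ess_rating(m_star, d_prime, signal_present, rng):
    """One image rating under the Effective Set-Size model: the maximum of
    M* unit-variance location responses; on signal-present images one
    location's mean is shifted by d'."""
    responses = [rng.gauss(0.0, 1.0) for _ in range(m_star)]
    if signal_present:
        responses[0] += d_prime
    return max(responses)

def empirical_auc(m_star, d_prime, n=20000, seed=7):
    """Monte Carlo AUC: probability a signal rating exceeds a noise rating."""
    rng = random.Random(seed)
    sig = sorted(ess_rating(m_star, d_prime, True, rng) for _ in range(n))
    noi = sorted(ess_rating(m_star, d_prime, False, rng) for _ in range(n))
    wins, j = 0, 0
    for s in sig:                      # merge-count pairs with sig > noise
        while j < len(noi) and noi[j] < s:
            j += 1
        wins += j
    return wins / (len(sig) * len(noi))
```

For M* = 1 this reduces to the classic equal-variance detection task; increasing M* at fixed d' lowers the AUC, which is the separation of search uncertainty from diagnostic uncertainty the abstract describes.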
Model-Based Trade Space Exploration for Near-Earth Space Missions
Cohen, Ronald H.; Boncyk, Wayne; Brutocao, James; Beveridge, Iain
2005-01-01
We developed a capability for model-based trade space exploration to be used in the conceptual design of Earth-orbiting space missions. We have created a set of reusable software components to model various subsystems and aspects of space missions. Several example mission models were created to test the tools and process. This technique and toolset has demonstrated itself to be valuable for space mission architectural design.
Space Surveillance Network and Analysis Model (SSNAM) Performance Improvements
National Research Council Canada - National Science Library
Butkus, Albert; Roe, Kevin; Mitchell, Barbara L; Payne, Timothy
2007-01-01
... capacity by sensor, models for sensors yet to be created, user defined weather conditions, National Aeronautical and Space Administration catalog growth model including space debris, and solar flux just to name a few...
Modeling Physarum space exploration using memristors
International Nuclear Information System (INIS)
Ntinas, V; Sirakoulis, G Ch; Vourkas, I; Adamatzky, A I
2017-01-01
Slime mold Physarum polycephalum optimizes its foraging behaviour by minimizing the distances between the sources of nutrients it spans. When two sources of nutrients are present, the slime mold connects the sources, with its protoplasmic tubes, along the shortest path. We present a two-dimensional mesh grid memristor based model as an approach to emulate Physarum’s foraging strategy, which includes space exploration and reinforcement of the optimally formed interconnection network in the presence of multiple aliment sources. The proposed algorithmic approach utilizes memristors and LC contours and is tested in two of the most popular computational challenges for Physarum, namely maze and transportation networks. Furthermore, the presented model is enriched with the notion of noise presence, which positively contributes to a collective behavior and enables us to move from deterministic to robust results. Consequently, the corresponding simulation results manage to reproduce, in a much better qualitative way, the expected transportation networks. (paper)
Space weather: Modeling and forecasting ionospheric
International Nuclear Information System (INIS)
Calzadilla Mendez, A.
2008-01-01
Full text: Space weather is the set of phenomena and interactions that take place in the interplanetary medium. It is driven primarily by activity originating in the Sun and affects both the artificial satellites that are outside the protective cover of the Earth's atmosphere and the rest of the planets in the solar system. Among the phenomena that are of great relevance and impact on Earth are the auroras and geomagnetic storms; these are a direct result of irregularities in the flow of the solar wind and the interplanetary magnetic field. Given the high complexity of the physical phenomena involved (magnetic reconnection, particle entry and ionizing radiation into the atmosphere), one of the great scientific challenges today is to forecast the state of the plasma media, whether the interplanetary medium, the magnetosphere or the ionosphere, given their importance to the development of various human activities such as radio, global positioning, navigation, etc. The paper briefly addresses some of the international ionospheric modeling methods, and the contributions and participation that the space group of the Institute of Geophysics and Astronomy (IGA) currently has in these activities of ionospheric modeling and forecasting. (author)
Evolutionary model of the growth and size of firms
Kaldasch, Joachim
2012-07-01
The key idea of this model is that firms are the result of an evolutionary process. Based on demand and supply considerations, the evolutionary model presented here explicitly derives Gibrat's law of proportionate effects as the result of competition between products. Applying a preferential attachment mechanism for firms, the theory makes it possible to establish the size distribution of products and firms. Also established are the growth rate and price distribution of consumer goods. Taking into account the characteristic property of human activities to occur in bursts, the model also explains the size-variance relationship of the growth rate distribution of products and firms. Further, the product life cycle, the learning (experience) curve and the market size, in terms of the mean number of firms that can survive in a market, are derived. The model also suggests the existence of a market invariant: the ratio of total profit to total revenue. The relationship between a neo-classical and an evolutionary view of a market is discussed. The comparison with empirical investigations suggests that the theory is able to describe the main stylized facts concerning the size and growth of firms.
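Gibrat's law of proportionate effects, which the model derives, can be sketched directly: when each firm's size is multiplied each period by an i.i.d. growth factor independent of current size, log sizes perform a random walk, giving an approximately lognormal size distribution whose log-variance grows linearly in time. The parameters below are illustrative; the paper's preferential-attachment and burst mechanisms are not reproduced.

```python
import random

def gibrat_log_sizes(n_firms=5000, n_steps=200, mu=0.0, sigma=0.05, seed=3):
    """Proportionate-effects growth: each period a firm's size is multiplied
    by an i.i.d. lognormal factor, so log size takes an additive Gaussian
    step. Returns the final log sizes (all firms start at size 1)."""
    rng = random.Random(seed)
    logs = [0.0] * n_firms
    for _ in range(n_steps):
        logs = [s + rng.gauss(mu, sigma) for s in logs]
    return logs

def mean_var(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / len(xs)
```

After n_steps periods the cross-sectional variance of log size is close to n_steps * sigma**2, the linear growth that makes the stationary size distribution broad and right-skewed.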
Influence of horizontal resolution and ensemble size on model performance
CSIR Research Space (South Africa)
Dalton, A
2014-10-01
Conference of South African Society for Atmospheric Sciences (SASAS), Potchefstroom, 1-2 October 2014. Amaris Dalton, Willem A. Landman, Department of Geography, Geo...
Finite-Size Effects for Some Bootstrap Percolation Models
Enter, A.C.D. van; Adler, Joan; Duarte, J.A.M.S.
The consequences of Schonmann's new proof that the critical threshold is unity for certain bootstrap percolation models are explored. It is shown that this proof provides an upper bound for the finite-size scaling in these systems. Comparison with data for one case demonstrates that this scaling
Space Weather Models at the CCMC And Their Capabilities
Hesse, Michael; Rastatter, Lutz; MacNeice, Peter; Kuznetsova, Masha
2007-01-01
The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aimed at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides researchers with access to space science models, even if they are not model owners themselves. The second focus of CCMC activities is on validation and verification of space weather models, and on the transition of appropriate models to space weather forecast centers. As part of the latter activity, the CCMC develops real-time simulation systems that stress models through routine execution. A by-product of these real-time calculations is the ability to derive model products which may be useful for space weather operators. In this presentation, we will provide an overview of the community-provided, space weather-relevant model suite that resides at the CCMC. We will discuss current capabilities and analyze expected future developments of space weather related modeling.
Parabolic Free Boundary Price Formation Models Under Market Size Fluctuations
Markowich, Peter A.
2016-10-04
In this paper we propose an extension of the Lasry-Lions price formation model which includes fluctuations of the numbers of buyers and vendors. We analyze the model in the case of deterministic and stochastic market size fluctuations and present results on the long time asymptotic behavior and numerical evidence and conjectures on periodic, almost periodic, and stochastic fluctuations. The numerical simulations extend the theoretical statements and give further insights into price formation dynamics.
International Space Station Model Correlation Analysis
Laible, Michael R.; Fitzpatrick, Kristin; Hodge, Jennifer; Grygier, Michael
2018-01-01
This paper summarizes the on-orbit structural dynamic data and the related modal analysis, model validation and correlation performed for the International Space Station (ISS) configuration ISS Stage ULF7, 2015 Dedicated Thruster Firing (DTF). The objective of this analysis is to validate and correlate the analytical models used to calculate the ISS internal dynamic loads and compare the 2015 DTF with previous tests. During the ISS configurations under consideration, on-orbit dynamic measurements were collected using the three main ISS instrumentation systems; Internal Wireless Instrumentation System (IWIS), External Wireless Instrumentation System (EWIS) and the Structural Dynamic Measurement System (SDMS). The measurements were recorded during several nominal on-orbit DTF tests on August 18, 2015. Experimental modal analyses were performed on the measured data to extract modal parameters including frequency, damping, and mode shape information. Correlation and comparisons between test and analytical frequencies and mode shapes were performed to assess the accuracy of the analytical models for the configurations under consideration. These mode shapes were also compared to earlier tests. Based on the frequency comparisons, the accuracy of the mathematical models is assessed and model refinement recommendations are given. In particular, results of the first fundamental mode will be discussed, nonlinear results will be shown, and accelerometer placement will be assessed.
Space-to-Ground Quantum Key Distribution Using a Small-Sized Payload on Tiangong-2 Space Lab
Liao, Sheng-Kai; Lin, Jin; Ren, Ji-Gang; Liu, Wei-Yue; Qiang, Jia; Yin, Juan; Li, Yang; Shen, Qi; Zhang, Liang; Liang, Xue-Feng; Yong, Hai-Lin; Li, Feng-Zhi; Yin, Ya-Yun; Cao, Yuan; Cai, Wen-Qi; Zhang, Wen-Zhuo; Jia, Jian-Jun; Wu, Jin-Cai; Chen, Xiao-Wen; Zhang, Shan-Cong; Jiang, Xiao-Jun; Wang, Jian-Feng; Huang, Yong-Mei; Wang, Qiang; Ma, Lu; Li, Li; Pan, Ge-Sheng; Zhang, Qiang; Chen, Yu-Ao; Lu, Chao-Yang; Liu, Nai-Le; Ma, Xiongfeng; Shu, Rong; Peng, Cheng-Zhi; Wang, Jian-Yu; Pan, Jian-Wei
2017-08-01
Supported by China Manned Space Program, Technology and Engineering Center for Space Utilization Chinese Academy of Sciences, Chinese Academy of Sciences, and the National Natural Science Foundation of China.
Ports: Definition and study of types, sizes and business models
Ivan Roa; Yessica Peña; Beatriz Amante; María Goretti
2013-01-01
Purpose: In the world today there are thousands of port facilities of different types and sizes, competing to capture market share of freight carried mainly by sea. This article aims to determine the most common port types and sizes, in order to find out which business model is applied in that segment and what the legal status of the companies operating such infrastructure is. Design/methodology/approach: To achieve this goal, we conducted research on a representative sample of 800 ports worldwide...
Directory of Open Access Journals (Sweden)
Ghanshyam G. Tejani
2018-04-01
In this study, simultaneous size, shape, and topology optimization of planar and space trusses is investigated. The trusses are subjected to constraints on element stresses, nodal displacements, and kinematic stability conditions. Truss Topology Optimization (TTO) removes superfluous elements and nodes from the ground structure. In this method, difficulties arise due to unacceptable and singular topologies; therefore, Grubler's criterion and positive definiteness checks are used to handle such issues. Moreover, TTO is challenging because its search space is implicit, non-convex and non-linear, often leading to divergence. Therefore, mutation-based metaheuristics are proposed to investigate them. This study compares the performance of four improved metaheuristics (viz. Improved Teaching-Learning-Based Optimization (ITLBO), Improved Heat Transfer Search (IHTS), Improved Water Wave Optimization (IWWO), and Improved Passing Vehicle Search (IPVS)) and four basic metaheuristics (viz. TLBO, HTS, WWO, and PVS) in order to solve structural optimization problems. Keywords: Structural optimization, Mutation operator, Improved metaheuristics, Modified algorithms, Truss topology optimization
The critical domain size of stochastic population models.
Reimer, Jody R; Bonsall, Michael B; Maini, Philip K
2017-02-01
Identifying the critical domain size necessary for a population to persist is an important question in ecology. Both demographic and environmental stochasticity impact a population's ability to persist. Here we explore ways of including this variability. We study populations with distinct dispersal and sedentary stages, which have traditionally been modelled using a deterministic integrodifference equation (IDE) framework. Individual-based models (IBMs) are the most intuitive stochastic analogues to IDEs but yield few analytic insights. We explore two alternate approaches; one is a scaling up to the population level using the Central Limit Theorem, and the other a variation on both Galton-Watson branching processes and branching processes in random environments. These branching process models closely approximate the IBM and yield insight into the factors determining the critical domain size for a given population subject to stochasticity.
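The branching-process view can be sketched with the textbook Galton-Watson result: the extinction probability is the smallest fixed point of the offspring probability generating function, so persistence requires a mean offspring number above one. In the spatial setting, the mean number of surviving offspring per individual grows with domain size (fewer dispersers are lost over the boundary), which is what sets the critical domain size. Poisson offspring is an illustrative choice here, not the paper's model.

```python
import math

def extinction_probability(mean_offspring, tol=1e-12, max_iter=10000):
    """Extinction probability of a Galton-Watson process with Poisson(m)
    offspring: the smallest fixed point of q = exp(m * (q - 1)), found by
    iterating the generating function from q = 0."""
    q = 0.0
    for _ in range(max_iter):
        q_new = math.exp(mean_offspring * (q - 1.0))
        if abs(q_new - q) < tol:
            return q_new
        q = q_new
    return q

def persistence_probability(mean_offspring):
    """Probability a lineage started by one individual never dies out."""
    return 1.0 - extinction_probability(mean_offspring)
```

For a subcritical mean (m <= 1) extinction is certain, while a supercritical mean leaves a positive persistence probability; a branching process in random environments replaces the fixed m with a random sequence, capturing the environmental stochasticity discussed above.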
A State Space Model for the Wood Chip Refining Model
Directory of Open Access Journals (Sweden)
David Di Ruscio
1997-07-01
A detailed dynamic model of the fibre size distribution between the refiner discs, distributed along the refiner radius, is presented. Both one- and two-dimensional descriptions of the fibre or shive geometry are given. It is shown that this model may be simplified and that analytic solutions exist under non-restrictive assumptions. A direct method for the recursive estimation of unknown parameters is presented. This method is applicable to linear or linearized systems which have a triangular structure.
Modeling, Analysis, and Optimization Issues for Large Space Structures
Pinson, L. D. (Compiler); Amos, A. K. (Compiler); Venkayya, V. B. (Compiler)
1983-01-01
Topics concerning the modeling, analysis, and optimization of large space structures are discussed including structure-control interaction, structural and structural dynamics modeling, thermal analysis, testing, and design.
Sklar, L. S.; Mahmoudi, M.
2016-12-01
Landscape evolution models rarely represent sediment size explicitly, despite the importance of sediment size in regulating rates of bedload sediment transport, river incision into bedrock, and many other processes in channels and on hillslopes. A key limitation has been the lack of a general model for predicting the size of sediments produced on hillslopes and supplied to channels. Here we present a framework for such a model, as a first step toward building a `geomorphic transport law' that balances mechanistic realism with computational simplicity and is widely applicable across diverse landscapes. The goal is to take as inputs landscape-scale boundary conditions such as lithology, climate and tectonics, and predict the spatial variation in the size distribution of sediments supplied to channels across catchments. The model framework has two components. The first predicts the initial size distribution of particles produced by erosion of bedrock underlying hillslopes, while the second accounts for the effects of physical and chemical weathering during transport down slopes and delivery to channels. The initial size distribution can be related to the spacing and orientation of fractures within bedrock, which depend on the stresses and deformation experienced during exhumation and on rock resistance to fracture propagation. Other controls on initial size include the sizes of mineral grains in crystalline rocks, the sizes of cemented particles in clastic sedimentary rocks, and the potential for characteristic size distributions produced by tree throw, frost cracking, and other erosional processes. To model how weathering processes transform the initial size distribution we consider the effects of erosion rate and the thickness of soil and weathered bedrock on hillslope residence time. Residence time determines the extent of size reduction, for given values of model terms that represent the potential for chemical and physical weathering. Chemical weathering potential
Model space dimensionalities for multiparticle fermion systems
International Nuclear Information System (INIS)
Draayer, J.P.; Valdes, H.T.
1985-01-01
A menu-driven program for determining the dimensionalities of fixed-(J) [or (J,T)] model spaces built by distributing identical fermions (electrons, neutrons, protons) or two distinguishable fermion types (neutron-proton and isospin formalisms) among any mixture of positive and negative parity spherical orbitals is presented. The algorithm, built around the elementary difference formula d(J) = d(M=J) - d(M=J+1), takes full advantage of the M -> -M and particle-hole symmetries. A 96 K version of the program suffices for as complicated a case as d[(+1/2, +3/2, +5/2, +7/2, -11/2)^(n=26), J=2+, T=7] = 210,442,716,722, found in the 0ħω valence space of ¹²⁶Ba (Z=56, N=70). The program calculates the total fixed-(J^π) or fixed-(J^π,T) dimensionality of a model space generated by distributing a specified number of fermions among a set of input positive and negative parity (π) spherical (j) orbitals. The user is queried at each step to select among various options: 1. formalism - identical particle, neutron-proton, isospin; 2. orbits - number and ±2*J of all orbits; 3. limits - minimum/maximum number of particles of each parity; 4. specifics - number of particles, ±2*J (total), 2*T; 5. continue - same orbit structure, new case, or quit. Though designed for nuclear applications (jj coupling), the program can be used in the atomic case (LS coupling) so long as half-integer spin values (j = l ± 1/2) are input for the valence orbitals. Multiple occurrences of a given j value are properly taken into account. A minor extension provides labelling information for a generalized seniority classification scheme. The program logic is an adaptation of methods used in statistical spectroscopy to evaluate configuration averages. Indeed, the need for fixed-symmetry level densities in spectral distribution theory motivated this work. (orig.)
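The difference formula d(J) = d(M=J) - d(M=J+1) at the heart of the program can be sketched in a few lines. The following is an illustrative re-implementation for identical fermions only (brute-force enumeration, none of the menu options or symmetry shortcuts of the original code):

```python
from itertools import combinations

def fixed_j_dimensions(two_j_list, n):
    """Fixed-J dimensionalities via d(J) = d(M=J) - d(M=J+1).

    two_j_list: 2*j of each single-particle orbital (e.g. [7] for a j=7/2 shell);
    n: number of identical fermions.  Enumerates all Pauli-allowed n-particle
    Slater determinants, tallies the total 2*M, then differences the M-counts.
    """
    # single-particle states: each orbital j contributes m = -j, ..., +j
    sp_states = []
    for orb, two_j in enumerate(two_j_list):
        for two_m in range(-two_j, two_j + 1, 2):
            sp_states.append((orb, two_m))
    # tally d(M) over all antisymmetric (distinct-state) configurations
    d_of_2M = {}
    for combo in combinations(sp_states, n):
        two_M = sum(two_m for _, two_m in combo)
        d_of_2M[two_M] = d_of_2M.get(two_M, 0) + 1
    # apply the difference formula on non-negative M (M -> -M symmetry)
    dims = {}
    two_J_max = max(d_of_2M)
    for two_J in range(two_J_max % 2, two_J_max + 1, 2):
        dims[two_J / 2] = d_of_2M.get(two_J, 0) - d_of_2M.get(two_J + 2, 0)
    return dims
```

For two identical fermions in a j = 7/2 shell this reproduces the textbook result that only J = 0, 2, 4, 6 occur, each once.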
Convergence of surface diffusion parameters with model crystal size
Cohen, Jennifer M.; Voter, Arthur F.
1994-07-01
A study of the variation in the calculated quantities for adatom diffusion with respect to the size of the model crystal is presented. The reported quantities include surface diffusion barrier heights, pre-exponential factors, and dynamical correction factors. Embedded atom method (EAM) potentials were used throughout this effort. Both the layer size and the depth of the crystal were found to influence the values of the Arrhenius factors significantly. In particular, exchange type mechanisms required a significantly larger model than standard hopping mechanisms to determine adatom diffusion barriers of equivalent accuracy. The dynamical events that govern the corrections to transition state theory (TST) did not appear to be as sensitive to crystal depth. Suitable criteria for the convergence of the diffusion parameters with regard to the rate properties are illustrated.
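The Arrhenius quantities whose size-convergence is studied here combine into a transition-state-theory hop rate. A minimal sketch (the parameter values below are illustrative placeholders, not EAM results):

```python
from math import exp

K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def hop_rate(prefactor_hz, barrier_ev, temperature_k):
    """Transition-state-theory adatom hop rate k = nu0 * exp(-Ea / (kB * T)).

    prefactor_hz (nu0) and barrier_ev (Ea) are the Arrhenius factors whose
    convergence with model-crystal size the study examines; a dynamical
    correction factor would multiply this TST estimate.
    """
    return prefactor_hz * exp(-barrier_ev / (K_B_EV * temperature_k))
```

Because the rate depends exponentially on the barrier, even a small size-convergence error in Ea (e.g. from too shallow a model crystal) translates into a large error in the predicted diffusion rate, which is why exchange mechanisms demand larger models.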
Kahl, Wolf-Achim; Holzheid, Astrid
2013-04-01
The geometry and internal structures of sandstone reservoirs, like grain size, sorting, degree of bioturbation, and the history of diagenetic alterations, determine the quantity, flow rates, and recovery of hydrocarbons present in the pore space. In this respect, processes influencing deep reservoir quality in sandstones are either of depositional, shallow diagenetic, or deep-burial origin. To assess the effect of compaction and cementation on the pore space during diagenesis, we investigated a set of sandstone samples using high-resolution microtomography (µ-CT). By high-resolution µ-CT, size distributions (in 2D and 3D), surface areas and volume fractions of the grain skeleton and pore space of sandstones and, in addition, of mineral powders have been determined. For this study, we analysed aliquots of sandstones that exhibit either complete, partial or no cementation of the pore space, and sets of mineral powders (quartz, feldspar, calcite). As the resolution of the µ-CT scans is in the µm range, the surface areas determined for sandstones and powders detect the geometric surface of the material (Kahl & Holzheid, 2010). Since there are differing approaches to "size" parameters, e.g. long/short particle axes, area-equivalent radius, Feret diameter (2D), and structural thickness (3D), we decided to illustrate the effect of various size determinations for (a) single grains, (b) grain skeletons, and (c) pore space. Therefore, the computer-aided morphometric analysis of the segmented 3D models of the reconstructed scan images comprises versatile calculation algorithms. For example, the size distribution of the pore space of partially cemented sandstones can be used to infer the timing of the formation of the cement with respect to tectonic/diagenetic activities. In the case of a late-stage partial cementation of a Bunter sandstone, both pore space and cement phase show identical size distributions. On the contrary, the anhydrite cement of a
Towards modeling intergranular stress corrosion cracks on grain size scales
International Nuclear Information System (INIS)
Simonovski, Igor; Cizelj, Leon
2012-01-01
Highlights: ► Simulating the onset and propagation of intergranular cracking. ► Model based on the as-measured geometry and crystallographic orientations. ► Feasibility, performance of the proposed computational approach demonstrated. - Abstract: Development of advanced models at the grain size scales has so far been mostly limited to simulated geometry structures such as for example 3D Voronoi tessellations. The difficulty came from a lack of non-destructive techniques for measuring the microstructures. In this work a novel grain-size scale approach for modelling intergranular stress corrosion cracking based on as-measured 3D grain structure of a 400 μm stainless steel wire is presented. Grain topologies and crystallographic orientations are obtained using a diffraction contrast tomography, reconstructed within a detailed finite element model and coupled with advanced constitutive models for grains and grain boundaries. The wire is composed of 362 grains and over 1600 grain boundaries. Grain boundary damage initialization and early development is then explored for a number of cases, ranging from isotropic elasticity up to crystal plasticity constitutive laws for the bulk grain material. In all cases the grain boundaries are modeled using the cohesive zone approach. The feasibility of the approach is explored.
Micron-sized and submicron-sized aerosol deposition in a new ex vivo preclinical model.
Perinel, Sophie; Leclerc, Lara; Prévôt, Nathalie; Deville, Agathe; Cottier, Michèle; Durand, Marc; Vergnon, Jean-Michel; Pourchez, Jérémie
2016-07-07
The knowledge of where particles deposit in the respiratory tract is crucial for understanding the health effects associated with inhaled drug particles. An ex vivo study was conducted to assess regional deposition patterns (thoracic vs. extrathoracic) of radioactive polydisperse aerosols with different size ranges [0.15 μm-0.5 μm], [0.25 μm-1 μm] and [1 μm-9 μm]. SPECT/CT analyses were additionally performed in order to assess more precisely the regional deposition of aerosols within the pulmonary tract. Experiments were set up using an original respiratory tract model composed of a human plastinated head connected to an ex vivo porcine pulmonary tract. The model was ventilated by passive expansion, simulating pleural depressions. Aerosol was administered during nasal breathing. Planar scintigraphies allowed calculation of the deposited aerosol fractions for particles in the three size ranges, from sub-micron to micron. The deposited fractions obtained, for thoracic vs. extrathoracic regions respectively, were 89 ± 4% vs. 11 ± 4% for [0.15 μm-0.5 μm], 78 ± 5% vs. 22 ± 5% for [0.25 μm-1 μm] and 35 ± 11% vs. 65 ± 11% for [1 μm-9 μm]. Results obtained with this new ex vivo respiratory tract model are in good agreement with in vivo data obtained in studies with baboons and humans.
SOFTCOST - DEEP SPACE NETWORK SOFTWARE COST MODEL
Tausworthe, R. C.
1994-01-01
The early-on estimation of required resources and a schedule for the development and maintenance of software is usually the least precise aspect of the software life cycle. However, it is desirable to make some sort of an orderly and rational attempt at estimation in order to plan and organize an implementation effort. The Software Cost Estimation Model program, SOFTCOST, was developed to provide a consistent automated resource and schedule model which is more formalized than the often used guesswork model based on experience, intuition, and luck. SOFTCOST was developed after the evaluation of a number of existing cost estimation programs indicated that there was a need for a cost estimation program with a wide range of application and adaptability to diverse kinds of software. SOFTCOST combines several software cost models found in the open literature into one comprehensive set of algorithms that compensate for nearly fifty implementation factors relative to size of the task, inherited baseline, organizational and system environment, and difficulty of the task. SOFTCOST produces mean and variance estimates of software size, implementation productivity, recommended staff level, probable duration, amount of computer resources required, and amount and cost of software documentation. Since the confidence level for a project using mean estimates is small, the user is given the opportunity to enter risk-biased values for effort, duration, and staffing, to achieve higher confidence levels. SOFTCOST then produces a PERT/CPM file with subtask efforts, durations, and precedences defined so as to produce the Work Breakdown Structure (WBS) and schedule having the asked-for overall effort and duration. The SOFTCOST program operates in an interactive environment prompting the user for all of the required input. The program builds the supporting PERT data base in a file for later report generation or revision. The PERT schedule and the WBS schedule may be printed and stored in a
Ports: Definition and study of types, sizes and business models
Directory of Open Access Journals (Sweden)
Ivan Roa
2013-09-01
Full Text Available Purpose: In the world today there are thousands of port facilities of different types and sizes, competing to capture market share of freight carried mainly by sea. This article aims to determine the most common port type and size, in order to find out which business model is applied in that segment and what the legal status of the companies managing such infrastructure is. Design/methodology/approach: To achieve this goal, we developed research on a representative sample of 800 ports worldwide, which handle 90% of containerized port cargo, and then determined the legal status of the companies that manage them. Findings: The results indicate a dominant port type and size, mostly managed by companies subject to a concession model. Research limitations/implications: In this research, we study only those ports that handle freight (basically containerized), ignoring other activities such as fishing, military, tourism or recreational uses. Originality/value: This investigation shows that the vast majority of port facilities in the studied segment are governed by a similar corporate model and subject to pressure from markets that increasingly demand efficiency and service. Consequently, terminals tend to be conceded to private operators in a process that might be called privatization, although in the strictest sense of the term this is not entirely accurate, because ownership of the land never ceases to be public.
Shape, size, and robustness: feasible regions in the parameter space of biochemical networks.
Directory of Open Access Journals (Sweden)
Adel Dayarian
2009-01-01
Full Text Available The concept of robustness of regulatory networks has received much attention in the last decade. One measure of robustness has been associated with the volume of the feasible region, namely, the region in the parameter space in which the system is functional. In this paper, we show that, in addition to volume, the geometry of this region has important consequences for the robustness and the fragility of a network. We develop an approximation within which we could algebraically specify the feasible region. We analyze the segment polarity gene network to illustrate our approach. The study of random walks in the parameter space and how they exit the feasible region provides us with a rich perspective on the different modes of failure of this network model. In particular, we found that, between two alternative ways of activating Wingless, one is more robust than the other. Our method provides a more complete measure of robustness to parameter variation. As a general modeling strategy, our approach is an interesting alternative to Boolean representation of biochemical networks.
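The random-walk probe of the feasible region can be sketched generically. In this sketch the feasibility predicate, step size, and toy square region are hypothetical stand-ins for the segment polarity network's parameter space:

```python
import random

def exit_point(feasible, start, step=0.05, max_steps=100000):
    """Random walk in parameter space until it first leaves the feasible region.

    feasible: predicate on a parameter vector; start: a point inside the region.
    Returns the first infeasible point reached, whose location indicates the
    'face' through which the walk exits, i.e. the mode of failure.
    """
    x = list(start)
    for _ in range(max_steps):
        y = [xi + random.gauss(0.0, step) for xi in x]
        if not feasible(y):
            return y
        x = y
    return None  # never exited within the step budget

# toy feasible region: the unit square in a 2-parameter space
unit_square = lambda v: 0.0 <= v[0] <= 1.0 and 0.0 <= v[1] <= 1.0
p = exit_point(unit_square, [0.5, 0.5])
```

Repeating many such walks and tallying which boundary is crossed most often gives an empirical picture of which parameter combinations are fragile, independent of the region's volume.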
Species distribution model transferability and model grain size - finer may not always be better.
Manzoor, Syed Amir; Griffiths, Geoffrey; Lukac, Martin
2018-05-08
Species distribution models have been used to predict the distribution of invasive species for conservation planning. Understanding the spatial transferability of niche predictions is critical to promote species-habitat conservation and to forecast areas vulnerable to invasion. Grain size of predictor variables is an important factor affecting the accuracy and transferability of species distribution models. Choice of grain size is often dependent on the type of predictor variables used, and the selection of predictors sometimes relies on data availability. This study employed the MAXENT species distribution model to investigate the effect of grain size on model transferability for an invasive plant species. We modelled the distribution of Rhododendron ponticum in Wales, U.K. and tested model performance and transferability by varying grain size (50 m, 300 m, and 1 km). MAXENT-based models are sensitive to grain size and selection of variables. We found that over-reliance on the commonly used bioclimatic variables may lead to less accurate models, as it often compromises the finer grain size of biophysical variables which may be more important determinants of species distribution at small spatial scales. Model accuracy is likely to increase with decreasing grain size. However, successful model transferability may require optimization of model grain size.
Modelling of air-conditioned and heated spaces
Energy Technology Data Exchange (ETDEWEB)
Moehl, U
1987-01-01
A space represents a complex system involving numerous components, manipulated variables and disturbances which need to be described if the dynamic behaviour of the space air is to be determined. A justifiable amount of simulation input is achieved by adapted modelling of the individual components. The determination of natural air exchange in heated spaces and of space-air flow in air-conditioned spaces is a primary source of uncertainty. (orig.)
Liu, Chenbin; Schild, Steven E; Chang, Joe Y; Liao, Zhongxing; Korte, Shawn; Shen, Jiajian; Ding, Xiaoning; Hu, Yanle; Kang, Yixiu; Keole, Sameer R; Sio, Terence T; Wong, William W; Sahoo, Narayan; Bues, Martin; Liu, Wei
2018-06-01
To investigate how spot size and spacing affect plan quality, robustness, and interplay effects of robustly optimized intensity modulated proton therapy (IMPT) for lung cancer. Two robustly optimized IMPT plans were created for 10 lung cancer patients: first by a large-spot machine with in-air energy-dependent large spot size at isocenter (σ: 6-15 mm) and spacing (1.3 σ), and second by a small-spot machine with in-air energy-dependent small spot size (σ: 2-6 mm) and spacing (5 mm). Both plans were generated by optimizing radiation dose to the internal target volume on averaged 4-dimensional computed tomography scans using an in-house-developed IMPT planning system. The dose-volume histogram band method was used to evaluate plan robustness. Dose evaluation software was developed to model time-dependent spot delivery to incorporate interplay effects with randomized starting phases for each field per fraction. Patient anatomy voxels were mapped phase-to-phase via deformable image registration, and doses were scored using in-house-developed software. Dose-volume histogram indices, including internal target volume dose coverage, homogeneity, and organs at risk (OARs) sparing, were compared using the Wilcoxon signed-rank test. Compared with the large-spot machine, the small-spot machine resulted in significantly lower heart and esophagus mean doses, with comparable target dose coverage, homogeneity, and protection of other OARs. Plan robustness was comparable for targets and most OARs. With interplay effects considered, significantly lower heart and esophagus mean doses with comparable target dose coverage and homogeneity were observed using smaller spots. Robust optimization with a small-spot machine significantly improves heart and esophagus sparing, with comparable plan robustness and interplay effects compared with robust optimization with a large-spot machine. A small-spot machine uses a larger number of spots to cover the same tumors compared with a large
Modeling Dispersion of Chemical-Biological Agents in Three Dimensional Living Space
International Nuclear Information System (INIS)
William S. Winters
2002-01-01
This report documents a series of calculations designed to demonstrate Sandia's capability in modeling the dispersal of chemical and biological agents in complex three-dimensional spaces. The transport of particles representing biological agents is modeled in a single room and in several connected rooms. The influence of particle size, particle weight and injection method are studied
Chulaki, A.; Kuznetsova, M. M.; Rastaetter, L.; MacNeice, P. J.; Shim, J. S.; Pulkkinen, A. A.; Taktakishvili, A.; Mays, M. L.; Mendoza, A. M. M.; Zheng, Y.; Mullinix, R.; Collado-Vega, Y. M.; Maddox, M. M.; Pembroke, A. D.; Wiegand, C.
2015-12-01
Community Coordinated Modeling Center (CCMC) is a NASA affiliated interagency partnership with the primary goal of aiding the transition of modern space science models into space weather forecasting while supporting space science research. Additionally, over the past ten years it has established itself as a global space science education resource supporting undergraduate and graduate education and research, and spreading space weather awareness worldwide. A unique combination of assets, capabilities and close ties to the scientific and educational communities enable this small group to serve as a hub for raising generations of young space scientists and engineers. CCMC resources are publicly available online, providing unprecedented global access to the largest collection of modern space science models (developed by the international research community). CCMC has revolutionized the way simulations are utilized in classroom settings, student projects, and scientific labs and serves hundreds of educators, students and researchers every year. Another major CCMC asset is an expert space weather prototyping team primarily serving NASA's interplanetary space weather needs. Capitalizing on its unrivaled capabilities and experiences, the team provides in-depth space weather training to students and professionals worldwide, and offers an amazing opportunity for undergraduates to engage in real-time space weather monitoring, analysis, forecasting and research. In-house development of state-of-the-art space weather tools and applications provides exciting opportunities to students majoring in computer science and computer engineering fields to intern with the software engineers at the CCMC while also learning about space weather from the NASA scientists.
Wegener, Veronika; Jorysz, Gabriele; Arnoldi, Andreas; Utzschneider, Sandra; Wegener, Bernd; Jansson, Volkmar; Heimkes, Bernhard
2017-03-01
Evaluation of hip joint space width during child growth is important to aid in the early diagnosis of hip pathology in children. We established reference values for hip joint space and femoral head size for each age. Hip joint space development during growth was retrospectively investigated medially and cranially in 1350 hip joints of children, using standard anteroposterior supine plain pelvic radiographs. The maximum capital femoral epiphysis diameter and femoral radii were furthermore investigated. Hip joint space values show a slow decline during growth. Joint space was statistically significantly (p < 0.006) larger in boys than in girls. Our hip joint space measurements on supine subjects seem slightly larger than those reported by Hughes on standing subjects. Evaluation of the femoral head diameter and the radii showed a size curve quite parallel to the known body growth charts. Radii medial and perpendicular to the physis are not statistically significantly different. We recommend comparing measurements of hip joint space at two locations to age-dependent charts using the same imaging technique. During growth, a divergence in femoral head size from the expected values or loss of the spherical shape should raise the question of hip disorder. Clin. Anat. 30:267-275, 2017. © 2016 Wiley Periodicals, Inc.
Testing the quantity–quality model of fertility: Estimation using unrestricted family size models
Mogstad, Magne; Wiswall, Matthew
2016-01-01
We examine the relationship between child quantity and quality. Motivated by the theoretical ambiguity regarding the sign of the marginal effects of additional siblings on children's outcomes, our empirical model allows for an unrestricted relationship between family size and child outcomes. We find that the conclusion in Black, Devereux, and Salvanes (2005) of no family size effect does not hold after relaxing their linear specification in family size. We find nonzero effects of family size ...
Building predictive models of soil particle-size distribution
Directory of Open Access Journals (Sweden)
Alessandro Samuel-Rosa
2013-04-01
Full Text Available Is it possible to build predictive models (PMs) of soil particle-size distribution (psd) in a region with complex geology and a young and unstable land surface? The main objective of this study was to answer this question. A set of 339 soil samples from a small slope catchment in Southern Brazil was used to build PMs of psd in the surface soil layer. Multiple linear regression models were constructed using terrain attributes (elevation, slope, catchment area, convergence index, and topographic wetness index). The PMs explained more than half of the data variance. This performance is similar to (or even better than) that of the conventional soil mapping approach. For some size fractions, the PM performance can reach 70%. The largest uncertainties were observed in geologically more complex areas. Therefore, significant improvements in the predictions can only be achieved if accurate geological data are made available. Meanwhile, PMs built on terrain attributes are efficient in predicting the particle-size distribution (psd) of soils in regions of complex geology.
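A multiple linear regression of a size fraction on terrain attributes, as used above, can be sketched with ordinary least squares. The data below are invented for illustration, and the attribute columns (elevation, slope, topographic wetness index) and the clay-fraction response are assumed stand-ins for the study's variables:

```python
import numpy as np

# toy data: rows are sample sites, columns are terrain attributes
# (elevation in m, slope in degrees, topographic wetness index)
X = np.array([[420.0, 2.0, 9.1],
              [455.0, 5.5, 7.4],
              [438.0, 3.1, 8.2],
              [470.0, 8.0, 6.0],
              [430.0, 2.5, 8.8],
              [461.0, 6.2, 6.9]])
y = np.array([28.0, 22.0, 25.0, 19.0, 27.0, 21.0])  # clay fraction (%)

A = np.column_stack([np.ones(len(X)), X])       # prepend an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)    # ordinary least squares fit
pred = A @ coef
r2 = 1.0 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

The coefficient of determination r2 corresponds to the "explained more than half of the data variance" criterion reported in the abstract; with real data one would also cross-validate rather than report the in-sample fit.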
Modelling the near-Earth space environment using LDEF data
Atkinson, Dale R.; Coombs, Cassandra R.; Crowell, Lawrence B.; Watts, Alan J.
1992-01-01
Near-Earth space is a dynamic environment that is currently not well understood. In an effort to better characterize the near-Earth space environment, this study compares the results of actual impact crater measurement data and the Space Environment (SPENV) Program developed in-house at POD, to theoretical models established by Kessler (NASA TM-100471, 1987) and Cour-Palais (NASA SP-8013, 1969). With the continuing escalation of debris there will exist a definite hazard to unmanned satellites as well as manned operations. Since the smaller non-trackable debris has the highest impact rate, it is clearly necessary to establish the true debris environment for all particle sizes. Proper comprehension of the near-Earth space environment and its origin will permit improvement in spacecraft design and mission planning, thereby reducing potential disasters and extreme costs. Results of this study directly relate to the survivability of future spacecraft and satellites that are to travel through and/or reside in low Earth orbit (LEO). More specifically, these data are being used to: (1) characterize the effects of the LEO micrometeoroid and debris environment on satellite designs and components; (2) update the current theoretical micrometeoroid and debris models for LEO; (3) help assess the survivability of spacecraft and satellites that must travel through or reside in LEO, and the probability of their collision with already resident debris; and (4) help define and evaluate future debris mitigation and disposal methods. Combined model predictions match relatively well with the LDEF data for impact craters larger than approximately 0.05 cm diameter; however, for smaller impact craters, the combined predictions diverge and do not reflect the sporadic clouds identified by the Interplanetary Dust Experiment (IDE) aboard LDEF. The divergences cannot currently be explained by the authors or model developers. The mean flux of small craters (approximately 0.05 cm diameter) is
A model of litter size distribution in cattle.
Bennett, G L; Echternkamp, S E; Gregory, K E
1998-07-01
Genetic increases in twinning of cattle could result in increased frequency of triplet or higher-order births. There are no estimates of the incidence of triplets in populations with genetic levels of twinning over 40% because these populations either have not existed or have not been documented. A model of the distribution of litter size in cattle is proposed. Empirical estimates of ovulation rate distribution in sheep were combined with biological hypotheses about the fate of embryos in cattle. Two phases of embryo loss were hypothesized. The first phase is considered to be preimplantation. Losses in this phase occur independently (i.e., the loss of one embryo does not affect the loss of the remaining embryos). The second phase occurs after implantation. The loss of one embryo in this stage results in the loss of all embryos. Fewer than 5% triplet births are predicted when 50% of births are twins and triplets. Above 60% multiple births, increased triplets accounted for most of the increase in litter size. Predictions were compared with data from 5,142 calvings by 14 groups of heifers and cows with average litter sizes ranging from 1.14 to 1.36 calves. The predicted number of triplets was not significantly different (chi2 = 16.85, df = 14) from the observed number. The model also predicted differences in conception rates. A cow ovulating two ova was predicted to have the highest conception rate in a single breeding cycle. As mean ovulation rate increased, predicted conception to one breeding cycle increased. Conception to two or three breeding cycles decreased as mean ovulation increased because late-pregnancy failures increased. An alternative model of the fate of ova in cattle based on embryo and uterine competency predicts very similar proportions of singles, twins, and triplets but different conception rates. The proposed model of litter size distribution in cattle accurately predicts the proportion of triplets found in cattle with genetically high twinning
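The two-phase loss hypothesis (independent preimplantation loss, then all-or-none post-implantation loss) translates directly into a litter-size distribution. This sketch uses a hypothetical ovulation-rate distribution and survival parameters rather than the empirical sheep-derived values of the paper:

```python
from math import comb

def litter_distribution(ovulation_probs, p1, p2):
    """Litter-size distribution under the two-phase embryo-loss model.

    ovulation_probs: {number_of_ova: probability}; p1: independent per-embryo
    preimplantation survival; p2: probability the implanted litter survives the
    post-implantation phase (loss of one embryo then means loss of all).
    Returns {litter_size: probability}, with key 0 meaning no calf born.
    """
    dist = {}
    for n_ova, p_n in ovulation_probs.items():
        for k in range(n_ova + 1):
            # k embryos survive phase 1 independently (binomial thinning)
            p_k = p_n * comb(n_ova, k) * p1**k * (1 - p1)**(n_ova - k)
            if k == 0:
                dist[0] = dist.get(0, 0.0) + p_k
            else:
                # phase 2 is all-or-none for the whole implanted litter
                dist[k] = dist.get(k, 0.0) + p_k * p2
                dist[0] = dist.get(0, 0.0) + p_k * (1 - p2)
    return dist
```

Raising the mean of `ovulation_probs` shifts probability toward twins and triplets among births but also raises the all-or-none late-pregnancy failures, reproducing the conception-rate trade-off described in the abstract.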
Reliable critical sized defect rodent model for cleft palate research.
Mostafa, Nesrine Z; Doschak, Michael R; Major, Paul W; Talwar, Reena
2014-12-01
Suitable animal models are necessary to test the efficacy of new bone grafting therapies in cleft palate surgery. Rodent models of cleft palate are available but have limitations. This study compared and modified mid-palate cleft (MPC) and alveolar cleft (AC) models to determine the most reliable and reproducible model for bone grafting studies. The published MPC model (9 × 5 × 3 mm³) lacked sufficient information for the tested rats. Our initial studies utilizing the AC model (7 × 4 × 3 mm³) in 8- and 16-week-old Sprague Dawley (SD) rats revealed injury to adjacent structures. After comparing anteroposterior and transverse maxillary dimensions in 16-week-old SD and Wistar rats, virtual planning was performed to modify the MPC and AC defect dimensions, taking the adjacent structures into consideration. Modified MPC (7 × 2.5 × 1 mm³) and AC (5 × 2.5 × 1 mm³) defects were employed in 16-week-old Wistar rats and healing was monitored by micro-computed tomography and histology. Maxillary dimensions in SD and Wistar rats were not significantly different. Preoperative virtual planning enhanced postoperative surgical outcomes. Bone healing occurred at the defect margin, leaving a central bone void confirming the critical-size nature of the modified MPC and AC defects. The presented modifications for MPC and AC models created clinically relevant and reproducible defects. Copyright © 2014 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Macro Level Simulation Model Of Space Shuttle Processing
2000-01-01
The contents include: 1) Space Shuttle Processing Simulation Model; 2) Knowledge Acquisition; 3) Simulation Input Analysis; 4) Model Applications in Current Shuttle Environment; and 5) Model Applications for Future Reusable Launch Vehicles (RLV's). This paper is presented in viewgraph form.
Enhanced surrogate models for statistical design exploiting space mapping technology
DEFF Research Database (Denmark)
Koziel, Slawek; Bandler, John W.; Mohamed, Achmed S.
2005-01-01
We present advances in microwave and RF device modeling exploiting Space Mapping (SM) technology. We propose new SM modeling formulations utilizing input mappings, output mappings, frequency scaling and quadratic approximations. Our aim is to enhance circuit models for statistical analysis...
Sesto, Mary E; Irwin, Curtis B; Chen, Karen B; Chourasia, Amrish O; Wiegmann, Douglas A
2012-06-01
The aim of this study was to investigate the effect of button size and spacing on touch characteristics (forces, impulses, and dwell times) during a digit entry touch screen task. A secondary objective was to investigate the effect of disability on touch characteristics. Touch screens are common in public settings and workplaces. Although research has examined the effect of button size and spacing on performance, the effect on touch characteristics is unknown. A total of 52 participants (n = 23, fine motor control disability; n = 14, gross motor control disability; n = 15, no disability) completed a digit entry task. Button sizes varied from 10 mm to 30 mm, and button spacing was 1 mm or 3 mm. Touch characteristics were significantly affected by button size. The exerted peak forces increased 17% between the largest and the smallest buttons, whereas impulses decreased 28%. Compared with the fine motor and nondisabled groups, the gross motor group had greater impulses (98% and 167%, respectively) and dwell times (60% and 129%, respectively). Peak forces were similar for all groups. Button size but not spacing influenced touch characteristics during a digit entry task. The gross motor group had significantly greater dwell times and impulses than did the fine motor and nondisabled groups. Research on touch characteristics, in conjunction with that on user performance, can be used to guide human computer interface design strategies to improve accessibility of touch screen interfaces. Further research is needed to evaluate the effect of the exerted peak forces and impulses on user performance and fatigue.
Various verifying tests using full size partial models of PCCV
International Nuclear Information System (INIS)
Nagata, Kaoru; Fukihara, Masaaki; Takemoto, Yasushi.
1987-01-01
The prestressed concrete containment vessel (PCCV) for the Tsuruga No. 2 plant of the Japan Atomic Power Co. was the first adopted in Japan, and the need for experimental verification was pointed out for a number of items in the design and construction. In this report, the various tests carried out with full-size models are described. The tendon system adopted for this PCCV is the BBRV type, in which PC wires are bundled in parallel to make cables. It involves many features without precedent in Japan: the stressing capacity is as large as the 1000 t class, the longest cable is 160 m, and it is an unbonded system in which rust inhibitor is injected. Testing was required to confirm the propriety of the small coefficient of friction assumed at the time of stretching the tendons. For the tests, the materials, equipment and their sizes were all prepared as for the actual works, so the test works served as a rehearsal of the actual prestressing works. In addition, by utilizing these full-size test beds, the following were carried out: the workability test on concrete at the time of construction, the confirmation test on tendon strength and on the safety of the concrete at the anchorage during the friction test, the subsequent greasing test, the simulation test of in-service inspection, and the thermal loading test on liners. The results of these tests are briefly reported. (Kako, I.)
A general-model-space diagrammatic perturbation theory
International Nuclear Information System (INIS)
Hose, G.; Kaldor, U.
1980-01-01
A diagrammatic many-body perturbation theory applicable to arbitrary model spaces is presented. The necessity of having a complete model space (all possible occupancies of the partially-filled shells) is avoided. This requirement may be troublesome for systems with several well-spaced open shells, such as most atomic and molecular excited states, as a complete model space spans a very broad energy range and leaves out states within that range, leading to poor or no convergence of the perturbation series. The method presented here would be particularly useful for such states. The solution of a model problem (He₂ excited Σ⁺_g states) is demonstrated. (Auth.)
Energy Technology Data Exchange (ETDEWEB)
Correa, E.B.S. [Universidade Federal do Sul e Sudeste do Para, Instituto de Ciencias Exatas, Maraba (Brazil); Centro Brasileiro de Pesquisas Fisicas-CBPF/MCTI, Rio de Janeiro (Brazil); Linhares, C.A. [Universidade do Estado do Rio de Janeiro, Instituto de Fisica, Rio de Janeiro (Brazil); Malbouisson, A.P.C. [Centro Brasileiro de Pesquisas Fisicas-CBPF/MCTI, Rio de Janeiro (Brazil); Malbouisson, J.M.C. [Universidade Federal da Bahia, Instituto de Fisica, Salvador (Brazil); Santana, A.E. [Universidade de Brasilia, Instituto de Fisica, Brasilia, DF (Brazil)
2017-04-15
We study the effects of finite size, chemical potential and a magnetic background on a massive version of a four-fermion interacting model. This is performed in four dimensions as an application of recent developments for dealing with field theories defined on toroidal spaces. We study the effects of the magnetic field and chemical potential on the size-dependent phase structure of the model, in particular, how the applied magnetic field affects the size-dependent critical temperature. A connection with some aspects of the hadronic phase transition is established. (orig.)
International Nuclear Information System (INIS)
Lee, TK
2015-01-01
Purpose: In proton beam configuration for spot scanning proton therapy (SSPT), the spacing between spots and scan lines can be defined as a ratio of a given spot size. If the spacing increases, the number of spots decreases, which can potentially decrease scan time and hence whole treatment time, and vice versa. However, if the spacing is too large, the uniformity of the scanned field decreases. The field uniformity can also be affected by motion during SSPT beam delivery. In the present study, the interplay between spot/line spacing and motion is investigated. Methods: We used four Gaussian-shape spot sizes with 0.5 cm, 1.0 cm, 1.5 cm, and 2.0 cm FWHM; three spot/line spacings that create a uniform field profile, namely 1/3*FWHM, σ/3*FWHM and 2/3*FWHM; and three random motion amplitudes, ±0.3 mm, ±0.5 mm, and ±1.0 mm. We planned a 2 Gy uniform single layer for 10×10 cm² and 20×20 cm² fields. Then the mean dose within the central 80% of the given field size, the contributing MU per spot (assuming a 1 cGy/MU calibration for all spot sizes), the number of spots, and the uniformity were calculated. Results: The plans with spot/line spacing equal to or smaller than 2/3*FWHM without motion create ∼100% uniformity. However, the uniformity was found to decrease with increased spacing; this is more pronounced with smaller spot sizes but is not affected by scanned field size. Conclusion: The motion during proton beam delivery can alter the dose uniformity, and the amount of alteration changes with spot size (which changes with energy) and with spot/line spacing. Currently, robust evaluation in a TPS (e.g., the Eclipse system) performs range uncertainty evaluation using isocenter shifts and CT calibration error. Based on the presented study, it is recommended to add interplay-effect evaluation to the robust evaluation process. A future study of the additional interplay between energy layers and motion is expected to present a volumetric effect
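The spacing-versus-uniformity trade-off described above can be illustrated with a simple one-dimensional sketch: a scanned field modeled as a sum of equal-weight Gaussian spots on a regular grid. All parameters here are illustrative assumptions, not the abstract's treatment-planning calculation:

```python
import numpy as np

def field_ripple(fwhm_cm, spacing_cm, half_width_cm=10.0):
    """Relative ripple (max - min) / mean of a 1-D field built from
    equal-weight Gaussian spots placed on a regular grid."""
    sigma = fwhm_cm / 2.3548  # FWHM = 2*sqrt(2*ln 2)*sigma
    centers = np.arange(-half_width_cm, half_width_cm + spacing_cm, spacing_cm)
    x = np.linspace(-2.0, 2.0, 2001)  # central region, away from field edges
    dose = np.zeros_like(x)
    for c in centers:
        dose += np.exp(-0.5 * ((x - c) / sigma) ** 2)
    return (dose.max() - dose.min()) / dose.mean()

fwhm = 1.0  # cm
tight = field_ripple(fwhm, fwhm / 3.0)   # spacing = 1/3 * FWHM
loose = field_ripple(fwhm, 1.5 * fwhm)   # spacing well above 2/3 * FWHM
# tight spacing yields an essentially flat field; loose spacing does not
```

Consistent with the abstract, spacings at or below 2/3·FWHM leave negligible ripple in this sketch, while larger spacings degrade uniformity rapidly.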
Axiomatics of uniform space-time models
International Nuclear Information System (INIS)
Levichev, A.V.
1983-01-01
The mathematical statement of the space-time axiomatics of the special theory of relativity is given; it postulates that the space-time M is a connected, simply connected, Hausdorff, locally compact four-dimensional topological group with a given order. The following theorem is proved: if the invariant order in the four-dimensional group M is given by a semigroup P whose contingent cone K contains interior points, then M is commutative. An analogous theorem holds for groups of dimension two and three
Petroni, Giorgio; Bigliardi, Barbara; Galati, Francesco; Petroni, Alberto
2018-01-01
This study investigates the benefits and limits deriving from membership in ESA for six medium-sized space agencies, in terms of the strengthening and development (or not) of space technologies, as well as their contribution to the growth of productive activities and to the increase of services for citizens. This research contributes to the more general issue of the usefulness of space activities, not only for scientific or military-political purposes but also for economic and social development. Results show that, on the one hand, membership in ESA has allowed smaller countries to access space programs, to develop advanced technologies and to support the growth of their firms in some significant markets; on the other hand, membership has also limited access to space to a few companies, without encouraging the broad dissemination of technological knowledge.
Cosmic structure sizes in generic dark energy models
Energy Technology Data Exchange (ETDEWEB)
Bhattacharya, Sourav [Indian Institute of Technology Ropar, Department of Physics, Rupnagar, Punjab (India); Tomaras, Theodore N. [ITCP and Department of Physics, University of Crete, Heraklion (Greece)
2017-08-15
The maximum allowable size of a spherical cosmic structure as a function of its mass is determined by the maximum turn-around radius R_TA,max, the distance from its center where the attraction on a radial test particle due to the spherical mass is balanced by the repulsion due to the ambient dark energy. In this work, we extend the existing results in several directions. (a) We first show that, for w ≠ -1, the expression for R_TA,max found earlier using cosmological perturbation theory can be derived using a static geometry as well. (b) In a generic dark energy model with arbitrary time-dependent state parameter w(t), taking into account the effect of inhomogeneities upon the dark energy as well, it is shown that the data constrain w(t = today) > -2.3. (c) We address the quintessence and generalized Chaplygin gas models, both of which are shown to predict structure sizes consistent with observations. (orig.)
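For the cosmological-constant case (w = -1), the maximum turn-around radius takes the known closed form R_TA,max = (3GM/Λc²)^(1/3). A quick numerical sketch (the cluster mass is illustrative; Λ ≈ 1.1×10⁻⁵² m⁻² is the approximate observed value):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
LAMBDA = 1.1e-52     # cosmological constant, m^-2 (approximate observed value)
M_SUN = 1.989e30     # solar mass, kg
MPC = 3.086e22       # megaparsec, m

def r_ta_max(mass_kg):
    """Maximum turn-around radius for w = -1 (pure cosmological constant)."""
    return (3.0 * G * mass_kg / (LAMBDA * C**2)) ** (1.0 / 3.0)

# A 1e15 solar-mass cluster: R_TA,max comes out at roughly 10 Mpc,
# comfortably above observed cluster sizes.
r_mpc = r_ta_max(1e15 * M_SUN) / MPC
```

The M^(1/3) scaling means an eight-fold heavier structure is allowed only twice the radius, which is why the bound becomes interesting at cluster scales.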
Berry, R.; Shandas, V.; Makido, Y.
2017-12-01
Many cities are unintentionally designed to be heat sinks, which absorb the sun's short-wave radiation and re-emit it as long-wave radiation. Long-standing recognition of this `urban heat island' (UHI) phenomenon has led researchers and city planners to develop strategies for reducing ambient temperatures through urban design. In particular, greening has been shown to reduce temperatures in UHIs, and strategies such as green streets, green facades, and green roofs have been implemented. While the scientific community has studied how myriad greening strategies can reduce temperature, relatively limited work has focused on the distribution, density, and quantity of tree-planting campaigns. This paper examines how the spacing and size of trees reduce temperatures differently. A major focus of the paper is to understand how to lower temperatures through tree planting, and to provide recommendations to cities that are attempting to solve their own urban heat island issues. Because different cities have different room for planting greenery, we examined which strategies are more efficient given an area constraint: areas with less available room might not be able to support a high density of trees. We compared experimental groups varying in density and size of trees against a control to isolate the effect of the trees. Through calibration with local weather stations, we used a micrometeorology program (ENVI-met) to model and simulate the different experimental configurations and their effect on temperature. The results suggest that some urban designs can reduce ambient temperatures by over 7 °C, and that the inclusion of large-form trees makes the greatest contribution, reducing temperatures by over 15 °C. The results suggest that specific strategies combining the placement of particular tree configurations with alternative distributions of urban development patterns can help to solve the current challenges of UHIs, and thereby support management
Charlton, Benjamin D; Ellis, William A H; McKinnon, Allan J; Cowin, Gary J; Brumm, Jacqui; Nilsson, Karen; Fitch, W Tecumseh
2011-10-15
Determining the information content of vocal signals and understanding morphological modifications of vocal anatomy are key steps towards revealing the selection pressures acting on a given species' vocal communication system. Here, we used a combination of acoustic and anatomical data to investigate whether male koala bellows provide reliable information on the caller's body size, and to confirm whether male koalas have a permanently descended larynx. Our results indicate that the spectral prominences of male koala bellows are formants (vocal tract resonances), and show that larger males have lower formant spacing. In contrast, no relationship between body size and the fundamental frequency was found. Anatomical investigations revealed that male koalas have a permanently descended larynx: the first example of this in a marsupial. Furthermore, we found a deeply anchored sternothyroid muscle that could allow male koalas to retract their larynx into the thorax. While this would explain the low formant spacing of the exhalation and initial inhalation phases of male bellows, further research will be required to reveal the anatomical basis for the formant spacing of the later inhalation phases, which is predictive of vocal tract lengths of around 50 cm (nearly the length of an adult koala's body). Taken together, these findings show that the formant spacing of male koala bellows has the potential to provide receivers with reliable information on the caller's body size, and reveal that vocal adaptations allowing callers to exaggerate (or maximise) the acoustic impression of their size have evolved independently in marsupials and placental mammals.
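The vocal-tract-length inference mentioned above follows from the standard uniform-tube approximation, in which the formant spacing of a tube closed at one end is ΔF = c/2L. A minimal sketch (c ≈ 350 m/s is an assumed value for warm, humid air in the vocal tract):

```python
SPEED_OF_SOUND = 350.0  # m/s, approximate value inside the vocal tract

def vocal_tract_length(formant_spacing_hz):
    """Apparent vocal-tract length from formant spacing, assuming a
    uniform tube closed at one end (formant spacing dF = c / 2L)."""
    return SPEED_OF_SOUND / (2.0 * formant_spacing_hz)

# A formant spacing of about 350 Hz implies a ~0.5 m tract --
# the anomalously long tract inferred for bellowing male koalas.
length_m = vocal_tract_length(350.0)
```

Lower formant spacing thus maps directly onto a longer apparent tract, which is why larynx retraction lets callers exaggerate the acoustic impression of body size.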
Model Adaptation in Parametric Space for POD-Galerkin Models
Gao, Haotian; Wei, Mingjun
2017-11-01
The development of low-order POD-Galerkin models is largely motivated by the expectation that a model developed with a set of parameters at their native values can predict the dynamic behavior of the same system under different parametric values; in other words, a successful model adaptation in parametric space. However, most of the time, even a small deviation of parameters from their original values may lead to large deviations or unstable results. It has been shown that adding more information (e.g. a steady state, the mean value of a different unsteady state, or an entirely different set of POD modes) may improve the prediction of flow at other parametric states. For the simple case of the flow passing a fixed cylinder, an orthogonal mean mode at a different Reynolds number may stabilize the POD-Galerkin model when the Reynolds number is changed. For the more complicated case of the flow passing an oscillating cylinder, a global POD-Galerkin model is first applied to handle the moving boundaries; then more information (e.g. more POD modes) is required to predict the flow under different oscillation frequencies. Supported by ARL.
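The POD modes underlying such Galerkin models are commonly extracted from a snapshot matrix by singular value decomposition. A self-contained sketch on synthetic traveling-wave "flow" data (illustrative only, not the flow solver or configuration of the abstract):

```python
import numpy as np

# Synthetic snapshot data: a traveling wave with a weak second harmonic.
x = np.linspace(0.0, 2.0 * np.pi, 128)
t = np.linspace(0.0, 2.0 * np.pi, 200)
X, T = np.meshgrid(x, t, indexing="ij")
snapshots = np.sin(X - T) + 0.3 * np.sin(2.0 * (X - T))  # shape (n_x, n_t)

# POD: subtract the temporal mean, then SVD of the fluctuation matrix.
mean_flow = snapshots.mean(axis=1, keepdims=True)
fluct = snapshots - mean_flow
U, s, Vt = np.linalg.svd(fluct, full_matrices=False)

energy = s**2 / np.sum(s**2)        # modal energy fractions
r = 4                               # each harmonic contributes a mode pair
reduced = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
rel_err = np.linalg.norm(fluct - reduced) / np.linalg.norm(fluct)
```

A Galerkin model then evolves only the r modal coefficients; the abstract's point is that modes computed at one parameter value may lose accuracy, or even stability, at another.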
Size effects in ductile cellular solids. Part I : modeling
Onck, P.R.; Andrews, E.W.; Gibson, L.J.
2001-01-01
In the mechanical testing of metallic foams, an important issue is the effect of the specimen size, relative to the cell size, on the measured properties. Here we analyze size effects for the modulus and strength of regular, hexagonal honeycombs under uniaxial and shear loadings. Size effects for
A composite model of the space-time and 'colors'
International Nuclear Information System (INIS)
Terazawa, Hidezumi.
1987-03-01
A pregeometric and pregauge model of the space-time and "colors" in which the space-time metric and "color" gauge fields are both composite is presented. By the non-triviality of the model, the number of space-time dimensions is restricted to be not larger than the number of "colors". The long-conjectured space-color correspondence is realized in the model action of the Nambu-Goto type, which is invariant under both general-coordinate and local-gauge transformations. (author)
Phase-space dynamics of Bianchi IX cosmological models
International Nuclear Information System (INIS)
Soares, I.D.
1985-01-01
The complex phase-space dynamical behaviour of a class of Bianchi IX cosmological models is discussed, such as chaotic gravitational collapse due to Poincaré's homoclinic phenomena, and the n-furcation of periodic orbits and tori in the phase space of the models. Poincaré maps which show this behaviour are constructed numerically, and applications are discussed. (Author) [pt
Queuing theory models used for port equipment sizing
Dragu, V.; Dinu, O.; Ruscă, A.; Burciu, Ş.; Roman, E. A.
2017-08-01
The significant growth of volumes and distances in road transportation has led to the search for solutions to increase water transportation's market share, together with the handling and transfer technologies within its terminals. It is widely known that the largest share of time is consumed within transport terminals (loading/unloading/transfer), hence the necessity of constantly developing handling techniques and technologies matched to the size of the goods flows, so that the total waiting time of ships within ports is reduced. Port development should be achieved by harmonizing the contradictory interests of the port administration and its users: port administrators aim to increase profit, while users want savings through an increased consumers' surplus. The difficulty consists in the fact that the transport demand-supply equilibrium must be realised at costs and goods quantities transiting the port that satisfy the interests of both parties involved. This paper presents a port equipment sizing model using queueing theory, such that the sum of the costs of ships waiting for operations and of equipment usage is minimized. Ship operation within the port is modelled as a queueing (mass service) system whose parameters are then used to determine the main costs for ships and port equipment.
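The cost trade-off described above can be sketched with a textbook M/M/c queueing model: berths are servers, ships are customers, and the berth count is chosen to minimize the sum of ship-waiting cost and berth-operating cost. All rates and costs below are illustrative assumptions, not values from the paper:

```python
from math import factorial

def erlang_c(a, c):
    """Probability an arriving ship must wait (M/M/c), with a = lambda/mu < c."""
    num = a**c / factorial(c) * (c / (c - a))
    den = sum(a**k / factorial(k) for k in range(c)) + num
    return num / den

def total_cost(lam, mu, c, berth_cost, wait_cost):
    """Berth operating cost plus expected ship-waiting cost per unit time."""
    a = lam / mu
    if c <= a:                          # unstable: the queue grows without bound
        return float("inf")
    lq = erlang_c(a, c) * a / (c - a)   # mean number of ships waiting
    return c * berth_cost + lq * wait_cost

def best_berth_count(lam, mu, berth_cost, wait_cost, c_max=50):
    return min(range(1, c_max + 1),
               key=lambda c: total_cost(lam, mu, c, berth_cost, wait_cost))

# Example: 4 ships/day arriving, 1 ship/day handled per berth,
# 1000 cost units/day per berth, 5000 cost units/day per waiting ship.
c_opt = best_berth_count(4.0, 1.0, 1000.0, 5000.0)
```

Adding berths raises equipment cost linearly but shrinks the waiting cost roughly geometrically, so the total is convex in c and the minimum is found by direct search.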
Particle size - An important factor in environmental consequence modeling
International Nuclear Information System (INIS)
Yuan, Y.C.; MacFarlane, D.
1991-01-01
Most available environmental transport and dosimetry codes for radiological consequence analysis are designed primarily for estimating dose and health consequences to specific off-site individuals as well as the population as a whole from nuclear facilities operating under either normal or accident conditions. Models developed for these types of analyses are generally based on assumptions that the receptors are at great distances (several kilometers), and the releases are prolonged and filtered. This allows the use of simplified approaches such as averaged meteorological conditions and the use of a single (small) particle size for atmospheric transport and dosimetry analysis. Source depletion from particle settling, settle-out, and deposition is often ignored. This paper estimates the effects of large particles on the resulting dose consequences from an atmospheric release. The computer program AI-RISK has been developed to perform multiparticle-sized atmospheric transport, dose, and pathway analyses for estimating potential human health consequences from the accidental release of radioactive materials. The program was originally developed to facilitate comprehensive analyses of health consequences, ground contamination, and cleanup associated with possible energetic chemical reactions in high-level radioactive waste (HLW) tanks at a US Department of Energy site
Simulation Models to Size and Retrofit District Heating Systems
Directory of Open Access Journals (Sweden)
Kevin Sartor
2017-12-01
District heating networks are considered convenient systems to supply heat to consumers while reducing CO₂ emissions and increasing the use of renewable energies. However, to make them as profitable as possible, they have to be developed, operated and sized carefully. To meet these objectives, simulation tools are required to analyze several configuration schemes and control methods. Indeed, the most common problems are heat losses, electric pump consumption and the peak heat demand, all while ensuring the comfort of the users. In this contribution, a dynamic simulation model of all the components of the network is described. It is dedicated to assessing energetic, environmental and economic indicators. Finally, the methodology is applied to an existing test case, namely the district heating network of the University of Liège, to study the pump control and minimize the network heat losses.
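The network heat-loss term that such models track can be approximated per pipe segment from a linear heat-loss coefficient. A minimal sketch (the U-value, lengths and temperatures are illustrative assumptions, not data from the Liège network):

```python
# Steady-state heat loss of buried pre-insulated pipe, per segment:
# Q = U * L * (T_fluid - T_ground), with U a linear loss coefficient in W/(m*K).

def segment_loss_w(u_w_per_mk, length_m, t_fluid_c, t_ground_c):
    return u_w_per_mk * length_m * (t_fluid_c - t_ground_c)

def network_loss_kw(segments, t_ground_c=10.0):
    """segments: iterable of (U in W/(m*K), length in m, fluid temp in degC)."""
    return sum(segment_loss_w(u, l, t, t_ground_c) for u, l, t in segments) / 1000.0

# Supply line at 90 degC and return line at 60 degC, 1 km each,
# with an assumed U of 0.3 W/(m*K) for pre-insulated pipe:
loss = network_loss_kw([(0.3, 1000.0, 90.0), (0.3, 1000.0, 60.0)])
```

Lowering the supply temperature shrinks the (T_fluid - T_ground) term directly, which is one of the control levers the abstract's simulation model is used to explore.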
Glottal aerodynamics in compliant, life-sized vocal fold models
McPhail, Michael; Dowell, Grant; Krane, Michael
2013-11-01
This talk presents high-speed PIV measurements in compliant, life-sized models of the vocal folds. A clearer understanding of the fluid-structure interaction of voiced speech, how it produces sound, and how it varies with pathology is required to improve clinical diagnosis and treatment of vocal disorders. Physical models of the vocal folds can answer questions regarding the fundamental physics of speech, as well as the ability of clinical measures to detect the presence and extent of disorder. Flow fields were recorded in the supraglottal region of the models to estimate terms in the equations of fluid motion, and their relative importance. Experiments were conducted over a range of driving pressures, with flow rates (given by a ball flowmeter) and subglottal pressures (given by a micro-manometer) reported for each case. Imaging of vocal fold motion, vector fields showing glottal jet behavior, and terms estimated by control volume analysis will be presented. The use of these results for comparison with clinical measures, and for the estimation of aeroacoustic source strengths, will be discussed. The authors acknowledge support from NIH R01 DC005642.
SIMPLIFIED MATHEMATICAL MODEL OF SMALL SIZED UNMANNED AIRCRAFT VEHICLE LAYOUT
Directory of Open Access Journals (Sweden)
2016-01-01
Strong reduction of the design period for new aircraft, using new technology based on artificial intelligence, is a key problem mentioned in the forecasts of leading aerospace industry research centers. This article covers an approach to the development of quick aerodynamic design methods based on artificial neural networks. The problem is solved for the classical scheme of a small-sized unmanned aircraft vehicle (UAV). The principal parts of the method are the mathematical model of the layout, a layout generator for this type of aircraft built on artificial neural networks, an automatic selection module for screening the variety of layouts generated in automatic mode, a robust direct computational fluid dynamics method, and aerodynamic-characteristics approximators based on artificial neural networks. Methods based on artificial neural networks occupy an intermediate position between computational fluid dynamics methods or experiments and simplified engineering approaches. The use of ANNs for estimating aerodynamic characteristics puts limitations on the input data: for this task the layout must be presented as a vector with a dimension not exceeding several hundred, whose components include all the main parameters conventionally used for describing layouts and completely capture the most important aerodynamic and structural properties. The first stage of the work is presented in the paper. A simplified mathematical model of a small-sized UAV was developed. To estimate the range of geometrical parameters of the layouts, a review of existing vehicles was done. The result of the work is the algorithm and computer software for generating layouts based on ANN technology. 10,000 samples were generated, and a dataset containing the geometrical and aerodynamic characteristics of the layouts was created.
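As a toy illustration of the aerodynamic-characteristics approximator idea (not the authors' network; the drag-polar target function and all hyper-parameters are invented for this sketch), a one-hidden-layer network can be trained to map a layout/flight parameter to a drag coefficient:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented training data: a simple drag polar CD = CD0 + k * CL^2.
cl = rng.uniform(-1.0, 1.0, size=(256, 1))
cd = 0.02 + 0.05 * cl**2

# One-hidden-layer tanh network trained by full-batch gradient descent on MSE.
W1 = 0.5 * rng.standard_normal((1, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal((16, 1)); b2 = np.zeros(1)
lr, losses = 0.1, []
for _ in range(3000):
    h = np.tanh(cl @ W1 + b1)          # forward pass
    pred = h @ W2 + b2
    err = pred - cd
    losses.append(float(np.mean(err**2)))
    g_pred = 2.0 * err / len(cl)       # backward pass: gradient of MSE
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(axis=0)
    g_h = g_pred @ W2.T * (1.0 - h**2)
    gW1 = cl.T @ g_h; gb1 = g_h.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
```

In the paper's setting the input would be a layout vector of up to several hundred components rather than a single scalar, but the approximator structure is the same.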
Brisset, Julie; Heißelmann, Daniel; Kothe, Stefan; Weidling, René; Blum, Jürgen
2013-09-01
The Suborbital Particle Aggregation and Collision Experiment (SPACE) is a novel approach to study the collision properties of submillimeter-sized, highly porous dust aggregates. The experiment was designed, built, and carried out to increase our knowledge about the processes dominating the first phase of planet formation. During this phase, the growth of planetary precursors occurs by agglomeration of micrometer-sized dust grains into aggregates of at least millimeters to centimeters in size. However, the formation of larger bodies from the so-formed building blocks is not yet fully understood. Recent numerical models on dust growth lack a particular support by experimental studies in the size range of submillimeters, because these particles are predicted to collide at very gentle relative velocities of below 1 cm/s that can only be achieved in a reduced-gravity environment. The SPACE experiment investigates the collision behavior of an ensemble of silicate-dust aggregates inside several evacuated glass containers which are being agitated by a shaker to induce the desired collisions at chosen velocities. The dust aggregates are being observed by a high-speed camera, allowing for the determination of the collision properties of the protoplanetary dust analog material. The data obtained from the suborbital flight with the REXUS (Rocket Experiments for University Students) 12 rocket will be directly implemented into a state-of-the-art dust growth and collision model.
The space-time model according to dimensional continuous space-time theory
International Nuclear Information System (INIS)
Martini, Luiz Cesar
2014-01-01
This article results from the Dimensional Continuous Space-Time Theory, whose introduction was presented in [1]. A theoretical model of the continuous space-time is presented. The wave equation of time in an absolutely stationary empty-space referential is described in detail. The complex time, that is, the time fixed on the referential of infinite phase-time speed, is deduced from the New View of Relativity Theory, which is being submitted simultaneously with this article at this congress. Finally, considering the inseparable space-time, the wave-particle duality equation is presented.
Space Particle Hazard Measurement and Modeling
2007-11-30
…the spacecraft and perturbations of the environment generated by the spacecraft. Koons et al. (1999) compiled and studied all spacecraft anomalies… unrealistic for D12 than for Dα0p). However, unlike the stability problems associated with the original cross diffusion terms, they are quite manageable… E), to mono-energetic beams of charged particles of known energies, which enables one, in principle, to unfold the space environment spectrum, j(E
Applying MDA to SDR for Space to Model Real-time Issues
Blaser, Tammy M.
2007-01-01
NASA space communications systems face the challenge of designing software-defined radios (SDRs) with highly-constrained Size, Weight and Power (SWaP) resources. A study is being conducted to assess the effectiveness of applying the MDA Platform-Independent Model (PIM) and one or more Platform-Specific Models (PSMs) specifically to address NASA space-domain real-time issues. This paper summarizes our experiences with applying MDA to SDR for Space to model real-time issues. The real-time issues to be examined, measured, and analyzed are: meeting waveform timing requirements and efficiently applying Real-Time Operating System (RTOS) scheduling algorithms, applying safety control measures, and SWaP verification. Real-time waveform algorithms benchmarked under the worst-case environmental conditions and the heaviest workload will drive the SDR for Space real-time PSM design.
Track structure model of cell damage in space flight
Katz, Robert; Cucinotta, Francis A.; Wilson, John W.; Shinn, Judy L.; Ngo, Duc M.
1992-01-01
The phenomenological track-structure model of cell damage is discussed. A description of the application of the track-structure model with the NASA Langley transport code for laboratory and space radiation is given. Comparisons to experimental results for cell survival during exposure to monoenergetic, heavy-ion beams are made. The model is also applied to predict cell damage rates and relative biological effectiveness for deep-space exposures.
Mathematical Model of the Public Understanding of Space Science
Prisniakov, V.; Prisniakova, L.
Success in the deployment of space programs now depends in many respects on citizens' comprehension of the necessity of those programs, on the "space" erudition of the country. The purposefulness and efficiency of "space" teaching and educational activity depend on knowledge of the relationships between the separate variables of such a process. Empirical methods of gauging the "space" awareness of taxpayers should be supplemented by theoretical models that demonstrate ways of controlling these processes. On the basis of their 50 years of educational activity among students of the space-rocket profession, the authors obtain an equation for the "space" state of the society, determining the degree of its knowledge about space, about achievements in its development, about indispensable lines of investigation, and about rates of informatization of the population. It is supposed that the change of space information consists of two parts: (1) the inflow of information about practical achievements and about the development of special knowledge requiring independent financing, and (2) the intensity of dissemination of "free" general-educational information reaching the population through the mass media, books, the family and educational institutions, as part of the obligatory knowledge of any person. In the proposed model, the level of space awareness of the population depends on the intensity of dissemination of space information in the society, and also on the volume of financing of space-rocket technology, the share of the population employed in space-rocket programs, a factor describing the education of the population in adherence to space problems, the welfare and mentality of the people, and the rates of unemployment and material inequality. The equation of the space state of the society obtained in the report on these principles corresponds to a cusp catastrophe; the analysis has shown ways of controlling the public understanding of space
Preliminary Multi-Variable Parametric Cost Model for Space Telescopes
Stahl, H. Philip; Hendrichs, Todd
2010-01-01
This slide presentation reviews the creation of a preliminary multi-variable cost model for the contract costs of making a space telescope. There is discussion of the methodology for collecting the data, the definition of the statistical analysis methodology, single-variable model results, the testing of historical models, and an introduction to the multi-variable models.
Superfield Lax formalism of supersymmetric sigma model on symmetric spaces
International Nuclear Information System (INIS)
Saleem, U.; Hassan, M.
2006-01-01
We present a superfield Lax formalism of the superspace sigma model based on the target space G/H and show that a one-parameter family of flat superfield connections exists if the target space G/H is a symmetric space. The formalism has been related to the existence of an infinite family of local and non-local superfield conserved quantities. A few examples have been given to illustrate the results. (orig.)
Gravity mediated Dark Matter models in the de Sitter space
Vancea, Ion V.
2018-01-01
In this paper, we generalize the simplified Dark Matter models with graviton mediator to the curved space-time, in particular to the de Sitter space. We obtain the generating functional of the Green's functions in the Euclidean de Sitter space for the covariant free gravitons. We determine the generating functional of the interacting theory between Dark Matter particles and the covariant gravitons. Also, we calculate explicitly the 2-point and 3-point interacting Green's functions for the sym...
Space Science Cloud: a Virtual Space Science Research Platform Based on Cloud Model
Hu, Xiaoyan; Tong, Jizhou; Zou, Ziming
Through independent and co-operational science missions, the Strategic Pioneer Program (SPP) on Space Science, the new space science initiative in China approved by CAS and implemented by the National Space Science Center (NSSC), is dedicated to seeking new discoveries and new breakthroughs in space science, thus deepening our understanding of the universe and planet Earth. In the framework of this program, in order to support the operations of space science missions and satisfy the demand of related research activities for e-Science, NSSC is developing a virtual space science research platform based on a cloud model, namely the Space Science Cloud (SSC). To support mission demonstration, SSC integrates an interactive satellite orbit design tool, a satellite structure and payload layout design tool, a payload observation coverage analysis tool, etc., to help scientists analyze and verify space science mission designs. Another important function of SSC is supporting mission operations, which runs through the space satellite data pipelines. Mission operators can acquire and process observation data, then distribute the data products to other systems or issue the data and archives with the services of SSC. In addition, SSC provides useful data, tools and models for space researchers. Several databases in the field of space science are integrated, and an efficient retrieval system is being developed. Common tools for data visualization, deep processing (e.g., smoothing and filtering tools), analysis (e.g., FFT analysis tool and minimum variance analysis tool) and mining (e.g., proton event correlation analysis tool) are also integrated to help researchers better utilize the data. The space weather models on SSC include a magnetic storm forecast model, a multi-station middle and upper atmospheric climate model, a solar energetic particle propagation model and so on. All the services mentioned above are based on the e-Science infrastructures of CAS, e.g. cloud storage and
Validation of nuclear models used in space radiation shielding applications
International Nuclear Information System (INIS)
Norman, Ryan B.; Blattnig, Steve R.
2013-01-01
A program of verification and validation has been undertaken to assess the applicability of models to space radiation shielding applications and to track progress as these models are developed over time. In this work, simple validation metrics applicable to testing both model accuracy and consistency with experimental data are developed. The developed metrics treat experimental measurement uncertainty as an interval and are therefore applicable to cases in which epistemic uncertainty dominates the experimental data. To demonstrate the applicability of the metrics, nuclear physics models used by NASA for space radiation shielding applications are compared to an experimental database consisting of over 3600 experimental cross sections. A cumulative uncertainty metric is applied to the question of overall model accuracy, while a metric based on the median uncertainty is used to analyze the models from the perspective of model development by examining subsets of the model parameter space.
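The interval-based treatment of experimental uncertainty described above can be made concrete with a small sketch. The function names and the reduction to a plain interval distance are illustrative assumptions, not the paper's actual metric definitions:

```python
def interval_miss(pred, lo, hi):
    """Distance from a model prediction to an experimental uncertainty
    interval [lo, hi]; zero when the prediction falls inside the interval."""
    if pred < lo:
        return lo - pred
    if pred > hi:
        return pred - hi
    return 0.0

def cumulative_metric(preds, intervals):
    """Aggregate miss distance over a database of measurements (overall accuracy)."""
    return sum(interval_miss(p, lo, hi) for p, (lo, hi) in zip(preds, intervals))

def median_metric(preds, intervals):
    """Median miss distance: robust to a few badly modeled cross sections,
    useful when examining subsets of the model parameter space."""
    misses = sorted(interval_miss(p, lo, hi) for p, (lo, hi) in zip(preds, intervals))
    n = len(misses)
    return misses[n // 2] if n % 2 else 0.5 * (misses[n // 2 - 1] + misses[n // 2])
```

Treating the measurement as an interval rather than a point means a prediction anywhere inside the error bars contributes zero, which is what makes such metrics suitable when epistemic uncertainty dominates the data.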
Payload maintenance cost model for the space telescope
White, W. L.
1980-01-01
An optimum maintenance cost model for the space telescope over a fifteen-year mission cycle was developed. Various documents and subsequent updates of failure rates and configurations were reviewed. The reliability of the space telescope for one year, two and one half years, and five years was determined using the failure rates and configurations. The failure rates and configurations were also used in the maintenance simulation computer model, which simulates the failure patterns for the fifteen-year mission life of the space telescope. Cost algorithms associated with the maintenance options indicated by the failure patterns were developed and integrated into the model.
State-space prediction model for chaotic time series
Alparslan, A. K.; Sayar, M.; Atilgan, A. R.
1998-08-01
A simple method for predicting the continuation of a scalar chaotic time series ahead in time is proposed. The false nearest neighbors technique, in connection with time-delayed embedding, is employed to reconstruct the state space. A local forecasting model based upon the time evolution of the topological neighborhood in the reconstructed phase space is suggested. A moving root-mean-square error is used to monitor the error along the prediction horizon. The model is tested on the convection amplitude of the Lorenz model. The results indicate that for approximately 100 cycles of training data, the prediction follows the actual continuation very closely for about six cycles. The proposed model, like other state-space forecasting models, captures the long-term behavior of the system due to the use of spatial neighbors in the state space.
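A minimal sketch of the delay-embedding and local-forecasting idea, in plain Python: a 1-nearest-neighbour local model is used here for simplicity rather than the authors' exact formulation, and the names `delay_embed` and `predict_next` are hypothetical:

```python
import math

def delay_embed(series, dim, tau):
    """Reconstruct state vectors (s_t, s_{t-tau}, ..., s_{t-(dim-1)*tau})."""
    start = (dim - 1) * tau
    return [tuple(series[t - k * tau] for k in range(dim))
            for t in range(start, len(series))]

def predict_next(series, dim=3, tau=1):
    """Forecast one step ahead: find the past reconstructed state closest
    to the current one and return the value that followed it."""
    states = delay_embed(series, dim, tau)
    current = states[-1]
    # range(len(states) - 1) excludes the current state itself
    best_i = min(range(len(states) - 1),
                 key=lambda i: sum((a - b) ** 2 for a, b in zip(states[i], current)))
    # states[i] ends at time (dim-1)*tau + i, so its successor is one index later
    return series[(dim - 1) * tau + best_i + 1]

signal = [math.sin(0.3 * t) for t in range(200)]  # smooth test signal
forecast = predict_next(signal)
```

A real application would iterate this one-step forecast along the prediction horizon and track a moving root-mean-square error against held-out data, as the abstract describes.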
Properties of Brownian Image Models in Scale-Space
DEFF Research Database (Denmark)
Pedersen, Kim Steenstrup
2003-01-01
In this paper it is argued that the Brownian image model is the least committed, scale invariant, statistical image model which describes the second order statistics of natural images. Various properties of three different types of Gaussian image models (white noise, Brownian and fractional Brownian images) will be discussed in relation to linear scale-space theory, and it will be shown empirically that the second order statistics of natural images mapped into jet space may, within some scale interval, be modeled by the Brownian image model. This is consistent with the 1/f² power spectrum law that apparently governs natural images. Furthermore, the distribution of Brownian images mapped into jet space is Gaussian and an analytical expression can be derived for the covariance matrix of Brownian images in jet space. This matrix is also a good approximation of the covariance matrix…
A Learning State-Space Model for Image Retrieval
Directory of Open Access Journals (Sweden)
Lee Greg C
2007-01-01
Full Text Available This paper proposes an approach based on a state-space model for learning the user concepts in image retrieval. We first design a scheme of region-based image representation based on concept units, which are integrated with different types of feature spaces and with different region scales of image segmentation. The design of the concept units aims at describing similar characteristics at a certain perspective among relevant images. We present the details of our proposed approach based on a state-space model for interactive image retrieval, including likelihood and transition models, and we also describe some experiments that show the efficacy of our proposed model. This work demonstrates the feasibility of using a state-space model to estimate the user intuition in image retrieval.
Wiggermann, Neal; Smith, Kathryn; Kumpar, Dee
2017-01-01
Background A bed that is too small to allow patients to turn from supine to side lying increases the difficulty of mobilizing patients, which can increase risk of musculoskeletal injury to caregivers, increase risk of pressure injuries to patients, and reduce patient comfort. Currently, no guidance is available for what patient sizes are accommodated by the standard 91cm (36 in.)-wide hospital bed, and no studies have evaluated the relationship between anthropometric attributes and space requ...
International Nuclear Information System (INIS)
Raghavan, Narendran; Simunovic, Srdjan; Dehoff, Ryan; Plotkowski, Alex; Turner, John; Kirka, Michael; Babu, Suresh
2017-01-01
In addition to design geometry, surface roughness, and solid-state phase transformation, solidification microstructure plays a crucial role in controlling the performance of additively manufactured components. Crystallographic texture, primary dendrite arm spacing (PDAS), and grain size are directly correlated to local solidification conditions. We have developed a new melt-scan strategy for inducing site-specific, on-demand control of solidification microstructure. We were able to induce variations in grain size (30–150 μm) and PDAS (4–10 μm) in Inconel 718 parts produced by the electron beam additive manufacturing system (Arcam®). A conventional raster melt-scan resulted in a grain size of about 600 μm. The observed variations in grain size with different melt-scan strategies are rationalized using a numerical thermal and solidification model which accounts for the transient curvature of the melt pool and associated thermal gradients and liquid-solid interface velocities. The refinement in grain size at high cooling rates (>10⁴ K/s) is also attributed to the potential heterogeneous nucleation of grains ahead of the epitaxially growing solidification front. The variation in PDAS is rationalized using a coupled numerical-theoretical model as a function of the local solidification conditions (thermal gradient and liquid-solid interface velocity) of the melt pool.
Sizing and modelling of photovoltaic water pumping system
Al-Badi, A.; Yousef, H.; Al Mahmoudi, T.; Al-Shammaki, M.; Al-Abri, A.; Al-Hinai, A.
2018-05-01
With the decline in price of the photovoltaics (PVs) their use as a power source for water pumping is the most attractive solution instead of using diesel generators or electric motors driven by a grid system. In this paper, a method to design a PV pumping system is presented and discussed, which is then used to calculate the required size of the PV for an existing farm. Furthermore, the amount of carbon dioxide emissions saved by the use of PV water pumping system instead of using diesel-fuelled generators or electrical motor connected to the grid network is calculated. In addition, an experimental set-up is developed for the PV water pumping system using both DC and AC motors with batteries. The experimental tests are used to validate the developed MATLAB model. This research work demonstrates that using the PV water pumping system is not only improving the living conditions in rural areas but it is also protecting the environment and can be a cost-effective application in remote locations.
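As a rough illustration of the sizing arithmetic involved, the following is a textbook first-pass estimate of PV array size for a pump under assumed efficiencies, not the design method or the MATLAB model of the paper; all parameter values are hypothetical:

```python
def pv_array_size_kw(daily_volume_m3, head_m, peak_sun_hours,
                     pump_eff=0.4, system_eff=0.85):
    """First-pass PV array peak power (kWp) for a water pump.

    Daily hydraulic energy (kWh) = rho * g * V * H / 3.6e6, then divide by
    peak sun hours and the assumed pump and system efficiencies.
    """
    hydraulic_kwh = 1000 * 9.81 * daily_volume_m3 * head_m / 3.6e6
    return hydraulic_kwh / (peak_sun_hours * pump_eff * system_eff)
```

For example, pumping 100 m³/day against a 20 m head at 6 peak sun hours comes out to roughly 2.7 kWp under these assumed efficiencies.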
Portfolio size as function of the premium: modeling and optimization
DEFF Research Database (Denmark)
Asmussen, Søren; Christensen, Bent Jesper; Taksar, Michael I
An insurance company has a large number N of potential customers characterized by i.i.d. r.v.'s A1,…,AN giving the arrival rates of claims. Customers are risk averse, and a customer accepts an offered premium p according to his A-value. The modeling further involves a discount rate d>r of the customers, where r is the risk-free interest rate. Based on calculations of the customers' present values of the alternative strategies of insuring and not insuring, the portfolio size n(p) is derived, and the rate of claims from the insured customers is also given. Further, the value of p which is optimal for minimizing the ruin probability is derived in a diffusion approximation to the Cramér-Lundberg risk process with an added liability rate L of the company. The solution involves the Lambert W function. A similar discussion is given for extensions involving customers having only partial information…
Directory of Open Access Journals (Sweden)
Benjamin D Charlton
Full Text Available Examining how increasing distance affects the information content of vocal signals is fundamental for determining the active space of a given species' vocal communication system. In the current study we played back male koala bellows in a Eucalyptus forest to determine the extent that individual classification of male koala bellows becomes less accurate over distance, and also to quantify how individually distinctive acoustic features of bellows and size-related information degrade over distance. Our results show that the formant frequencies of bellows derived from Linear Predictive Coding can be used to classify calls to male koalas over distances of 1-50 m. Further analysis revealed that the upper formant frequencies and formant frequency spacing were the most stable acoustic features of male bellows as they propagated through the Eucalyptus canopy. Taken together these findings suggest that koalas could recognise known individuals at distances of up to 50 m and indicate that they should attend to variation in the upper formant frequencies and formant frequency spacing when assessing the identity of callers. Furthermore, since the formant frequency spacing is also a cue to male body size in this species and its variation over distance remained very low compared to documented inter-individual variation, we suggest that male koalas would still be reliably classified as small, medium or large by receivers at distances of up to 150 m.
Charlton, Benjamin D.; Reby, David; Ellis, William A. H.; Brumm, Jacqui; Fitch, W. Tecumseh
2012-01-01
Examining how increasing distance affects the information content of vocal signals is fundamental for determining the active space of a given species’ vocal communication system. In the current study we played back male koala bellows in a Eucalyptus forest to determine the extent that individual classification of male koala bellows becomes less accurate over distance, and also to quantify how individually distinctive acoustic features of bellows and size-related information degrade over distance. Our results show that the formant frequencies of bellows derived from Linear Predictive Coding can be used to classify calls to male koalas over distances of 1–50 m. Further analysis revealed that the upper formant frequencies and formant frequency spacing were the most stable acoustic features of male bellows as they propagated through the Eucalyptus canopy. Taken together these findings suggest that koalas could recognise known individuals at distances of up to 50 m and indicate that they should attend to variation in the upper formant frequencies and formant frequency spacing when assessing the identity of callers. Furthermore, since the formant frequency spacing is also a cue to male body size in this species and its variation over distance remained very low compared to documented inter-individual variation, we suggest that male koalas would still be reliably classified as small, medium or large by receivers at distances of up to 150 m. PMID:23028996
Wiggermann, Neal; Smith, Kathryn; Kumpar, Dee
A bed that is too small to allow patients to turn from supine to side lying increases the difficulty of mobilizing patients, which can increase the risk of musculoskeletal injury to caregivers, increase the risk of pressure injuries to patients, and reduce patient comfort. Currently, no guidance is available on what patient sizes are accommodated by the standard 91 cm (36 in.) wide hospital bed, and no studies have evaluated the relationship between anthropometric attributes and the space required to turn in bed. The purpose of this research was to determine how much space individuals occupy when turning from supine to side lying, as predicted by their anthropometry (i.e., body dimensions), to establish guidance on selecting the appropriate bed size. Forty-seven adult participants (24 female) with body mass index (BMI) from 20 to 76 kg/m² participated in a laboratory study. Body dimensions were measured, and the envelope of space required to turn was determined using motion capture. Linear regressions estimated the relationship between anthropometric attributes and the space occupied when turning. BMI was strongly correlated (R = .88) with the space required to turn. Based on the linear regressions, individuals with BMI up to 35 kg/m² could turn left and right within 91 cm, and individuals with BMI up to 45 kg/m² could turn in one direction within 91 cm. BMI is a good predictor of the space required to turn from supine to lateral. Nurses should consider placing patients who are unable to laterally reposition themselves on a wider bed when BMI is greater than 35 kg/m², and should consider placing all patients with BMI greater than 45 kg/m² on a wider bed regardless of mobility. Hospital administrators can use historical demographic information about the BMI of their patient populations to plan facility-level procurement of equipment that accommodates their patients.
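The study's statistical pipeline, a linear regression of turn envelope on BMI followed by a threshold rule at the 91 cm bed width, can be sketched as below. The regression helper is generic ordinary least squares and the data points are invented for illustration; only the 35 and 45 kg/m² thresholds come from the abstract:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x with a single predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b  # intercept, slope

def recommended_bed(bmi):
    """Decision rule reported in the abstract: wider bed above the BMI
    thresholds for turning within a standard 91 cm surface."""
    if bmi > 45:
        return "wider bed regardless of mobility"
    if bmi > 35:
        return "wider bed if unable to self-reposition"
    return "standard 91 cm bed"

# hypothetical (BMI, turn-envelope cm) pairs lying on an exact line
intercept, slope = fit_line([20, 30, 40, 50], [60, 75, 90, 105])
```

In the actual study the fitted line, not a hand-picked rule, is what locates the BMI values at which the predicted envelope crosses 91 cm.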
A Simulation and Modeling Framework for Space Situational Awareness
International Nuclear Information System (INIS)
Olivier, S.S.
2008-01-01
This paper describes the development and initial demonstration of a new, integrated modeling and simulation framework, encompassing the space situational awareness enterprise, for quantitatively assessing the benefit of specific sensor systems, technologies and data analysis techniques. The framework is based on a flexible, scalable architecture to enable efficient, physics-based simulation of the current SSA enterprise, and to accommodate future advancements in SSA systems. In particular, the code is designed to take advantage of massively parallel computer systems available, for example, at Lawrence Livermore National Laboratory. The details of the modeling and simulation framework are described, including hydrodynamic models of satellite intercept and debris generation, orbital propagation algorithms, radar cross section calculations, optical brightness calculations, generic radar system models, generic optical system models, specific Space Surveillance Network models, object detection algorithms, orbit determination algorithms, and visualization tools. The use of this integrated simulation and modeling framework on a specific scenario involving space debris is demonstrated
Space ecoliteracy- five informal education models for community empowerment
Venkataramaiah, Jagannatha; Jagannath, Sahana; J, Spandana; J, Sadhana; Jagannath, Shobha
Space ecoliteracy is a historical necessity and a vital aspect of the space age. Space situational awareness has taught mankind to look inward while stretching beyond the cradle in human endeavours. Quality of life for everyone on the only home of mankind, TERRA, will be feasible only once space ecoliteracy is realized among all stakeholders in the space quest. The objectives of informal environmental education (UNESCO/UNEP/IEEP, 1977) mandate awareness, attitude, knowledge, skill and participation in both the individual and community domains. Applications of space technology in both the telecommunications and remote sensing domains have made clear that mankind faces the challenge of learning and affirming earthmanship. The community empowerment focus following the Earth Summit 1992 mandate of sustainable development has produced a deluge of best practices in the agriculture, urban, industrial and service sectors all over the globe. Further, deployment of space technologies has proved its immense potential only after adopting a participatory approach at the individual and community levels. The Indian space programme, in its 44th year of space service to national development, has demonstrated self-reliance in space technology for human development. Space technology for the most underdeveloped is a success story in both communication and information tools for quality of life. This presentation covers five space ecoliteracy models of informal environmental education designed and validated from 1985 to date, namely: 1) Ecological Environmental Studies by Students (EESS, 1988), cited as one of the 20 best eco-education models by Earth Day Network; 2) Community Eco Literacy Campaign (CEL, 2000), cited as a partner under the Clean Up the World Campaign, UN; 3) Space Eco Literacy (2011), an informal 8-week space eco literacy training reported at the 39th COSPAR assembly; 4) Space Eco Literacy by Practice (2014), an interface with formal education at institutions; and 5) Space Ecoliteracy Mission as a space outreach in
Field space entanglement entropy, zero modes and Lifshitz models
Huffel, Helmuth; Kelnhofer, Gerald
2017-12-01
The field space entanglement entropy of a quantum field theory is obtained by integrating out a subset of its fields. We study an interacting quantum field theory consisting of massless scalar fields on a closed compact manifold M. To this model we associate its Lifshitz dual model. The ground states of both models are invariant under constant shifts. We interpret this invariance as gauge symmetry and subject the models to proper gauge fixing. By applying the heat kernel regularization one can show that the field space entanglement entropies of the massless scalar field model and of its Lifshitz dual are agreeing.
Field space entanglement entropy, zero modes and Lifshitz models
Directory of Open Access Journals (Sweden)
Helmuth Huffel
2017-12-01
Full Text Available The field space entanglement entropy of a quantum field theory is obtained by integrating out a subset of its fields. We study an interacting quantum field theory consisting of massless scalar fields on a closed compact manifold M. To this model we associate its Lifshitz dual model. The ground states of both models are invariant under constant shifts. We interpret this invariance as gauge symmetry and subject the models to proper gauge fixing. By applying the heat kernel regularization one can show that the field space entanglement entropies of the massless scalar field model and of its Lifshitz dual are agreeing.
Sensitivity of Mantel Haenszel Model and Rasch Model as Viewed From Sample Size
ALWI, IDRUS
2011-01-01
The aim of this research is to compare the sensitivity of the Mantel-Haenszel and Rasch Model approaches for detecting differential item functioning (DIF), observed as a function of sample size. The two DIF methods were compared using simulated binary item response data sets of varying sample size; 200 and 400 examinees were used in the analyses, with DIF detection based on gender difference. These test conditions were replicated 4 tim...
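For reference, the Mantel-Haenszel approach pools 2x2 tables (group by item response) across ability strata into a common odds ratio; values far from 1 flag potential DIF. The estimator below is the standard MH common odds ratio, while the tuple layout and example counts are assumptions for illustration:

```python
def mantel_haenszel_or(tables):
    """MH common odds ratio over 2x2 tables (a, b, c, d) per ability stratum:
    a = reference correct, b = reference incorrect,
    c = focal correct,     d = focal incorrect."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# no DIF: both groups answer the item identically in every stratum
balanced = [(40, 10, 40, 10), (25, 25, 25, 25)]
# DIF: the focal group does worse at the same ability level
skewed = [(40, 10, 20, 30), (25, 25, 10, 40)]
```

Stratifying on ability before pooling is what separates genuine DIF from mere group differences in overall ability, which is the comparison the Rasch approach makes through item parameter estimates instead.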
Calculation and measurement of space charge in MV-size extruded cable systems under load conditions
Morshuis, P.H.F.; Bodega, R.; Fabiani, D.; Montanari, G.C.; Dissado, L.A.; Smit, J.J.
2007-01-01
A load current in DC high-voltage cables results in a temperature drop across the insulation, and hence a radial distribution of the insulation conductivity is found. A direct consequence is an accumulation of space charge in the bulk of the insulation, which may significantly affect its reliability.
Numerical method for estimating the size of chaotic regions of phase space
International Nuclear Information System (INIS)
Henyey, F.S.; Pomphrey, N.
1987-10-01
A numerical method for estimating irregular volumes of phase space is derived. The estimate weights the irregular area on a surface of section with the average return time to the section. We illustrate the method by application to the stadium and oval billiard systems and also apply the method to the continuous Henon-Heiles system. 15 refs., 10 figs
Effect of Different Size Dust Grains on the Properties of Solitary Waves in Space Environments
International Nuclear Information System (INIS)
Elwakil, S.A.; Zahran, M.A.; El-Shewy, E.K.; Abdelwahed, H.G.
2009-01-01
Propagation of nonlinear dust-acoustic (DA) waves in an unmagnetized collisionless dusty plasma consisting of dust grains that obey a power-law dust size distribution and nonthermal ions is investigated. For nonlinear DA waves, a reductive perturbation method is employed to obtain a Korteweg-de Vries (KdV) equation for the first-order potential. The effects of the dust size distribution, dust radius and the non-thermal distribution of ions on the soliton amplitude, width and energy of electrostatic solitary structures are presented
Sizing of air cleaning systems for access to nuclear plant spaces
International Nuclear Information System (INIS)
Estreich, P.J.
A mathematical basis is developed to provide the practicing engineer with a method for sizing air-cleaning systems for nuclear facilities. In particular, general formulas are provided to relate cleaning and contamination dynamics of an enclosure such that safe conditions are obtained when working crews enter. Included in these considerations is the sizing of an air-cleaning system to provide rapid decontamination of airborne radioactivity. Multiple-nuclide contamination sources, leak rate, direct radiation, contaminant mixing efficiency, filter efficiencies, air-cleaning-system operational modes, and criteria for maximum permissible concentrations are integrated into the procedure. (author)
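The core relation behind such sizing, for a single well-mixed enclosure with no ongoing source, is first-order decay of airborne concentration under recirculating filtration; inverting it gives the cleaning flow needed to reach a target concentration (e.g., a maximum permissible concentration) within a given entry delay. This is a generic ventilation sketch under ideal-mixing assumptions, not the paper's full multi-nuclide, multi-mode formulation:

```python
import math

def required_flow(volume, c0, c_target, t, filter_eff=1.0):
    """Solve C(t) = C0 * exp(-eff * Q * t / V) for the flow rate Q.

    volume and flow in consistent units (e.g. m3 and m3/h with t in hours);
    assumes perfect mixing, no leakage and no continuing source term.
    """
    if not 0 < c_target < c0:
        raise ValueError("target concentration must be below the initial value")
    return volume * math.log(c0 / c_target) / (filter_eff * t)
```

For a 1000 m³ enclosure that must come down by a factor of 100 in 2 h with ideal filters, this gives about 2300 m³/h; the paper layers leak rate, mixing efficiency, filter efficiency and multiple nuclides on top of this basic picture.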
Numerical modelling of elastic space tethers
DEFF Research Database (Denmark)
Kristiansen, Kristian Uldall; Palmer, P. L.; Roberts, R. M.
2012-01-01
In this paper the importance of the ill-posedness of the classical, non-dissipative massive tether model on an orbiting tether system is studied numerically. The computations document that via the regularisation of bending resistance a more reliable numerical integrator can be produced. Furthermore… It is also shown that on the slow manifold the dynamics of the satellites are well-approximated by the finite dimensional slack-spring model.
A Situative Space Model for Mobile Mixed-Reality Computing
DEFF Research Database (Denmark)
Pederson, Thomas; Janlert, Lars-Erik; Surie, Dipak
2011-01-01
This article proposes a situative space model that links the physical and virtual realms and sets the stage for complex human-computer interaction defined by what a human agent can see, hear, and touch, at any given point in time.
A reference model for space data system interconnection services
Pietras, John; Theis, Gerhard
1993-01-01
The widespread adoption of standard packet-based data communication protocols and services for spaceflight missions provides the foundation for other standard space data handling services. These space data handling services can be defined as increasingly sophisticated processing of data or information received from lower-level services, using a layering approach made famous in the International Organization for Standardization (ISO) Open System Interconnection Reference Model (OSI-RM). The Space Data System Interconnection Reference Model (SDSI-RM) incorporates the conventions of the OSIRM to provide a framework within which a complete set of space data handling services can be defined. The use of the SDSI-RM is illustrated through its application to data handling services and protocols that have been defined by, or are under consideration by, the Consultative Committee for Space Data Systems (CCSDS).
Space engineering modeling and optimization with case studies
Pintér, János
2016-01-01
This book presents a selection of advanced case studies that cover a substantial range of issues and real-world challenges and applications in space engineering. Vital mathematical modeling, optimization methodologies and numerical solution aspects of each application case study are presented in detail, with discussions of a range of advanced model development and solution techniques and tools. Space engineering challenges are discussed in the following contexts: •Advanced Space Vehicle Design •Computation of Optimal Low Thrust Transfers •Indirect Optimization of Spacecraft Trajectories •Resource-Constrained Scheduling •Packing Problems in Space •Design of Complex Interplanetary Trajectories •Satellite Constellation Image Acquisition •Re-entry Test Vehicle Configuration Selection •Collision Risk Assessment on Perturbed Orbits •Optimal Robust Design of Hybrid Rocket Engines •Nonlinear Regression Analysis in Space Engineering •Regression-Based Sensitivity Analysis and Robust Design ...
Validation of ecological state space models using the Laplace approximation
DEFF Research Database (Denmark)
Thygesen, Uffe Høgsbro; Albertsen, Christoffer Moesgaard; Berg, Casper Willestofte
2017-01-01
Many statistical models in ecology follow the state space paradigm. For such models, the important step of model validation rarely receives as much attention as estimation or hypothesis testing, perhaps due to lack of available algorithms and software. Model validation is often based on a naive… for estimation in general mixed effects models. Implementing one-step predictions in the R package Template Model Builder, we demonstrate that it is possible to perform model validation with little effort, even if the ecological model is multivariate, has non-linear dynamics, and whether observations… useful directions in which the model could be improved…
Phase space model for transmission of light beam
International Nuclear Information System (INIS)
Fu Shinian
1989-01-01
Based on Fermat's principle of ray optics, the Hamiltonian of an optical ray is derived by comparison with classical mechanics. A phase space model of the light beam is proposed, assuming that the light beam, regarded as a group of rays, can be described by an ellipse in the μ-phase space. The transmission of the light beam is therefore represented by a phase space matrix transformation. By means of this non-wave formulation, the same results are obtained as those from the wave equation, such as Kogelnik's ABCD law. As an example of the application of this model, the matching problem of the optical cavity is solved
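The matrix-transformation view mentioned above is the standard ray-transfer (ABCD) formalism: each optical element maps a ray (height, angle) by a 2x2 matrix, and cascaded elements multiply. A small sketch verifying the classic 2f-2f imaging system (generic paraxial optics, not the paper's specific cavity-matching calculation):

```python
def matmul(m, n):
    """2x2 matrix product."""
    return [[m[0][0]*n[0][0] + m[0][1]*n[1][0], m[0][0]*n[0][1] + m[0][1]*n[1][1]],
            [m[1][0]*n[0][0] + m[1][1]*n[1][0], m[1][0]*n[0][1] + m[1][1]*n[1][1]]]

def free_space(d):
    return [[1.0, d], [0.0, 1.0]]

def thin_lens(f):
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

# object at 2f, thin lens of focal length f, image plane at 2f:
# the system matrix is [[-1, 0], [-1/f, -1]], i.e. inverted unit magnification
f = 0.5
system = matmul(free_space(2 * f), matmul(thin_lens(f), free_space(2 * f)))
```

The same ABCD matrix also propagates a Gaussian beam's complex parameter via q' = (Aq + B)/(Cq + D) (Kogelnik's ABCD law), which is what reduces cavity matching to a matrix calculation.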
Detecting space-time disease clusters with arbitrary shapes and sizes using a co-clustering approach
Directory of Open Access Journals (Sweden)
Sami Ullah
2017-11-01
Full Text Available The ability to detect potential space-time clusters in spatio-temporal data on disease occurrences is necessary for conducting surveillance and implementing disease prevention policies. Most existing techniques use geometrically shaped (circular, elliptical or square) scanning windows to discover disease clusters. In certain situations, where disease occurrences tend to cluster in very irregularly shaped areas, these algorithms are not feasible in practice for the detection of space-time clusters. To address this problem, a new algorithm is proposed, which uses a co-clustering strategy to detect prospective and retrospective space-time disease clusters with no restriction on shape and size. The proposed method detects space-time disease clusters by tracking the changes in the space-time occurrence structure instead of an in-depth search over space. This method was used to detect potential clusters in the annual and monthly malaria data from Khyber Pakhtunkhwa Province, Pakistan, from 2012 to 2016, visualising the results on a heat map. The results of the annual data analysis showed that the most likely hotspot emerged in three sub-regions in the years 2013-2014. The most likely hotspots in the monthly data appeared in the months of July to October of each year and showed a strong periodic trend.
Evaluation of Mid-Size Male Hybrid III Models for use in Spaceflight Occupant Protection Analysis
Putnam, J.; Somers, J.; Wells, J.; Newby, N.; Currie-Gregg, N.; Lawrence, C.
2016-01-01
Introduction: In an effort to improve occupant safety during dynamic phases of spaceflight, the National Aeronautics and Space Administration (NASA) has worked to develop occupant protection standards for future crewed spacecraft. One key aspect of these standards is the identification of injury mechanisms through anthropometric test devices (ATDs). Within this analysis, both physical and computational ATD evaluations are required to reasonably encompass the vast range of loading conditions any spaceflight crew may encounter. In this study, the accuracy of publicly available mid-size male HIII ATD finite element (FE) models is evaluated within applicable loading conditions against extensive sled testing performed on their physical counterparts. Methods: A series of sled tests was performed at Wright-Patterson Air Force Base (WPAFB), employing variations of magnitude, duration, and impact direction to encompass the dynamic loading range expected for spaceflight. FE simulations were developed to the specifications of the test setup and driven using measured acceleration profiles. Both fast and detailed FE models of the mid-size male HIII were run to quantify differences in their accuracy and thus assess the applicability of each within this field. Results: Preliminary results identify the dependence of model accuracy on loading direction, magnitude, and rate. Additionally, the accuracy of individual response metrics is shown to vary across each model within the evaluated test conditions. Causes of model inaccuracy are identified based on the observed relationships. Discussion: Computational modeling provides an essential component of ATD injury metric evaluation used to ensure the safety of future spaceflight occupants. The assessment of current ATD models lays the groundwork for how these models can be used appropriately in the future. Identification of limitations and possible paths for improvement aids in the development of these effective analysis tools.
Computational Fluid Dynamics Model for Saltstone Vault 4 Vapor Space
International Nuclear Information System (INIS)
Lee, Si Young
2005-01-01
Computational fluid dynamics (CFD) methods have been used to estimate the flow patterns of the vapor space inside Saltstone Vault No. 4 under different operating scenarios. The purpose of this work is to examine the gas motions inside the vapor space under the current vault configurations. The CFD model took a three-dimensional, transient, momentum-energy coupled approach for the vapor space domain of the vault. The modeling calculations were based on prototypic vault geometry and expected normal operating conditions as defined by Waste Solidification Engineering. The modeling analysis was focused on the air flow patterns near the ventilated corner zones of the vapor space inside the Saltstone vault. The turbulence behavior and natural convection mechanism used in the present model were benchmarked against literature information and theoretical results. The verified model was applied to the Saltstone vault geometry for the transient assessment of the air flow patterns inside the vapor space of the vault region, using the boundary conditions provided by the customer. The present model considered two cases for the estimation of the flow patterns within the vapor space: a reference baseline case, and a case with a negative temperature gradient between the roof inner surface and the top grout surface, intended as a potential bounding condition. The flow patterns of the vapor space calculated by the CFD model demonstrate that ambient air comes into the vapor space of the vault through the lower-end ventilation hole and is heated by a Bénard-cell-type circulation before leaving the vault via the higher-end ventilation hole. The calculated results are consistent with the literature information.
Analysis of Approaches to the Near-Earth Orbit Cleanup from Space Debris of the Size Below 10 cm
Directory of Open Access Journals (Sweden)
V. I. Maiorova
2016-01-01
Full Text Available. Nowadays, many concepts aimed at space debris removal from the near-Earth orbits are under way at different stages of detailed engineering and design. As opposed to large-size space debris (upper stages, rocket bodies, non-active satellites), tracking small objects of space debris (SOSD), such as picosatellites, satellite fragments, pyrotechnic devices, and other items less than 10 cm in size, using ground stations is presently a challenge. This SOSD feature allows the authors to propose the two most rational approaches, which use, respectively, a passive and an active (promptly maneuverable) space vehicle (SV), and appropriate schematic diagrams for their collection:
1) Passive scheme: an SV is launched into an orbit characterized by a high mathematical expectation of collision with a large amount of SOSD and, accordingly, by a high probability of capturing them using either active or passive tools. The SV does not execute any maneuvers, but can be equipped with a propulsion system required for orbit maintenance and correction and also for solving the tasks of long-range guidance.
2) Active scheme: the SV is launched into the target (operating) orbit and executes a number of maneuvers to capture the SOSD using either active or passive tools. Thus, such an SV has to be equipped with a rather high-thrust propulsion system, which allows it to change its trajectory, and also with a guidance system to provide it with target coordinates. The guidance system can be built on either radio or optical devices; it can be installed onboard the debris-removal SV or onboard an SV which operates as a supply unit (if such SVs are foreseen).
The paper describes each approach, emphasizes advantages and disadvantages, and defines the cutting-edge technologies to be implemented.
Nuclear spectroscopy in large shell model spaces: recent advances
International Nuclear Information System (INIS)
Kota, V.K.B.
1995-01-01
Three different approaches are now available for carrying out nuclear spectroscopy studies in large shell model spaces and they are: (i) the conventional shell model diagonalization approach but taking into account new advances in computer technology; (ii) the recently introduced Monte Carlo method for the shell model; (iii) the spectral averaging theory, based on central limit theorems, in indefinitely large shell model spaces. The various principles, recent applications and possibilities of these three methods are described and the similarity between the Monte Carlo method and the spectral averaging theory is emphasized. (author). 28 refs., 1 fig., 5 tabs
International Nuclear Information System (INIS)
Liu, W; Ding, X; Hu, Y; Shen, J; Korte, S; Bues, M; Schild, S; Wong, W; Chang, J; Liao, Z; Sahoo, N; Herman, M
2016-01-01
Purpose: To investigate how spot size and spacing affect plan quality, especially plan robustness and the impact of the interplay effect, of robustly-optimized intensity-modulated proton therapy (IMPT) plans for lung cancer. Methods: Two robustly-optimized IMPT plans were created for 10 lung cancer patients: (1) one for a proton beam with in-air energy-dependent large spot size at isocenter (σ: 5–15 mm) and spacing (1.53σ); (2) the other for a proton beam with small spot size (σ: 2–6 mm) and spacing (5 mm). Both plans were generated on the average CTs with internal-gross-tumor-volume density overridden to irradiate the internal target volume (ITV). The root-mean-square-dose volume histograms (RVH) measured the sensitivity of the dose to uncertainties, and the areas under the RVH curves were used to evaluate plan robustness. Dose evaluation software was developed to model time-dependent spot delivery to incorporate the interplay effect with randomized starting phases of each field per fraction. Patient anatomy voxels were mapped from phase to phase via deformable image registration to score doses. Dose-volume-histogram indices including ITV coverage, homogeneity, and organs-at-risk (OAR) sparing were compared using Student's t-test. Results: Compared to large spots, small spots resulted in significantly better OAR sparing with comparable ITV coverage and homogeneity in the nominal plan. Plan robustness was comparable for the ITV and most OARs. With the interplay effect considered, significantly better OAR sparing with comparable ITV coverage and homogeneity was observed using smaller spots. Conclusion: Robust optimization with smaller spots significantly improves OAR sparing with comparable plan robustness and a similar impact of the interplay effect compared to larger spots. A small spot size requires the use of a larger number of spots, which gives the optimizer more freedom to render a plan more robust. The ratio between spot size and spacing was found to be more relevant to determine plan
Modeling, simulation, and concept design for hybrid-electric medium-size military trucks
Rizzoni, Giorgio; Josephson, John R.; Soliman, Ahmed; Hubert, Christopher; Cantemir, Codrin-Gruie; Dembski, Nicholas; Pisu, Pierluigi; Mikesell, David; Serrao, Lorenzo; Russell, James; Carroll, Mark
2005-05-01
A large scale design space exploration can provide valuable insight into vehicle design tradeoffs being considered for the U.S. Army's FMTV (Family of Medium Tactical Vehicles). Through a grant from TACOM (Tank-automotive and Armaments Command), researchers have generated detailed road, surface, and grade conditions representative of the performance criteria of this medium-sized truck and constructed a virtual powertrain simulator for both conventional and hybrid variants. The simulator incorporates the latest technology among vehicle design options, including scalable ultracapacitor and NiMH battery packs as well as a variety of generator and traction motor configurations. An energy management control strategy has also been developed to provide efficiency and performance. A design space exploration for the family of vehicles involves running a large number of simulations with systematically varied vehicle design parameters, where each variant is paced through several different mission profiles and multiple attributes of performance are measured. The resulting designs are filtered to remove dominated designs, exposing the multi-criterial surface of optimality (Pareto-optimal designs), and revealing the design tradeoffs as they impact vehicle performance and economy. The results are not yet definitive because ride and drivability measures were not included, and work is not finished on fine-tuning the modeled dynamics of some powertrain components. However, the work so far completed demonstrates the effectiveness of the approach to design space exploration, and the results to date suggest the powertrain configuration best suited to the FMTV mission.
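The filtering step described here, removing dominated designs to expose the Pareto-optimal surface, can be sketched in a few lines; the vehicle variants, attribute names, and numbers below are invented for illustration and are not taken from the study:

```python
# Sketch: Pareto filtering of a design-space exploration.
# All attributes are minimized; design names/values are illustrative assumptions.

def pareto_front(designs, objectives):
    """Return the designs not dominated in every objective by some other design."""
    front = []
    for i, a in enumerate(designs):
        dominated = False
        for j, b in enumerate(designs):
            if j == i:
                continue
            # b dominates a if b is no worse everywhere and strictly better somewhere
            if all(b[k] <= a[k] for k in objectives) and any(b[k] < a[k] for k in objectives):
                dominated = True
                break
        if not dominated:
            front.append(a)
    return front

# Illustrative variants: fuel use (L/100 km) and 0-50 km/h time (s), both minimized
variants = [
    {"name": "conventional", "fuel": 32.0, "accel": 9.8},
    {"name": "series-hybrid", "fuel": 25.5, "accel": 10.4},
    {"name": "parallel-hybrid", "fuel": 26.0, "accel": 11.0},  # dominated by series-hybrid
]
front = pareto_front(variants, objectives=("fuel", "accel"))
print([d["name"] for d in front])  # the non-dominated designs
```

In a real exploration the same filter runs over thousands of simulated variants and many more performance attributes; the quadratic loop above is the simplest correct form.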
Modeling Trees with a Space Colonization Algorithm
Morell Higueras, Marc
2014-01-01
This TFG (final-year project) covers the implementation of a procedural generation algorithm that builds a structure reminiscent of a temperate-climate tree, the conversion of that structure into a three-dimensional model, and the accompanying tool for visualizing the result and exporting it.
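The thesis text does not reproduce its code, so the following is only a minimal 2-D sketch of the space colonization idea (attraction points pull their nearest branch node; points reached by the growing skeleton are removed); the radii, step size, and point cloud are all assumptions:

```python
# Minimal 2-D space colonization sketch (Runions-style). Parameters are illustrative.
import math, random

random.seed(1)
# attraction points scattered in a "crown" region above the trunk base
attractors = [(random.uniform(-5, 5), random.uniform(1, 10)) for _ in range(200)]
nodes = [(0.0, 0.0)]  # trunk base
STEP, INFLUENCE, KILL = 0.8, 4.0, 1.0

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

for _ in range(60):
    # each attractor pulls its nearest branch node, if within the influence radius
    pull = {}
    for a in attractors:
        near = min(nodes, key=lambda n: dist(n, a))
        if dist(near, a) < INFLUENCE:
            pull.setdefault(near, []).append(a)
    if not pull:
        break
    for node, targets in pull.items():
        # grow one step along the average direction toward the pulling attractors
        dx = sum((a[0] - node[0]) / dist(node, a) for a in targets)
        dy = sum((a[1] - node[1]) / dist(node, a) for a in targets)
        norm = math.hypot(dx, dy) or 1.0
        nodes.append((node[0] + STEP * dx / norm, node[1] + STEP * dy / norm))
    # attractors reached by any node are "colonized" and removed
    attractors = [a for a in attractors if min(dist(n, a) for n in nodes) > KILL]

print(len(nodes), "branch nodes,", len(attractors), "attractors remaining")
```

Branching emerges naturally: once two nodes are each the nearest neighbor of different attractor clusters, the skeleton forks. A real implementation would also record parent links to extrude the mesh.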
Modeling space charge in beams for heavy-ion fusion
International Nuclear Information System (INIS)
Sharp, W.M.
1995-01-01
A new analytic model is presented which accurately estimates the radially averaged axial component of the space-charge field of an axisymmetric heavy-ion beam in a cylindrical beam pipe. The model recovers details of the field near the beam ends that are overlooked by simpler models, and the results compare well to exact solutions of Poisson's equation. Field values are shown for several simple beam profiles and are compared with values obtained from simpler models
A probabilistic model of RNA conformational space
DEFF Research Database (Denmark)
Frellsen, Jes; Moltke, Ida; Thiim, Martin
2009-01-01
The increasing importance of non-coding RNA in biology and medicine has led to a growing interest in the problem of RNA 3-D structure prediction. As is the case for proteins, RNA 3-D structure prediction methods require two key ingredients: an accurate energy function and a conformational sampling [...] the discrete nature of the fragments necessitates the use of carefully tuned, unphysical energy functions, and their non-probabilistic nature impairs unbiased sampling. We offer a solution to the sampling problem that removes these important limitations: a probabilistic model of RNA structure that allows [...] conformations for 9 out of 10 test structures, solely using coarse-grained base-pairing information. In conclusion, the method provides a theoretical and practical solution for a major bottleneck on the way to routine prediction and simulation of RNA structure and dynamics in atomic detail.
Formulating state space models in R with focus on longitudinal regression models
DEFF Research Database (Denmark)
Dethlefsen, Claus; Lundbye-Christensen, Søren
We provide a language for formulating a range of state space models. The described methodology is implemented in the R package sspir, available from cran.r-project.org. A state space model is specified similarly to a generalized linear model in R, by marking the time-varying terms in the formula [...]
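As a rough illustration of the model class sspir formulates (sspir itself is an R package; this sketch uses Python and invented numbers), the simplest state space model, a local level observed with noise, can be filtered recursively:

```python
# Kalman filter for a local-level state space model: latent level follows a
# random walk, observations add noise. Variances and data are illustrative.
def kalman_local_level(y, var_obs, var_state, m0=0.0, c0=1e6):
    m, c = m0, c0  # posterior mean and variance of the latent level
    means = []
    for obs in y:
        r = c + var_state               # predict: random-walk level
        k = r / (r + var_obs)           # Kalman gain
        m = m + k * (obs - m)           # update mean with the new observation
        c = (1 - k) * r                 # update variance
        means.append(m)
    return means

filtered = kalman_local_level([1.0, 1.2, 0.9, 1.1], var_obs=0.5, var_state=0.1)
print([round(m, 3) for m in filtered])
```

The diffuse prior (`c0=1e6`) makes the first filtered mean essentially equal to the first observation, mirroring how such packages initialize longitudinal regressions.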
Stability patterns for a size-structured population model and its stage-structured counterpart
DEFF Research Database (Denmark)
Zhang, Lai; Pedersen, Michael; Lin, Zhigui
2015-01-01
In this paper we compare a general size-structured population model, where a size-structured consumer feeds upon an unstructured resource, to its simplified stage-structured counterpart in terms of equilibrium stability. Stability of the size-structured model is understood in terms of an equivale[...] to the population level.
The attention-weighted sample-size model of visual short-term memory
DEFF Research Database (Denmark)
Smith, Philip L.; Lilburn, Simon D.; Corbett, Elaine A.
2016-01-01
[...] exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items [...]
Patching, Geoffrey R.; Englund, Mats P.; Hellstrom, Ake
2012-01-01
Despite the importance of both response probability and response time for testing models of choice, there is a dearth of chronometric studies examining systematic asymmetries that occur over time- and space-orders in the method of paired comparisons. In this study, systematic asymmetries in discriminating the magnitude of paired visual stimuli are…
Exactly solvable string models of curved space-time backgrounds
Russo, J.G.; Russo, J G; Tseytlin, A A
1995-01-01
We consider a new 3-parameter class of exact 4-dimensional solutions in closed string theory and solve the corresponding string model, determining the physical spectrum and the partition function. The background fields (4-metric, antisymmetric tensor, two Kaluza-Klein vector fields, dilaton and modulus) generically describe axially symmetric stationary rotating (electro)magnetic flux-tube type universes. Backgrounds of this class include both the dilatonic Melvin solution and the uniform magnetic field solution discussed earlier, as well as some singular space-times. Solvability of the string sigma model is related to its connection via duality to a much simpler looking model which is a "twisted" product of a flat 2-space and a space dual to the 2-plane. We discuss some physical properties of this model as well as a number of generalizations leading to larger classes of exact 4-dimensional string solutions.
Modeling electron fractionalization with unconventional Fock spaces.
Cobanera, Emilio
2017-08-02
It is shown that certain fractionally-charged quasiparticles can be modeled on D-dimensional lattices in terms of unconventional yet simple Fock algebras of creation and annihilation operators. These unconventional Fock algebras are derived from the usual fermionic algebra by taking roots (the square root, cube root, etc.) of the usual fermionic creation and annihilation operators. If the fermions carry non-Abelian charges, then this approach fractionalizes the Abelian charges only. In particular, the m-th root of a spinful fermion carries charge e/m and spin 1/2. Just like taking a root of a complex number, taking a root of a fermion yields a mildly non-unique result. As a consequence, there are several possible choices of quantum exchange statistics for fermion-root quasiparticles. These choices are tied to the dimensionality D of the lattice by basic physical considerations. One particular family of fermion-root quasiparticles is directly connected to the parafermion zero-energy modes expected to emerge in certain mesoscopic devices involving fractional quantum Hall states. Hence, as an application of potential mesoscopic interest, I investigate numerically the hybridization of Majorana and parafermion zero-energy edge modes caused by fractionalizing but charge-conserving tunneling.
Phase-Space Models of Solitary Electron Holes
DEFF Research Database (Denmark)
Lynov, Jens-Peter; Michelsen, Poul; Pécseli, Hans
1985-01-01
Two different phase-space models of solitary electron holes are investigated and compared with results from computer simulations of an actual laboratory experiment, carried out in a strongly magnetized, cylindrical plasma column. In the two models, the velocity distribution of the electrons...
Embedding a State Space Model Into a Markov Decision Process
DEFF Research Database (Denmark)
Nielsen, Lars Relund; Jørgensen, Erik; Højsgaard, Søren
2011-01-01
In agriculture Markov decision processes (MDPs) with finite state and action space are often used to model sequential decision making over time. For instance, states in the process represent possible levels of traits of the animal and transition probabilities are based on biological models...
Dynamic State Space Partitioning for External Memory Model Checking
DEFF Research Database (Denmark)
Evangelista, Sami; Kristensen, Lars Michael
2009-01-01
We describe a dynamic partitioning scheme usable by model checking techniques that divide the state space into partitions, such as most external memory and distributed model checking algorithms. The goal of the scheme is to reduce the number of transitions that link states belonging to different...
Characteristic length scale of input data in distributed models: implications for modeling grid size
Artan, G. A.; Neale, C. M. U.; Tarboton, D. G.
2000-01-01
The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model response. The semi-variogram and the characteristic length calculated from the spatial autocorrelation were used to determine the scale of variability of the remotely sensed and GIS-generated model input data. The data were collected from two hillsides at Upper Sheep Creek, a sub-basin of the Reynolds Creek Experimental Watershed, in southwest Idaho. The data were analyzed in terms of the semivariance and the integral of the autocorrelation. The minimum characteristic length associated with the variability of the data used in the analysis was 15 m. Simulated and observed radiometric surface temperature fields at different spatial resolutions were compared. The agreement between simulated and observed fields sharply declined beyond a 10×10 m² modeling grid size. A modeling grid size of about 10×10 m² was deemed to be the best compromise to achieve: (a) reduction of computation time and the size of the support data; and (b) a reproduction of the observed radiometric surface temperature.
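The empirical semivariance underlying this scale analysis is straightforward to compute; the sketch below uses a synthetic 1-D transect with invented temperatures, spacing, and lags, not the study's data:

```python
# Empirical semivariogram on a 1-D transect: gamma(h) = mean squared difference
# of pairs separated by lag h, halved. Data and spacing are illustrative.
import math

def semivariogram(values, spacing, max_lag):
    """Return {h: gamma(h)} for lags h = spacing, 2*spacing, ..., max_lag*spacing."""
    gammas = {}
    n = len(values)
    for lag in range(1, max_lag + 1):
        diffs = [(values[i + lag] - values[i]) ** 2 for i in range(n - lag)]
        gammas[lag * spacing] = sum(diffs) / (2 * len(diffs))
    return gammas

# Synthetic surface-temperature transect sampled every 5 m
temps = [20 + 3 * math.sin(i / 4.0) for i in range(40)]
gamma = semivariogram(temps, spacing=5, max_lag=6)
for h, g in gamma.items():
    print(f"h = {h:3d} m  gamma = {g:.3f}")
```

The lag at which gamma levels off (the range) gives the characteristic length; picking a modeling grid no coarser than that length is the compromise the study describes.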
Finite size scaling analysis on Nagel-Schreckenberg model for traffic flow
Balouchi, Ashkan; Browne, Dana
2015-03-01
The traffic flow problem, as a many-particle non-equilibrium system, has caught the interest of physicists for decades. Understanding traffic flow properties, and thus gaining the ability to control the transition from the free-flow phase to the jammed phase, will play a critical role in a future of emerging self-driving car technology. We have studied phase transitions in one-lane traffic flow through the mean velocity, distributions of car spacing, dynamic susceptibility, and jam persistence (as candidates for an order parameter), using the Nagel-Schreckenberg model to simulate traffic flow. A length-dependent transition has been observed for a range of maximum velocities greater than a certain value. Finite-size scaling analysis indicates power-law scaling of these quantities at the onset of the jammed phase.
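The Nagel-Schreckenberg update rule used in such simulations is compact; this sketch implements the standard four steps (accelerate, brake to the gap, random slowdown, move) on a circular road, with road length, density, v_max, and slowdown probability chosen purely for illustration:

```python
# One parallel Nagel-Schreckenberg update on a ring of L cells.
import random

def nasch_step(pos, vel, L, vmax, p_slow, rng):
    """Accelerate, brake to gap, randomize, move (all cars updated in parallel)."""
    order = sorted(range(len(pos)), key=lambda i: pos[i])
    new_pos, new_vel = list(pos), list(vel)
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % len(order)]          # next car on the ring
        gap = (pos[ahead] - pos[i] - 1) % L            # empty cells in front
        v = min(vel[i] + 1, vmax, gap)                 # accelerate, then brake
        if v > 0 and rng.random() < p_slow:            # random slowdown
            v -= 1
        new_vel[i] = v
        new_pos[i] = (pos[i] + v) % L
    return new_pos, new_vel

rng = random.Random(0)
L, n, vmax = 100, 20, 5                                # density 0.2
pos = rng.sample(range(L), n)
vel = [0] * n
for _ in range(200):
    pos, vel = nasch_step(pos, vel, L, vmax, 0.3, rng)
print("mean velocity:", sum(vel) / n)
```

Sweeping the density and measuring the mean velocity (or jam persistence) over many such runs, and repeating for several system sizes L, is what a finite-size scaling analysis of this model does.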
Modeling Coastal Vulnerability through Space and Time.
Hopper, Thomas; Meixler, Marcia S
2016-01-01
Coastal ecosystems experience a wide range of stressors including wave forces, storm surge, sea-level rise, and anthropogenic modification and are thus vulnerable to erosion. Urban coastal ecosystems are especially important due to the large populations these limited ecosystems serve. However, few studies have addressed the issue of urban coastal vulnerability at the landscape scale with spatial data that are finely resolved. The purpose of this study was to model and map coastal vulnerability and the role of natural habitats in reducing vulnerability in Jamaica Bay, New York, in terms of nine coastal vulnerability metrics (relief, wave exposure, geomorphology, natural habitats, exposure, exposure with no habitat, habitat role, erodible shoreline, and surge) under past (1609), current (2015), and future (2080) scenarios using InVEST 3.2.0. We analyzed vulnerability results both spatially and across all time periods, by stakeholder (ownership) and by distance to damage from Hurricane Sandy. We found significant differences in vulnerability metrics between past, current and future scenarios for all nine metrics except relief and wave exposure. The marsh islands in the center of the bay are currently vulnerable. In the future, these islands will likely be inundated, placing additional areas of the shoreline increasingly at risk. Significant differences in vulnerability exist between stakeholders; the Breezy Point Cooperative and Gateway National Recreation Area had the largest erodible shoreline segments. Significant correlations exist for all vulnerability (exposure/surge) and storm damage combinations except for exposure and distance to artificial debris. Coastal protective features, ranging from storm surge barriers and levees to natural features (e.g. wetlands), have been promoted to decrease future flood risk to communities in coastal areas around the world. Our methods of combining coastal vulnerability results with additional data and across multiple time
Space - A unique environment for process modeling R&D
Overfelt, Tony
1991-01-01
Process modeling, the application of advanced computational techniques to simulate real processes as they occur in regular use, e.g., welding, casting and semiconductor crystal growth, is discussed. Using the low-gravity environment of space will accelerate the technical validation of the procedures and enable extremely accurate determinations of the many necessary thermophysical properties. Attention is given to NASA's centers for the commercial development of space; joint ventures of universities, industries, and government agencies to study the unique attributes of space that offer potential for applied R&D and eventual commercial exploitation.
Quantum metric spaces as a model for pregeometry
International Nuclear Information System (INIS)
Alvarez, E.; Cespedes, J.; Verdaguer, E.
1992-01-01
A new arena for the dynamics of spacetime is proposed, in which the basic quantum variable is the two-point distance on a metric space. The scaling dimension (that is, the Kolmogorov capacity) in the neighborhood of each point then defines in a natural way a local concept of dimension. We study our model in the region of parameter space in which the resulting spacetime is not too different from a smooth manifold
Spectral decomposition of model operators in de Branges spaces
International Nuclear Information System (INIS)
Gubreev, Gennady M; Tarasenko, Anna A
2011-01-01
The paper is devoted to studying a class of completely continuous nonselfadjoint operators in de Branges spaces of entire functions. Among other results, a class of unconditional bases of de Branges spaces consisting of values of their reproducing kernels is constructed. The operators that are studied are model operators in the class of completely continuous non-dissipative operators with two-dimensional imaginary parts. Bibliography: 22 titles.
A Markov decision model for optimising economic production lot size ...
African Journals Online (AJOL)
Adopting such a Markov decision process approach, the states of a Markov chain represent possible states of demand. The decision of whether or not to produce additional inventory units is made using dynamic programming. This approach demonstrates the existence of an optimal state-dependent EPL size, and produces ...
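The dynamic-programming idea, a state-dependent produce-or-idle decision over demand states, can be sketched with value iteration; this toy model deliberately ignores inventory levels, and every number (transition matrix, profits, holding costs) is invented for illustration:

```python
# Value iteration over demand states for a produce/idle decision.
# A hypothetical, heavily simplified stand-in for the paper's EPL model.
def value_iteration(P, produce_profit, hold_cost, discount=0.9, tol=1e-8):
    n = len(P)
    V = [0.0] * n
    while True:
        newV, policy = [], []
        for s in range(n):
            cont = discount * sum(P[s][t] * V[t] for t in range(n))
            q_produce = produce_profit[s] + cont   # produce a lot this period
            q_idle = -hold_cost[s] + cont          # hold existing inventory
            newV.append(max(q_produce, q_idle))
            policy.append("produce" if q_produce >= q_idle else "idle")
        if max(abs(a - b) for a, b in zip(newV, V)) < tol:
            return newV, policy
        V = newV

# Two demand states: low (0) and high (1)
P = [[0.7, 0.3], [0.4, 0.6]]   # demand-state transition probabilities
profit = [-1.0, 4.0]           # net profit of producing a lot in each state
hold = [0.5, 0.5]              # cost of idling in each state
V, policy = value_iteration(P, profit, hold)
print(policy)
```

Because the discount factor is below one, the iteration is a contraction and converges to a state-dependent policy, the qualitative result the abstract reports for the optimal EPL size.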
Widowski, T. M; Caston, L. J; Casey-Trott, T. M; Hunniford, M. E
2017-01-01
Standards for feeder (a.k.a. feed trough) space allowance (SA) are based primarily on studies in conventional cages, where laying hens tend to eat simultaneously, limiting feeder space. Large furnished cages (FC) offer more total space and opportunities to perform a greater variety of behaviors, which may affect feeding behavior and feeder space requirements. Our objective was to determine the effects of floor/feeder SA on behavior at the feeder. LSL-Lite hens were housed in FC equipped with a nest, perches, and a scratch mat. Space allowances of either 520 cm² per bird (Low; 8.9 cm feeder space/hen) or 748 cm² per bird (High; 12.8 cm feeder space/hen) resulted in groups of 40 vs. 28 birds in small FC (SFC) and 80 vs. 55 in large FC (LFC). Chain feeders ran at 0500, 0800, 1100, 1400, and 1700, with lights on at 0500 and off at 1900 hours. Digital recordings of FC were scanned at chain feeder onset and every 15 min for one h after (5 scans × 5 feeding times × 2 d) to count the number of birds with their head in the feeder. All occurrences of aggressive pecks and displacements during 2 continuous 30-minute observations at 0800 h and 1700 h were also counted. Mixed-model repeated analyses tested the effects of SA, cage size, and time on the percent of hens feeding and the frequency of aggressive pecks and displacements. Surprisingly, the percent of birds feeding simultaneously was similar regardless of cage size (LFC: 23.0 ± 0.9%; SFC: 24.0 ± 1.0%; P = 0.44) or SA (Low: 23.8 ± 0.9%; High: 23.3 ± 1.0%; P = 0.62). More birds were observed feeding at 1700 h (35.3 ± 0.1%) than at any other time (P < 0.001). Feeder use differed by cage area (nest, middle, or scratch) over the day (P < 0.001). The frequency of aggressive pecks was low overall and not affected by SA or cage size. The frequency of displacements was also low but greater at Low SA (P = 0.001). There was little evidence of feeder competition at the Low SA in this study. PMID:29050409
A stochastic space-time model for intermittent precipitation occurrences
Sun, Ying; Stein, Michael L.
2016-01-01
Modeling a precipitation field is challenging due to its intermittent and highly scale-dependent nature. Motivated by the features of high-frequency precipitation data from a network of rain gauges, we propose a threshold space-time t random field (tRF) model for 15-minute precipitation occurrences. This model is constructed through a space-time Gaussian random field (GRF) with random scaling varying along time or space and time. It can be viewed as a generalization of the purely spatial tRF, and has a hierarchical representation that allows for Bayesian interpretation. Developing appropriate tools for evaluating precipitation models is a crucial part of the model-building process, and we focus on evaluating whether models can produce the observed conditional dry and rain probabilities given that some set of neighboring sites all have rain or all have no rain. These conditional probabilities show that the proposed space-time model has noticeable improvements in some characteristics of joint rainfall occurrences for the data we have considered.
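The thresholding mechanism behind such occurrence models can be illustrated in one dimension: a latent Gaussian process (here a temporal AR(1) stand-in for the space-time field, an assumption made purely for brevity) is thresholded to yield intermittent wet/dry states; all parameters are invented:

```python
# Thresholded latent Gaussian process as a toy occurrence model.
import random, math

rng = random.Random(42)
phi, threshold, T = 0.8, 1.0, 1000
z, occurrences = 0.0, []
for _ in range(T):
    # stationary unit-variance AR(1): z_t = phi*z_{t-1} + sqrt(1-phi^2)*eps_t
    z = phi * z + math.sqrt(1 - phi ** 2) * rng.gauss(0, 1)
    occurrences.append(1 if z > threshold else 0)

wet_frac = sum(occurrences) / T
print(f"wet fraction ~ {wet_frac:.2f}")
```

The autocorrelation of the latent field makes wet spells cluster in time, the intermittency the abstract targets; the full model replaces the AR(1) with a space-time Gaussian random field and adds random scaling to obtain t-distributed margins.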
NASA Space Radiation Program Integrative Risk Model Toolkit
Kim, Myung-Hee Y.; Hu, Shaowen; Plante, Ianik; Ponomarev, Artem L.; Sandridge, Chris
2015-01-01
NASA Space Radiation Program Element scientists have been actively involved in the development of an integrative risk model toolkit that includes models for acute radiation risk and organ dose projection (ARRBOD), NASA space radiation cancer risk projection (NSCR), hemocyte dose estimation (HemoDose), the GCR event-based risk model code (GERMcode), relativistic ion tracks (RITRACKS), NASA radiation track image (NASARTI), and the On-Line Tool for the Assessment of Radiation in Space (OLTARIS). This session will introduce the components of the risk toolkit with the opportunity for hands-on demonstrations. Brief descriptions of the tools are: ARRBOD for organ dose projection and acute radiation risk calculation from exposure to solar particle events; NSCR for projection of cancer risk from exposure to space radiation; HemoDose for retrospective dose estimation using multi-type blood cell counts; GERMcode for basic physical and biophysical properties of an ion beam, and biophysical and radiobiological properties of beam transport to the target in the NASA Space Radiation Laboratory beam line; RITRACKS for simulation of heavy ion and delta-ray track structure, radiation chemistry, DNA structure and DNA damage at the molecular scale; NASARTI for modeling the effects of space radiation on human cells and tissue by incorporating a physical model of tracks, cell nucleus, and DNA damage foci with image segmentation for automated counting; and OLTARIS, an integrated tool set utilizing HZETRN (High Charge and Energy Transport) intended to help scientists and engineers study the effects of space radiation on shielding materials, electronics, and biological systems.
Geodetic Space Weather Monitoring by means of Ionosphere Modelling
Schmidt, Michael
2017-04-01
The term space weather denotes physical processes and phenomena in space caused by radiation of energy, mainly from the Sun. Manifestations of space weather are (1) variations of the Earth's magnetic field, (2) the polar lights in the northern and southern hemispheres, (3) variations within the ionosphere, the part of the upper atmosphere characterized by the existence of free electrons and ions, (4) the solar wind, i.e. the permanent emission of electrons and photons, (5) the interplanetary magnetic field, and (6) electric currents, e.g. in the van Allen radiation belt. Ionosphere disturbances are often caused by so-called solar storms. A solar storm comprises solar events such as solar flares and coronal mass ejections (CMEs), which have different effects on the Earth. Solar flares may cause disturbances in positioning, navigation and communication. CMEs can cause severe disturbances and, in extreme cases, damage to or even destruction of modern infrastructure. Examples are interruptions to satellite services, including the global navigation satellite systems (GNSS), communication systems, Earth observation and imaging systems, or a potential failure of power networks. Currently the measurements of solar satellite missions such as STEREO and SOHO are used to forecast solar events. Besides these measurements, the Earth's ionosphere plays another key role in monitoring space weather, because it responds to solar storms with an increase of the electron density. Space-geodetic observation techniques, such as terrestrial GNSS, satellite altimetry, space-borne GPS (radio occultation), DORIS and VLBI, provide valuable global information about the state of the ionosphere. Additionally, geodesy has a long history and large experience in developing and using sophisticated analysis and combination techniques as well as empirical and physical modelling approaches. Consequently, geodesy is predestined to strongly support space weather monitoring via
Hyperstate matrix models : extending demographic state spaces to higher dimensions
Roth, G.; Caswell, H.
2016-01-01
1. Demographic models describe population dynamics in terms of the movement of individuals among states (e.g. size, age, developmental stage, parity, frailty, physiological condition). Matrix population models originally classified individuals by a single characteristic. This was enlarged to two
Shell model in large spaces and statistical spectroscopy
International Nuclear Information System (INIS)
Kota, V.K.B.
1996-01-01
For many nuclear structure problems of current interest it is essential to deal with the shell model in large spaces. For this, three different approaches are now in use, two of which are: (i) the conventional shell model diagonalization approach, taking into account new advances in computer technology; (ii) the shell model Monte Carlo method. A brief overview of these two methods is given. Large-space shell model studies raise fundamental questions regarding the information content of the shell model spectrum of complex nuclei. This led to the third approach: the statistical spectroscopy methods. The principles of statistical spectroscopy have their basis in nuclear quantum chaos, and they are described in some detail, substantiated by large-scale shell model calculations. (author)
Estimation methods for nonlinear state-space models in ecology
DEFF Research Database (Denmark)
Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro
2011-01-01
The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models are available to ecologists, however it is not always clear which is the appropriate method to choose. To this end, three approaches to estimation in the theta-logistic model for population dynamics were benchmarked by Wang (2007). Similarly, we examine and compare the estimation performance of three alternative methods using simulated data. The first approach is to partition the state-space into a finite number of states and formulate the problem as a hidden Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance...
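The first approach described, discretizing the state space and filtering as an HMM, can be sketched for a theta-logistic population model as follows; the parameter values and the simple uniform grid are illustrative assumptions, not those of the benchmarked study:

```python
import numpy as np

rng = np.random.default_rng(1)

# theta-logistic state-space model on the log scale (illustrative values)
r, K, theta = 0.5, 100.0, 1.0
sig_proc, sig_obs = 0.1, 0.2

def step_mean(x):
    # expected next log-abundance given current log-abundance x
    return x + r * (1.0 - (np.exp(x) / K) ** theta)

def gauss(x, mu, s):
    # unnormalized Gaussian kernel; normalization is handled explicitly below
    return np.exp(-0.5 * ((x - mu) / s) ** 2)

# simulate a trajectory and noisy observations
T = 100
x = np.empty(T)
x[0] = np.log(20.0)
for t in range(1, T):
    x[t] = step_mean(x[t - 1]) + sig_proc * rng.standard_normal()
y = x + sig_obs * rng.standard_normal(T)

# HMM (grid) filter: discretize the state space into m points
m = 200
grid = np.linspace(np.log(1.0), np.log(300.0), m)
P = gauss(grid[None, :], step_mean(grid)[:, None], sig_proc)
P /= P.sum(axis=1, keepdims=True)        # row-stochastic transition matrix

alpha = np.full(m, 1.0 / m)              # uniform prior over the grid
filtered_mean = np.empty(T)
for t in range(T):
    alpha = alpha * gauss(y[t], grid, sig_obs)   # measurement update
    alpha /= alpha.sum()
    filtered_mean[t] = alpha @ grid
    alpha = alpha @ P                            # time update

rmse = np.sqrt(np.mean((filtered_mean - x) ** 2))
print(rmse)
```

The filtered estimate should track the latent state more closely than the raw observations, at the cost of an m-by-m transition matrix.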
Josef, Noam; Berenshtein, Igal; Rousseau, Meghan; Scata, Gabriella; Fiorito, Graziano; Shashar, Nadav
2016-01-01
Camouflage is common throughout the phylogenetic tree and is largely used to minimize detection by predator or prey. Cephalopods, and in particular Sepia officinalis cuttlefish, are common models for camouflage studies. Predator avoidance behavior is particularly important in this group of soft-bodied animals that lack significant physical defenses. While previous studies have suggested that immobile cephalopods selectively camouflage to objects in their immediate surroundings, the camouflage characteristics of cuttlefish during movement are largely unknown. In a heterogeneous environment, the visual background and substrate features change quickly as the animal swims across them; here a substrate patch is a distinctive, high-contrast patch of substrate in the animal's trajectory. In the current study, we examine the effect of substrate patch size on cuttlefish camouflage, and specifically the minimal size of an object eliciting an intensity matching response while moving. Our results indicated that substrate patch size has a positive effect on the animal's reflectance change, and that the threshold patch size resulting in a camouflage response falls between 10 and 19 cm (width). These observations suggest that the animal's length (7.2-12.3 cm mantle length in our case) serves as a possible threshold filter below which objects are considered irrelevant for camouflage, reducing the frequency of reflectance changes, which may lead to detection. Accordingly, we have constructed a computational model capturing the main features of the camouflaging behavior observed in cephalopods during movement.
Nishimura, T; Doi, K; Fujimoto, H
2015-08-01
Touch-sensitive screen terminals enabling intuitive operation are used as input interfaces in a wide range of fields. Tablet terminals are one of the most common devices with a touch-sensitive screen. They are highly portable, enabling use under various conditions. On the other hand, they require a GUI designed to prevent usability from degrading under those conditions. For example, the angle of fingertip contact with the display changes according to finger posture during operation and how the case is held. When a human fingertip makes contact with an object, the contact area between the fingertip and the object increases or decreases as the contact angle changes. A touch-sensitive screen detects positions using the change in capacitance of the area touched by the fingertip; hence, differences in contact area between the touch-sensitive screen and fingertip resulting from different forefinger angles during operation could affect operability. However, this effect has never been studied. We therefore conducted an experiment to investigate the relationship between button size/spacing and operability, taking the effect of fingertip contact angle into account. As a result, we have been able to specify the button size and spacing conditions that enable accurate and fast operation regardless of the forefinger contact angle.
Structural Continuum Modeling of Space Shuttle External Tank Foam Insulation
Steeve, Brian; Ayala, Sam; Purlee, T. Eric; Shaw, Phillip
2006-01-01
This document is a viewgraph presentation reporting on work in modeling the foam insulation of the Space Shuttle External Tank. An analytical understanding of foam mechanics is required to design against structural failure. The Space Shuttle External Tank is covered primarily with closed-cell foam to prevent ice formation, protect the structure from ascent aerodynamic and engine plume heating, and delay break-up during re-entry. It is important that the foam does not shed unacceptable debris during ascent. Therefore a modeling effort for the foam insulation was undertaken.
SpaceWire model development technology for satellite architecture.
Energy Technology Data Exchange (ETDEWEB)
Eldridge, John M.; Leemaster, Jacob Edward; Van Leeuwen, Brian P.
2011-09-01
Packet switched data communications networks that use distributed processing architectures have the potential to simplify the design and development of new, increasingly more sophisticated satellite payloads. In addition, the use of reconfigurable logic may reduce the amount of redundant hardware required in space-based applications without sacrificing reliability. These concepts were studied using software modeling and simulation, and the results are presented in this report. Models of the commercially available, packet switched data interconnect SpaceWire protocol were developed and used to create network simulations of data networks containing reconfigurable logic with traffic flows for timing system distribution.
Formulating state space models in R with focus on longitudinal regression models
DEFF Research Database (Denmark)
Dethlefsen, Claus; Lundbye-Christensen, Søren
2006-01-01
We provide a language for formulating a range of state space models with response densities within the exponential family. The described methodology is implemented in the R-package sspir. A state space model is specified similarly to a generalized linear model in R, and then the time-varying terms...
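sspir is an R package; as a language-neutral illustration of the kind of Gaussian state space model it formulates, here is a minimal Kalman filter for a local level model in Python (the function and variable names are ours, not sspir's API):

```python
import numpy as np

def kalman_filter(y, F, G, V, W, m0, C0):
    """Minimal Kalman filter for the Gaussian state space model
       y_t = F x_t + v_t,  v_t ~ N(0, V)
       x_t = G x_{t-1} + w_t,  w_t ~ N(0, W)."""
    m, C = m0, C0
    means = []
    for yt in y:
        a = G @ m                        # predicted state mean
        R = G @ C @ G.T + W              # predicted state covariance
        f = F @ a                        # predicted observation
        Q = F @ R @ F.T + V              # predicted observation covariance
        K = R @ F.T @ np.linalg.inv(Q)   # Kalman gain
        m = a + K @ (yt - f)             # filtered state mean
        C = R - K @ F @ R                # filtered state covariance
        means.append(m.copy())
    return np.array(means)

# local level model: a random walk observed with noise
rng = np.random.default_rng(2)
T = 200
x = 5.0 + np.cumsum(0.1 * rng.standard_normal(T))
y = x + 0.5 * rng.standard_normal(T)
fm = kalman_filter(y.reshape(-1, 1), F=np.eye(1), G=np.eye(1),
                   V=0.25 * np.eye(1), W=0.01 * np.eye(1),
                   m0=np.zeros(1), C0=10.0 * np.eye(1))
print(fm[-1, 0])
```

Time-varying regression terms of the kind the abstract mentions fit the same scheme by letting F carry covariates and the state carry the evolving coefficients.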
Wang, Wen; Jiang, Ping; Yuan, Fuping; Wu, Xiaolei
2018-05-01
The size effects of nano-spaced basal stacking faults (SFs) on the tensile strength and deformation mechanisms of nanocrystalline pure cobalt and magnesium have been investigated by a series of large-scale 2D columnar and 3D molecular dynamics (MD) simulations. Unlike the strengthening effect of basal SFs in Mg alloys, the nano-spaced basal SFs are observed to have no strengthening effect on nanocrystalline pure cobalt and magnesium in the MD simulations. These observations could be attributed to two reasons: (i) many new basal SFs are formed before (for cobalt) or simultaneously with (for magnesium) the other deformation mechanisms (i.e. the formation of twins and edge dislocations) during tensile deformation; (ii) in hcp alloys, the segregation of alloy elements and impurities at typical interfaces, such as SFs, can stabilise them, enhancing the interactions with dislocations and thus elevating the strength. Without such segregation in pure hcp metals, the edge dislocations can cut through the basal SFs, although interactions between the dislocations and the pre-existing SFs/newly formed SFs are observed. The nano-spaced basal SFs are also found to have no restriction effect on the formation of deformation twins.
Requirements for modeling airborne microbial contamination in space stations
Van Houdt, Rob; Kokkonen, Eero; Lehtimäki, Matti; Pasanen, Pertti; Leys, Natalie; Kulmala, Ilpo
2018-03-01
Exposure to bioaerosols is one of the facets that affect indoor air quality, especially for people living in densely populated or confined habitats, and is associated with a wide range of health effects. Good indoor air quality is thus vital and a prerequisite for fully confined environments such as space habitats. Bioaerosols and microbial contamination in these confined space stations can have significant health impacts, considering the unique prevailing conditions and constraints of such habitats. Therefore, biocontamination in space stations is strictly monitored and controlled to ensure crew and mission safety. However, efficient bioaerosol control measures rely on a solid understanding of and knowledge about how these bioaerosols are created and dispersed, and which factors affect the survivability of the associated microorganisms. Here we review the current knowledge gained from relevant studies in this wide and multidisciplinary area of bioaerosol dispersion modeling and biological indoor air quality control, specifically taking into account the specific space conditions.
Redshift space clustering of galaxies and cold dark matter model
Bahcall, Neta A.; Cen, Renyue; Gramann, Mirt
1993-01-01
The distorting effect of peculiar velocities on the power spectrum and correlation function of IRAS and optical galaxies is studied. The observed redshift-space power spectra and correlation functions of IRAS and optical galaxies over the entire range of scales are directly compared with the corresponding redshift-space distributions from large-scale computer simulations of cold dark matter (CDM) models, in order to study the distortion effect of peculiar velocities on the power spectrum and correlation function of the galaxies. It is found that the observed power spectrum of IRAS and optical galaxies is consistent with the spectrum of an Omega = 1 CDM model. The problems that such a model currently faces may be related more to the high value of Omega in the model than to the shape of the spectrum. A low-density CDM model is also investigated and found to be consistent with the data.
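The linear-theory distortion the abstract refers to is commonly summarized by the Kaiser formula, under which the angle-averaged redshift-space power is boosted by a factor depending on beta = Omega^0.6 / b; the formula comes from the general literature, not from this abstract itself:

```python
def kaiser_boost(beta):
    """Angle-averaged linear-theory ratio of redshift-space to real-space
    power: P_s / P_r = 1 + 2*beta/3 + beta**2/5 (Kaiser 1987)."""
    return 1.0 + 2.0 * beta / 3.0 + beta**2 / 5.0

# for Omega = 1 and bias b = 1, beta = 1:
print(kaiser_boost(1.0))  # 28/15 ~ 1.867
```

So for an Omega = 1 model with unbiased galaxies, large-scale redshift-space power is amplified by nearly a factor of two relative to real space.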
Space-time modeling of electricity spot prices
DEFF Research Database (Denmark)
Abate, Girum Dagnachew; Haldrup, Niels
In this paper we derive a space-time model for electricity spot prices. A general spatial Durbin model that incorporates the temporal as well as spatial lags of spot prices is presented. Joint modeling of space-time effects is necessarily important when prices and loads are determined in a network...... in the spot price dynamics. Estimation of the spatial Durbin model shows that the spatial lag variable is as important as the temporal lag variable in describing the spot price dynamics. We use the partial derivatives impact approach to decompose the price impacts into direct and indirect effects, and we show...... that price effects transmit to neighboring markets and decline with distance. In order to examine the evolution of the spatial correlation over time, a time-varying-parameters spot price spatial Durbin model is estimated using recursive estimation. It is found that the spatial correlation within the Nord...
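The abstract does not spell out the model equation; in the spatial econometrics literature the spatial Durbin model is y = rho*W*y + X*beta + W*X*theta + e, and the direct/indirect decomposition follows from the partial-derivatives (impact) matrix (I - rho*W)^(-1) (I*beta_k + W*theta_k). A minimal numerical sketch with made-up coefficients, not estimates from the paper:

```python
import numpy as np

# row-standardized weight matrix: n price areas on a ring, two neighbors each
n = 5
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

rho, beta, theta = 0.4, 1.2, 0.3          # illustrative coefficients
A_inv = np.linalg.inv(np.eye(n) - rho * W)

# impact matrix: dy/dx_k = (I - rho W)^(-1) (I beta_k + W theta_k)
S = A_inv @ (np.eye(n) * beta + W * theta)
direct = np.mean(np.diag(S))              # average own-market effect
total = np.mean(S.sum(axis=1))            # average total effect
indirect = total - direct                 # spillover to other markets
print(direct, indirect, total)
```

With a row-stochastic W the average total effect reduces to (beta + theta)/(1 - rho), which makes the spillover share easy to read off.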
Applying Model Checking to Industrial-Sized PLC Programs
AUTHOR|(CDS)2079190; Darvas, Daniel; Blanco Vinuela, Enrique; Tournier, Jean-Charles; Bliudze, Simon; Blech, Jan Olaf; Gonzalez Suarez, Victor M
2015-01-01
Programmable logic controllers (PLCs) are embedded computers widely used in industrial control systems. Ensuring that PLC software complies with its specification is a challenging task. Formal verification has become a recommended practice to ensure the correctness of safety-critical software but is still underused in industry due to the complexity of building and managing formal models of real applications. In this paper, we propose a general methodology to perform automated model checking of complex properties expressed in temporal logics (e.g. CTL, LTL) on PLC programs. This methodology is based on an intermediate model (IM), meant to transform PLC programs written in various standard languages (ST, SFC, etc.) to different modeling languages of verification tools. We present the syntax and semantics of the IM and the transformation rules of the ST and SFC languages to the nuXmv model checker passing through the intermediate model. Finally, two real case studies of CERN PLC programs, written mainly in th...
Extended Cellular Automata Models of Particles and Space-Time
Beedle, Michael
2005-04-01
Models of particles and space-time are explored through simulations and theoretical models that use extended cellular automata. These extended cellular automata go beyond simple scalar binary cell fields to discrete multi-level group representations such as SO(2), SU(2), SU(3), and Spin(3,1). The propagation and evolution of these extended cellular automata are then compared to quantum field theories based on the "harmonic paradigm", i.e. built from an infinite number of harmonic oscillators, and to gravitational models.
Validated TRNSYS Model for Solar Assisted Space Heating System
International Nuclear Information System (INIS)
Abdalla, Nedal
2014-01-01
The present study involves a validated TRNSYS model for a solar assisted space heating system as applied to a residential building in Jordan, using the new detailed radiation models of TRNSYS 17.1 and the geometric building model Trnsys3d for the Google SketchUp 3D drawing program. The annual heating load for a building (Solar House) located at the Royal Scientific Society (RSS) in Jordan is estimated under the climatological conditions of Amman. The aim of this paper is to compare the measured thermal performance of the Solar House with that modeled using TRNSYS. The results showed that the annual measured space heating load for the building was 6,188 kWh while the heating load for the modeled building was 6,391 kWh. Moreover, the measured solar fraction for the solar system was 50% while the modeled solar fraction was 55%. A comparison of modeled and measured data resulted in percentage mean absolute errors for solar energy for space heating, auxiliary heating and solar fraction of 13%, 7% and 10%, respectively. The validated model will be useful for long-term performance simulation under different weather and operating conditions. (author)
Exactly solvable string models of curved space-time backgrounds
International Nuclear Information System (INIS)
Russo, J.G.
1995-01-01
We consider a new 3-parameter class of exact 4-dimensional solutions in closed string theory and solve the corresponding string model, determining the physical spectrum and the partition function. The background fields (4-metric, antisymmetric tensor, two Kaluza-Klein vector fields, dilaton and modulus) generically describe axially symmetric stationary rotating (electro)magnetic flux-tube type universes. Backgrounds of this class include both the ''dilatonic'' (a=1) and ''Kaluza-Klein'' (a=√(3)) Melvin solutions and the uniform magnetic field solution, as well as some singular space-times. Solvability of the string σ-model is related to its connection via duality to a simpler model which is a ''twisted'' product of a flat 2-space and a space dual to a 2-plane. We discuss some physical properties of this model (tachyonic instabilities in the spectrum, gyromagnetic ratio, issue of singularities, etc.). It provides one of the first examples of a consistent solvable conformal string model with an explicit D=4 curved space-time interpretation. (orig.)
Space-Charge-Limited Emission Models for Particle Simulation
Verboncoeur, J. P.; Cartwright, K. L.; Murphy, T.
2004-11-01
Space-charge-limited (SCL) emission of electrons from various materials is a common method of generating the high current beams required to drive high power microwave (HPM) sources. In the SCL emission process, sufficient space charge is extracted from a surface, often of complicated geometry, to drive the electric field normal to the surface close to zero. The emitted current is highly dominated by space charge effects as well as ambient fields near the surface. In this work, we consider computational models for the macroscopic SCL emission process including application of Gauss's law and the Child-Langmuir law for space-charge-limited emission. Models are described for ideal conductors, lossy conductors, and dielectrics. Also considered is the discretization of these models, and the implications for the emission physics. Previous work on primary and dual-cell emission models [Watrous et al., Phys. Plasmas 8, 289-296 (2001)] is reexamined, and aspects of the performance, including fidelity and noise properties, are improved. Models for one-dimensional diodes are considered, as well as multidimensional emitting surfaces, which include corners and transverse fields.
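For the planar one-dimensional diode case mentioned above, the Child-Langmuir law gives the space-charge-limited current density in closed form; a small sketch (the 500 kV / 1 cm numbers are illustrative, not taken from the paper):

```python
import math

EPS0 = 8.8541878128e-12        # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19     # elementary charge, C
M_ELECTRON = 9.1093837015e-31  # electron mass, kg

def child_langmuir_j(voltage, gap):
    """Space-charge-limited current density (A/m^2) for a planar diode:
    J = (4/9) * eps0 * sqrt(2e/m) * V^(3/2) / d^2."""
    return (4.0 / 9.0) * EPS0 * math.sqrt(2.0 * E_CHARGE / M_ELECTRON) \
        * voltage**1.5 / gap**2

# 500 kV across a 1 cm gap, the rough scale of an HPM diode
print(child_langmuir_j(5e5, 0.01))
```

In a particle-in-cell code this closed form is typically not imposed directly; instead enough charge is emitted each step to drive the surface-normal field toward zero, which the law then serves to validate.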
Grms or graphical representation of model spaces. Vol. I Basics
International Nuclear Information System (INIS)
Duch, W.
1986-01-01
This book presents a novel approach to the many-body problem in quantum chemistry, nuclear shell theory and solid-state theory. Many-particle model spaces are visualized using graphs, each path of a graph labeling a single basis function or a subspace of functions. Spaces of very high dimension are represented by small graphs. Model spaces have structure that is reflected in the architecture of the corresponding graphs, which in turn is reflected in the structure of the matrices corresponding to operators acting in these spaces. Insight into this structure leads to the formulation of very efficient computer algorithms. Calculation of matrix elements is reduced to comparison of paths in a graph, without ever looking at the functions themselves. Using only very rudimentary mathematical tools, graphical rules of matrix element calculation in abelian cases are derived; in particular, segmentation rules obtained in the unitary group approach are rederived. The graphs are solutions of Diophantine equations of the type appearing in different branches of applied mathematics. Graphical representation of model spaces should find as many applications as have been found for diagrammatic methods in perturbation theory
A generalized statistical model for the size distribution of wealth
International Nuclear Information System (INIS)
Clementi, F; Gallegati, M; Kaniadakis, G
2012-01-01
In a recent paper in this journal (Clementi et al 2009 J. Stat. Mech. P02037), we proposed a new, physically motivated, distribution function for modeling individual incomes, having its roots in the framework of the κ-generalized statistical mechanics. The performance of the κ-generalized distribution was checked against real data on personal income for the United States in 2003. In this paper we extend our previous model so as to be able to account for the distribution of wealth. Probabilistic functions and inequality measures of this generalized model for wealth distribution are obtained in closed form. In order to check the validity of the proposed model, we analyze the US household wealth distributions from 1984 to 2009 and conclude an excellent agreement with the data that is superior to any other model already known in the literature. (paper)
A generalized statistical model for the size distribution of wealth
Clementi, F.; Gallegati, M.; Kaniadakis, G.
2012-12-01
In a recent paper in this journal (Clementi et al 2009 J. Stat. Mech. P02037), we proposed a new, physically motivated, distribution function for modeling individual incomes, having its roots in the framework of the κ-generalized statistical mechanics. The performance of the κ-generalized distribution was checked against real data on personal income for the United States in 2003. In this paper we extend our previous model so as to be able to account for the distribution of wealth. Probabilistic functions and inequality measures of this generalized model for wealth distribution are obtained in closed form. In order to check the validity of the proposed model, we analyze the US household wealth distributions from 1984 to 2009 and conclude an excellent agreement with the data that is superior to any other model already known in the literature.
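The κ-generalized income model referenced above has survival function exp_κ(−βx^α), with the κ-exponential replacing the ordinary exponential; a minimal sketch (parameter values are illustrative, and the wealth extension developed in the paper adds further structure not shown here):

```python
import math

def exp_kappa(u, kappa):
    """kappa-exponential: (sqrt(1 + k^2 u^2) + k u)^(1/k); tends to e^u as k -> 0."""
    if kappa == 0.0:
        return math.exp(u)
    return (math.sqrt(1.0 + kappa**2 * u**2) + kappa * u) ** (1.0 / kappa)

def survival(x, alpha, beta, kappa):
    """P(X > x) = exp_kappa(-beta * x**alpha): Weibull-like body, Pareto-like tail."""
    return exp_kappa(-beta * x**alpha, kappa)

# illustrative parameter values, not fitted estimates from the paper
print(survival(1.0, alpha=2.0, beta=1.0, kappa=0.5))
```

For large x the survival function decays like a power law with exponent alpha/kappa, which is what lets one family cover both the bulk and the heavy upper tail.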
Interfacial and Wall Transport Models for SPACE-CAP Code
International Nuclear Information System (INIS)
Hong, Soon Joon; Choo, Yeon Joon; Han, Tae Young; Hwang, Su Hyun; Lee, Byung Chul; Choi, Hoon; Ha, Sang Jun
2009-01-01
The development project for the domestic design code was launched to be used for the safety and performance analysis of pressurized light water reactors. The CAP (Containment Analysis Package) code has also been developed for containment safety and performance analysis side by side with SPACE. The CAP code treats three fields (gas, continuous liquid, and dispersed drop) for the assessment of containment-specific phenomena, and is featured by its multidimensional assessment capabilities. The thermal hydraulics solver has already been developed and is now under testing for its stability and soundness. As a next step, interfacial and wall transport models were set up. In order to develop the best model and correlation package for the CAP code, various models currently used in major containment analysis codes (GOTHIC, CONTAIN2.0, and CONTEMPT-LT) have been reviewed. The origins of the selected models used in these codes have also been examined to verify that the models do not conflict with proprietary rights. In addition, a literature survey of recent studies has been performed in order to incorporate better models into the CAP code. The models and correlations of SPACE were also reviewed. CAP models and correlations are composed of interfacial heat/mass and momentum transport models, and wall heat/mass and momentum transport models. This paper discusses those transport models in the CAP code
Interfacial and Wall Transport Models for SPACE-CAP Code
Energy Technology Data Exchange (ETDEWEB)
Hong, Soon Joon; Choo, Yeon Joon; Han, Tae Young; Hwang, Su Hyun; Lee, Byung Chul [FNC Tech., Seoul (Korea, Republic of); Choi, Hoon; Ha, Sang Jun [Korea Electric Power Research Institute, Daejeon (Korea, Republic of)
2009-10-15
The development project for the domestic design code was launched to be used for the safety and performance analysis of pressurized light water reactors. The CAP (Containment Analysis Package) code has also been developed for containment safety and performance analysis side by side with SPACE. The CAP code treats three fields (gas, continuous liquid, and dispersed drop) for the assessment of containment-specific phenomena, and is featured by its multidimensional assessment capabilities. The thermal hydraulics solver has already been developed and is now under testing for its stability and soundness. As a next step, interfacial and wall transport models were set up. In order to develop the best model and correlation package for the CAP code, various models currently used in major containment analysis codes (GOTHIC, CONTAIN2.0, and CONTEMPT-LT) have been reviewed. The origins of the selected models used in these codes have also been examined to verify that the models do not conflict with proprietary rights. In addition, a literature survey of recent studies has been performed in order to incorporate better models into the CAP code. The models and correlations of SPACE were also reviewed. CAP models and correlations are composed of interfacial heat/mass and momentum transport models, and wall heat/mass and momentum transport models. This paper discusses those transport models in the CAP code.
Improved Mathematical Models for Particle-Size Distribution Data
African Journals Online (AJOL)
BirukEdimon
School of Civil & Environmental Engineering, Addis Ababa Institute of Technology; Murray Rix ... two improved mathematical models to describe ... demand further improvement to handle the PSD ... statistics and the range of the optimized.
Statistical learning modeling method for space debris photometric measurement
Sun, Wenjing; Sun, Jinqiu; Zhang, Yanning; Li, Haisen
2016-03-01
Photometric measurement is an important way to identify space debris, but present methods of photometric measurement impose many constraints on the star image and need complex image processing. Aiming at these problems, a statistical learning modeling method for space debris photometric measurement is proposed based on the global consistency of the star image, and the statistical information of star images is used to eliminate measurement noise. First, the known stars on the star image are divided into training stars and testing stars. Then, the training stars are used in a least-squares fit to construct the photometric measurement model, and the testing stars are used to calculate the measurement accuracy of the photometric measurement model. Experimental results show that the accuracy of the proposed photometric measurement model is about 0.1 magnitudes.
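The abstract does not give the functional form of the fitted model; a toy version of the train/test scheme, using a one-parameter least-squares zero-point fit between instrumental and catalog magnitudes (all names and numbers here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic "known stars": catalog magnitudes and noisy instrumental magnitudes
n = 60
catalog_mag = rng.uniform(6.0, 12.0, n)
zero_point = 20.0                          # true value to recover
inst_mag = catalog_mag - zero_point + 0.05 * rng.standard_normal(n)

# split into training and testing stars as in the described scheme
train, test = np.arange(n) < 40, np.arange(n) >= 40

# least-squares fit of catalog_mag = inst_mag + zp (a one-parameter model,
# so the fit reduces to the mean difference over the training stars)
zp_hat = np.mean(catalog_mag[train] - inst_mag[train])

# measurement accuracy evaluated on the held-out testing stars
resid = catalog_mag[test] - (inst_mag[test] + zp_hat)
print(zp_hat, np.std(resid))
```

The residual scatter on the testing stars plays the role of the reported measurement accuracy; averaging over many training stars is what suppresses per-star noise.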
QSAR modeling and chemical space analysis of antimalarial compounds
Sidorov, Pavel; Viira, Birgit; Davioud-Charvet, Elisabeth; Maran, Uko; Marcou, Gilles; Horvath, Dragos; Varnek, Alexandre
2017-05-01
Generative topographic mapping (GTM) has been used to visualize and analyze the chemical space of antimalarial compounds as well as to build predictive models linking the structure of molecules with their antimalarial activity. For this, a database including 3000 molecules tested in one or several of 17 anti-Plasmodium activity assessment protocols has been compiled by assembling experimental data from in-house and ChEMBL databases. GTM classification models built on subsets corresponding to individual bioassays perform similarly to the earlier reported SVM models. Zones preferentially populated by active and inactive molecules, respectively, clearly emerge in the class landscapes supported by the GTM model. Their analysis resulted in the identification of privileged structural motifs of potential antimalarial compounds. Projection of marketed antimalarial drugs on this map allowed us to delineate several areas in the chemical space corresponding to different mechanisms of antimalarial activity. This helped us to make a suggestion about the mode of action of the molecules populating these zones.
State Space Reduction for Model Checking Agent Programs
S.-S.T.Q. Jongmans (Sung-Shik); K.V. Hindriks; M.B. van Riemsdijk; L. Dennis; O. Boissier; R.H. Bordini (Rafael)
2012-01-01
State space reduction techniques have been developed to increase the efficiency of model checking in the context of imperative programming languages. Unfortunately, these techniques cannot straightforwardly be applied to agents: the nature of states in the two programming paradigms
Magnetic Testing, and Modeling, Simulation and Analysis for Space Applications
Boghosian, Mary; Narvaez, Pablo; Herman, Ray
2012-01-01
The Aerospace Corporation (Aerospace) and Lockheed Martin Space Systems (LMSS) participated with the Jet Propulsion Laboratory (JPL) in the implementation of a magnetic cleanliness program for the NASA/JPL JUNO mission. The magnetic cleanliness program was applied from early flight system development up through system-level environmental testing. The JUNO magnetic cleanliness program required setting up a specialized magnetic test facility at Lockheed Martin Space Systems for testing the flight system, and a testing program with a facility for testing system parts and subsystems at JPL. The magnetic modeling, simulation and analysis capability was set up and exercised by Aerospace to provide qualitative and quantitative magnetic assessments of the magnetic parts, components, and subsystems prior to, or in lieu of, magnetic tests. Because of the sensitive nature of the fields and particles scientific measurements being conducted by the JUNO space mission to Jupiter, the imposition of stringent magnetic control specifications required a magnetic control program to ensure that the spacecraft's science magnetometers and plasma wave search coil were not magnetically contaminated by flight system magnetic interference. With Aerospace's magnetic modeling, simulation and analysis, JPL's system modeling and testing approach, and LMSS's test support, the project achieved a cost-effective path to a magnetically clean spacecraft. This paper presents lessons learned from the JUNO magnetic testing approach and Aerospace's modeling, simulation and analysis activities used to solve problems such as remnant magnetization and the performance of hard and soft magnetic materials within the targeted space system in applied external magnetic fields.
Classical model of the Dirac electron in curved space
International Nuclear Information System (INIS)
Barut, A.O.; Pavsic, M.
1987-01-01
The action for the classical model of the electron exhibiting Zitterbewegung is generalized to curved space by introducing a spin connection. The dynamical equations and the symplectic structure are given for several different choices of the variables. In particular, we obtain the equation of motion for spin and compare it with the Papapetrou equation. (author)
Space Object Radiometric Modeling for Hardbody Optical Signature Database Generation
2009-09-01
Introduction: This presentation summarizes recent activity in monitoring spacecraft health status using passive remote optical nonimaging ... Approved for public release; distribution is unlimited. It is beneficial to the observer/analyst to understand the fundamental optical signature variability associated with these detection and
Widowski, T M; Caston, L J; Casey-Trott, T M; Hunniford, M E
2017-09-01
Standards for feeder (a.k.a. feed trough) space allowance (SA) are based primarily on studies in conventional cages, where laying hens tend to eat simultaneously, limiting feeder space. Large furnished cages (FC) offer more total space and opportunities to perform a greater variety of behaviors, which may affect feeding behavior and feeder space requirements. Our objective was to determine the effects of floor/feeder SA on behavior at the feeder. LSL-Lite hens were housed in FC equipped with a nest, perches, and a scratch mat. SAs of either 520 cm2 per bird (Low; 8.9 cm feeder space/hen) or 748 cm2 per bird (High; 12.8 cm feeder space/hen) resulted in groups of 40 vs. 28 birds in small FC (SFC) and 80 vs. 55 in large FC (LFC). Chain feeders ran at 0500, 0800, 1100, 1400, and 1700, with lights on at 0500 and off at 1900 hours. Digital recordings of FC were scanned at chain feeder onset and every 15 min for one h after (5 scans × 5 feeding times × 2 d) to count the number of birds with their head in the feeder. All occurrences of aggressive pecks and displacements during 2 continuous 30-minute observations at 0800 h and 1700 h also were counted. Mixed model repeated analyses tested the effects of SA, cage size, and time on the percent of hens feeding and the frequency of aggressive pecks and displacements. Surprisingly, the percent of birds feeding simultaneously was similar regardless of cage size (LFC: 23.0 ± 0.9%; SFC: 24.0 ± 1.0%; P = 0.44) or SA (Low: 23.8 ± 0.9%; High: 23.3 ± 1.0%; P = 0.62). More birds were observed feeding at 1700 h (35.3 ± 0.1%) than at any other time (P Feeder use differed by cage area (nest, middle, or scratch) over the d (P feeder competition at the Low SA in this study. © The Author 2017. Published by Oxford University Press on behalf of Poultry Science Association.
Diffeomorphisms as symplectomorphisms in history phase space: Bosonic string model
International Nuclear Information System (INIS)
Kouletsis, I.; Kuchar, K.V.
2002-01-01
The structure of the history phase space G of a covariant field system and its history group (in the sense of Isham and Linden) is analyzed on the example of a bosonic string. The history space G includes the time map T from the spacetime manifold (the worldsheet) Y to a one-dimensional time manifold T as one of its configuration variables. A canonical history action is posited on G such that its restriction to the configuration history space yields the familiar Polyakov action. The standard Dirac-ADM action is shown to be identical with the canonical history action, the only difference being that the underlying action is expressed in two different coordinate charts on G. The canonical history action encompasses all individual Dirac-ADM actions corresponding to different choices T of foliating Y. The history Poisson brackets of spacetime fields on G induce the ordinary Poisson brackets of spatial fields in the instantaneous phase space G 0 of the Dirac-ADM formalism. The canonical history action is manifestly invariant both under spacetime diffeomorphisms Diff Y and temporal diffeomorphisms Diff T. Both of these diffeomorphisms are explicitly represented by symplectomorphisms on the history phase space G. The resulting classical history phase space formalism is offered as a starting point for projection operator quantization and consistent histories interpretation of the bosonic string model.
A Management Model for International Participation in Space Exploration Missions
George, Patrick J.; Pease, Gary M.; Tyburski, Timothy E.
2005-01-01
This paper proposes an engineering management model for NASA's future space exploration missions based on past experience working with the International Partners of the International Space Station. The authors have over 25 years of combined experience working with the European Space Agency, Japan Aerospace Exploration Agency, Canadian Space Agency, Italian Space Agency, Russian Space Agency, and their respective contractors in the design, manufacturing, verification, and integration of their elements' electric power systems into the United States on-orbit segment. The perspective presented is that of a specific subsystem integration role, and is offered so that the lessons learned from solving issues of a technical and cultural nature may be taken into account during the formulation of international partnerships. The types of unique problems encountered in interactions between international partners are reviewed. Solutions to the problems are offered, taking into consideration the technical implications. Through the process of investigating each solution, the important and significant issues associated with working with international engineers and managers are outlined. Potential solutions are then characterized by proposing a set of specific methodologies to jointly develop spacecraft configurations that benefit all international participants and maximize mission success and vehicle interoperability while minimizing cost.
Applying Model Based Systems Engineering to NASA's Space Communications Networks
Bhasin, Kul; Barnes, Patrick; Reinert, Jessica; Golden, Bert
2013-01-01
Systems engineering practices for complex systems and networks now require that requirements, architecture, and concept-of-operations product development teams harmonize their activities simultaneously to provide timely, useful and cost-effective products. When dealing with complex systems of systems, traditional systems engineering methodology quickly falls short of achieving project objectives, encumbered as it is by a number of disparate hardware and software tools, spreadsheets and documents used to grasp the concept of the network design and operation. In the case of NASA's space communication networks, the networks are geographically distributed, and so are their subject matter experts, so the team is challenged to create a common language and tools to produce its products. Using Model Based Systems Engineering methods and tools allows for a unified representation of the system in a model that supports a highly interrelated level of detail. To date, the Program System Engineering (PSE) team has been able to model each network from its top-level operational activities and system functions down to the atomic level through relational modeling decomposition. These models allow for a better understanding of the relationships between NASA's stakeholders and internal organizations, and of the impacts to all related entities due to integration and sustainment of existing systems. Understanding the existing systems is essential to an accurate and detailed study of the integration options being considered. In this paper, we identify the challenges the PSE team faced in its quest to unify complex legacy space communications networks and their operational processes. We describe the initial approaches undertaken and the evolution toward model based systems engineering applied to produce Space Communication and Navigation (SCaN) PSE products. We will demonstrate the practice of Model Based Systems Engineering applied to integrating space communication networks and the summary of its
Phase space analysis of some interacting Chaplygin gas models
Energy Technology Data Exchange (ETDEWEB)
Khurshudyan, M. [Academy of Sciences of Armenia, Institute for Physical Research, Ashtarak (Armenia); Tomsk State University of Control Systems and Radioelectronics, Laboratory for Theoretical Cosmology, Tomsk (Russian Federation); Tomsk State Pedagogical University, Department of Theoretical Physics, Tomsk (Russian Federation); Myrzakulov, R. [Eurasian National University, Eurasian International Center for Theoretical Physics, Astana (Kazakhstan)
2017-02-15
In this paper we discuss a phase space analysis of various interacting Chaplygin gas models in general relativity. Linear and nonlinear sign-changeable interactions are considered. For each case the appropriate late-time attractors of the field equations are found. The Chaplygin gas is one of the dark fluids actively considered in modern cosmology because it provides a joint model of dark energy and dark matter. (orig.)
Model Experiments for the Determination of Airflow in Large Spaces
DEFF Research Database (Denmark)
Nielsen, Peter V.
Model experiments are one of the methods used for the determination of airflow in large spaces. This paper will discuss the formation of the governing dimensionless numbers. It is shown that experiments with a reduced scale often will necessitate a fully developed turbulence level of the flow. Details of the flow from supply openings are very important for the determination of room air distribution. It is in some cases possible to make a simplified supply opening for the model experiment.
Space Environment Modelling with the Use of Artificial Intelligence Methods
Lundstedt, H.; Wintoft, P.; Wu, J.-G.; Gleisner, H.; Dovheden, V.
1996-12-01
Space-based technological systems are affected by space weather in many ways. Several severe failures of satellites have been reported at times of space storms. Our society also increasingly depends on satellites for communication, navigation, exploration, and research. Predictions of the conditions in the satellite environment have therefore become very important. We will here present predictions made with the use of artificial intelligence (AI) techniques, such as artificial neural networks (ANN) and hybrids of AI methods. We are developing a space weather model based on intelligent hybrid systems (IHS). The model consists of different forecast modules; each module predicts the space weather on a specific time-scale. The time-scales range from minutes to months, with fundamental time-scales of 1-5 minutes, 1-3 hours, 1-3 days, and 27 days. Solar and solar wind data are used as input data. From solar magnetic field measurements, either made on the ground at the Wilcox Solar Observatory (WSO) at Stanford or made from space by the satellite SOHO, solar wind parameters can be predicted and modelled with ANN and MHD models. Magnetograms from WSO are available on a daily basis. From SOHO, however, magnetograms will be available every 90 minutes. SOHO magnetograms as input to ANNs will therefore make it possible to predict even solar transient events. Geomagnetic storm activity can today be predicted with very high accuracy by means of ANN methods using solar wind input data. However, at present real-time solar wind data are only available during part of the day, from the satellite WIND. With the launch of ACE in 1997, solar wind data will be available 24 hours per day. The conditions of the satellite environment are disturbed not only at times of geomagnetic storms but also at times of intense solar radiation and highly energetic particles. These events are associated with increased solar activity. Predictions of these events are therefore
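The prediction step described above, mapping solar wind measurements to a storm/no-storm label, can be sketched with a single logistic unit trained by gradient descent. This is only a minimal stand-in for the ANN modules in the abstract: real space-weather models use multi-layer networks and measured solar-wind time series, while the features and thresholds here are synthetic assumptions.

```python
import math, random

def train_storm_classifier(data, epochs=300, lr=0.5):
    """Train a single logistic unit on (features, label) pairs.

    data: list of ((speed, bz_south), label) with features pre-scaled to [0, 1]
    and label 1.0 for 'storm'. Returns a callable predictor of storm probability.
    """
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for (speed, bz_south), label in data:
            z = w0 * speed + w1 * bz_south + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted storm probability
            g = p - label                    # gradient of the log-loss w.r.t. z
            w0 -= lr * g * speed
            w1 -= lr * g * bz_south
            b -= lr * g
    return lambda speed, bz: 1.0 / (1.0 + math.exp(-(w0 * speed + w1 * bz + b)))
```

On synthetic linearly separable data (say, a storm whenever the scaled speed plus southward-field magnitude exceeds a threshold), a unit like this recovers the boundary; the operational models in the abstract differ mainly in scale and in using real WSO/SOHO/WIND inputs.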
Preliminary Multi-Variable Cost Model for Space Telescopes
Stahl, H. Philip; Hendrichs, Todd
2010-01-01
Parametric cost models are routinely used to plan missions, compare concepts and justify technology investments. This paper reviews the methodology used to develop space telescope cost models; summarizes recently published single-variable models; and presents preliminary results for two- and three-variable cost models. Some of the findings are that increasing mass reduces cost; that it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and that technology development as a function of time reduces cost at the rate of 50% per 17 years.
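The structure of such a multi-variable parametric model can be sketched as a product of power-law and exponential terms. The aperture exponent and coefficient below are hypothetical placeholders (the paper's fitted values are not given in the abstract); the halving of cost every 17 years of technology development is the rate quoted above.

```python
def telescope_cost(aperture_m, year, coeff=1.0, exponent=1.3, ref_year=2010):
    """Illustrative parametric telescope cost model (arbitrary cost units).

    coeff and exponent are hypothetical; the 0.5**(years/17) factor encodes
    the quoted 50%-per-17-years technology-driven cost reduction.
    """
    tech = 0.5 ** ((year - ref_year) / 17.0)
    return coeff * aperture_m ** exponent * tech
```

With any aperture exponent below 2, cost per square meter of collecting area falls as aperture grows, reproducing the qualitative finding that large telescopes are cheaper per unit aperture than small ones.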
State space modeling of Memristor-based Wien oscillator
Talukdar, Abdul Hafiz Ibne
2011-12-01
State space modeling of a Memristor-based Wien 'A' oscillator has been demonstrated for the first time considering nonlinear ion drift in the Memristor. The time-dependent oscillating resistance of the Memristor is reported in both the state space solution and SPICE simulation, which plausibly provides the basis for realizing parametric oscillation with a Memristor-based Wien oscillator. In addition, the Memristor is shown to stabilize the final oscillation amplitude by means of its nonlinear dynamic resistance, which hints at eliminating the diode in the feedback network of the conventional Wien oscillator. © 2011 IEEE.
State space modeling of Memristor-based Wien oscillator
Talukdar, Abdul Hafiz Ibne; Radwan, Ahmed G.; Salama, Khaled N.
2011-01-01
State space modeling of a Memristor-based Wien 'A' oscillator has been demonstrated for the first time considering nonlinear ion drift in the Memristor. The time-dependent oscillating resistance of the Memristor is reported in both the state space solution and SPICE simulation, which plausibly provides the basis for realizing parametric oscillation with a Memristor-based Wien oscillator. In addition, the Memristor is shown to stabilize the final oscillation amplitude by means of its nonlinear dynamic resistance, which hints at eliminating the diode in the feedback network of the conventional Wien oscillator. © 2011 IEEE.
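The key state variable in such models is the ion-drift position, which makes the memristance time-dependent under an oscillating drive. A minimal sketch, not the paper's actual state-space equations: a nonlinear ion-drift memristor with a Joglekar-type window, driven by a sinusoidal current and integrated by forward Euler. All parameter values are illustrative assumptions.

```python
import math

# Illustrative device constants (assumed, not taken from the paper).
R_ON, R_OFF = 100.0, 16e3   # limiting memristances (ohms)
MU = 1e-14                  # dopant mobility (m^2 s^-1 V^-1)
D = 10e-9                   # device thickness (m)

def simulate(i_amp=1e-4, freq=10.0, w0=0.5, dt=1e-5, steps=20000):
    """Return the memristance time series under i(t) = i_amp*sin(2*pi*freq*t).

    State w in (0, 1) is the normalized dopant-boundary position; the window
    f(w) = w*(1-w) models nonlinear ion drift near the electrodes.
    """
    w, rs = w0, []
    k = MU * R_ON / D ** 2
    for n in range(steps):
        i = i_amp * math.sin(2 * math.pi * freq * n * dt)
        w += dt * k * i * w * (1.0 - w)      # nonlinear ion-drift state equation
        w = min(max(w, 1e-6), 1.0 - 1e-6)    # keep the state inside the device
        rs.append(R_ON * w + R_OFF * (1.0 - w))
    return rs
```

The returned resistance oscillates with the drive while staying between R_ON and R_OFF, which is the time-dependent oscillating memristance the abstract describes as the basis for parametric oscillation.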
Life sciences research in space: The requirement for animal models
Fuller, C. A.; Philips, R. W.; Ballard, R. W.
1987-01-01
Use of animals in NASA space programs is reviewed. Animals are needed because life science experimentation frequently requires long-term controlled exposure to environments, statistical validation, invasive instrumentation or biological tissue sampling, tissue destruction, exposure to dangerous or unknown agents, or sacrifice of the subject. The availability and use of human subjects inflight is complicated by the multiple needs and demands upon crew time. Because only living organisms can sense, integrate and respond to the environment around them, the sole use of tissue culture and computer models is insufficient for understanding the influence of the space environment on intact organisms. Equipment for spaceborne experiments with animals is described.
Truncated conformal space approach to scaling Lee-Yang model
International Nuclear Information System (INIS)
Yurov, V.P.; Zamolodchikov, Al.B.
1989-01-01
A numerical approach to 2D relativistic field theories is suggested. Considering a field theory model as an ultraviolet conformal field theory perturbed by a suitable relevant scalar operator, one studies it in finite volume (on a circle). The perturbed Hamiltonian acts in the conformal field theory space of states and its matrix elements can be extracted from the conformal field theory. Truncation of the space at a reasonable level results in a finite-dimensional problem for numerical analysis. The nonunitary field theory with the ultraviolet region controlled by the minimal conformal theory M(2/5) is studied in detail. 9 refs.; 17 figs
Space modeling with SolidWorks and NX
Duhovnik, Jože; Drešar, Primož
2015-01-01
Through a series of step-by-step tutorials and numerous hands-on exercises, this book aims to equip the reader with both a good understanding of the importance of space in the abstract world of engineers and the ability to create a model of a product in virtual space – a skill essential for any designer or engineer who needs to present ideas concerning a particular product within a professional environment. The exercises progress logically from the simple to the more complex; while SolidWorks or NX is the software used, the underlying philosophy is applicable to all modeling software. In each case, the explanation covers the entire procedure from the basic idea and production capabilities through to the real model; the conversion from 3D model to 2D manufacturing drawing is also clearly explained. Topics covered include modeling of prism, axisymmetric, symmetric, and sophisticated shapes; digitization of physical models using modeling software; creation of a CAD model starting from a physical model; free fo...
Kinetic model for transformation from nano-sized amorphous $TiO_2$ to anatase
Madras, Giridhar; McCoy, Benjamin J
2006-01-01
We propose a kinetic model for the transformation of nano-sized amorphous $TiO_2$ to anatase with associated coarsening by coalescence. Based on population balance (distribution kinetics) equations for the size distributions, the model applies a first-order rate expression for transformation combined with Smoluchowski coalescence for the coarsening particles. Size distribution moments (number and mass of particles) lead to dynamic expressions for extent of reaction and average anatase particl...
National Oceanic and Atmospheric Administration, Department of Commerce — This dataset represents sediment size predictions from a sediment spatial model developed for the New York offshore spatial planning area. The model also includes...
State-Space Modelling of Loudspeakers using Fractional Derivatives
DEFF Research Database (Denmark)
King, Alexander Weider; Agerkvist, Finn T.
2015-01-01
This work investigates the use of fractional order derivatives in modeling moving-coil loudspeakers. A fractional order state-space solution is developed, leading the way towards incorporating nonlinearities into a fractional order system. The method is used to calculate the response of a fractional harmonic oscillator, representing the mechanical part of a loudspeaker, showing the effect of the fractional derivative and its relationship to viscoelasticity. Finally, a loudspeaker model with a fractional order viscoelastic suspension and fractional order voice coil is fit to measurement data...
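The numerical workhorse for fractional order systems like this is a discrete approximation of the fractional derivative. A common choice, shown here as a sketch (the paper's exact discretization is not stated in the abstract), is the Grünwald-Letnikov formula, whose binomial weights can be generated by a simple recurrence.

```python
import math

def gl_weights(alpha, n):
    """Grünwald-Letnikov weights c_j = (-1)^j * binom(alpha, j), by recurrence."""
    c = [1.0]
    for j in range(1, n + 1):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / j))
    return c

def gl_fracderiv(f, alpha, t, h=1e-3):
    """Approximate the order-alpha fractional derivative of f at time t,
    D^alpha f(t) ~ h^(-alpha) * sum_j c_j * f(t - j*h), starting from f at 0."""
    n = round(t / h)
    c = gl_weights(alpha, n)
    return sum(c[j] * f(t - j * h) for j in range(n + 1)) / h ** alpha
```

As a sanity check, the half-order derivative of f(t) = t is known in closed form, D^(1/2) t = 2*sqrt(t/pi), and the approximation converges to it as the step h shrinks. In a fractional oscillator model, a term of non-integer order between 0 and 2 interpolates between stiffness and damping, which is how the viscoelastic suspension behavior mentioned above arises.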
A growing social network model in geographical space
Antonioni, Alberto; Tomassini, Marco
2017-09-01
In this work we propose a new model for the generation of social networks that includes their often ignored spatial aspects. The model is a growing one and links are created either taking space into account, or disregarding space and only considering the degree of target nodes. These two effects can be mixed linearly in arbitrary proportions through a parameter. We numerically show that for a given range of the combination parameter, and for given mean degree, the generated network class shares many important statistical features with those observed in actual social networks, including the spatial dependence of connections. Moreover, we show that the model provides a good qualitative fit to some measured social networks.
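The growth rule described above, with links formed by a linear mix of a spatial term and a degree term, can be sketched directly. The specific attachment kernel below (inverse distance for the spatial part, degree plus one for the preferential part) is an illustrative assumption; the paper's exact functional forms may differ.

```python
import random, math

def grow_network(n, m=2, lam=0.5, seed=1):
    """Grow a spatial network of n nodes in the unit square.

    Each new node attaches to m existing nodes drawn with weight
    lam * (1/distance) + (1 - lam) * (degree + 1): lam = 1 is purely
    spatial attachment, lam = 0 purely degree-based (the +1 keeps
    degree-zero nodes reachable). Returns positions, edges, degrees.
    """
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    deg = [0] * n
    edges = []
    for new in range(1, n):
        existing = list(range(new))
        targets = set()
        while len(targets) < min(m, new):
            weights = []
            for j in existing:
                d = math.dist(pos[new], pos[j]) + 1e-9  # avoid divide-by-zero
                weights.append(lam * (1.0 / d) + (1.0 - lam) * (deg[j] + 1.0))
            targets.add(rng.choices(existing, weights=weights)[0])
        for j in targets:
            edges.append((new, j))
            deg[new] += 1
            deg[j] += 1
    return pos, edges, deg
```

Sweeping lam between 0 and 1 then interpolates between a purely preferential-attachment network and a purely spatial one, which is the mixing parameter the abstract refers to.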
The standard model on non-commutative space-time
International Nuclear Information System (INIS)
Calmet, X.; Jurco, B.; Schupp, P.; Wohlgenannt, M.; Wess, J.
2002-01-01
We consider the standard model on a non-commutative space and expand the action in the non-commutativity parameter θ^μν. No new particles are introduced; the structure group is SU(3) x SU(2) x U(1). We derive the leading order action. At zeroth order the action coincides with the ordinary standard model. At leading order in θ^μν we find new vertices which are absent in the standard model on commutative space-time. The most striking features are couplings between quarks, gluons and electroweak bosons and many new vertices in the charged and neutral currents. We find that parity is violated in non-commutative QCD. The Higgs mechanism can be applied. QED is not deformed in the minimal version of the NCSM to the order considered. (orig.)
The standard model on non-commutative space-time
Energy Technology Data Exchange (ETDEWEB)
Calmet, X.; Jurco, B.; Schupp, P.; Wohlgenannt, M. [Sektion Physik, Universitaet Muenchen (Germany); Wess, J. [Sektion Physik, Universitaet Muenchen (Germany); Max-Planck-Institut fuer Physik, Muenchen (Germany)
2002-03-01
We consider the standard model on a non-commutative space and expand the action in the non-commutativity parameter θ^μν. No new particles are introduced; the structure group is SU(3) x SU(2) x U(1). We derive the leading order action. At zeroth order the action coincides with the ordinary standard model. At leading order in θ^μν we find new vertices which are absent in the standard model on commutative space-time. The most striking features are couplings between quarks, gluons and electroweak bosons and many new vertices in the charged and neutral currents. We find that parity is violated in non-commutative QCD. The Higgs mechanism can be applied. QED is not deformed in the minimal version of the NCSM to the order considered. (orig.)
Mapping from Speech to Images Using Continuous State Space Models
DEFF Research Database (Denmark)
Lehn-Schiøler, Tue; Hansen, Lars Kai; Larsen, Jan
2005-01-01
In this paper a system that transforms speech waveforms into animated faces is proposed. The system relies on continuous state space models to perform the mapping; this makes it possible to ensure video with no sudden jumps and allows continuous control of the parameters in 'face space'. The performance of the system is critically dependent on the number of hidden variables: with too few variables the model cannot represent the data, and with too many, overfitting is noticed. Simulations are performed on recordings of 3-5 sec. video sequences with sentences from the Timit database. From a subjective point of view the model is able to construct an image sequence from an unknown noisy speech sequence even though the number of training examples is limited.
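The "no sudden jumps" property comes from filtering observations through a continuous latent state. A minimal one-dimensional linear-Gaussian sketch of that idea is a Kalman filter: the latent trajectory it produces is much smoother than the raw observations. This is only an illustration of the mechanism; the paper's actual model maps speech features to face-space parameters and its dimensions and parameters are not given in the abstract.

```python
import random

def kalman_filter_1d(obs, a=0.98, q=0.01, r=1.0):
    """Kalman filter for x_t = a*x_{t-1} + w (var q), y_t = x_t + v (var r).

    Returns the filtered latent trajectory, which varies smoothly even when
    the observations y_t jump from step to step.
    """
    x, p = 0.0, 1.0
    out = []
    for y in obs:
        x, p = a * x, a * a * p + q          # predict
        k = p / (p + r)                      # Kalman gain
        x = x + k * (y - x)                  # update with the new observation
        p = (1.0 - k) * p
        out.append(x)
    return out
```

Feeding noisy measurements of a slowly varying signal through this filter yields a trajectory whose successive differences are far smaller than those of the raw input, which is exactly the jump-free behavior required for video synthesis.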
Waif goodbye! Average-size female models promote positive body image and appeal to consumers.
Diedrichs, Phillippa C; Lee, Christina
2011-10-01
Despite consensus that exposure to media images of thin fashion models is associated with poor body image and disordered eating behaviours, few attempts have been made to enact change in the media. This study sought to investigate an effective alternative to current media imagery, by exploring the advertising effectiveness of average-size female fashion models, and their impact on the body image of both women and men. A sample of 171 women and 120 men were assigned to one of three advertisement conditions: no models, thin models and average-size models. Women and men rated average-size models as equally effective in advertisements as thin and no models. For women with average and high levels of internalisation of cultural beauty ideals, exposure to average-size female models was associated with a significantly more positive body image state in comparison to exposure to thin models and no models. For men reporting high levels of internalisation, exposure to average-size models was also associated with a more positive body image state in comparison to viewing thin models. These findings suggest that average-size female models can promote positive body image and appeal to consumers.
Estimation of the PCR efficiency based on a size-dependent modelling of the amplification process
Lalam, N.; Jacob, C.; Jagers, P.
2005-01-01
We propose a stochastic model of the PCR amplification process as a size-dependent branching process, starting as a supercritical Bienaymé–Galton–Watson process in a transient phase and then entering a saturating, near-critical size-dependent phase. This model, based on the concept of saturation, allows one to
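The two regimes can be sketched with a size-dependent branching simulation in which each template molecule is copied with a probability that decays as the population approaches saturation. The saturation form p(N) = p_max * K / (K + N) and the parameter values are illustrative assumptions, not the paper's estimator.

```python
import random

def pcr_trajectory(n0=10, cycles=30, p_max=0.9, K=1e4, seed=7):
    """Size-dependent branching sketch of PCR.

    In each cycle every molecule is duplicated with probability
    p(N) = p_max * K / (K + N): near-exponential (supercritical) growth
    while N << K, near-critical saturation once N approaches and exceeds K.
    Returns the population after each cycle, starting with n0 templates.
    """
    rng = random.Random(seed)
    n, traj = n0, [n0]
    for _ in range(cycles):
        p = p_max * K / (K + n)
        n += sum(1 for _ in range(n) if rng.random() < p)  # binomial offspring
        traj.append(n)
    return traj
```

The per-cycle efficiency (relative increase) starts near p_max and collapses toward zero late in the run, which is the transition from the supercritical transient phase to the near-critical saturated phase described above.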
A critical evaluation of the insect body size model and causes of metamorphosis in solitary bees
The insect body size model posits that adult size is determined by growth rate and the duration of growth during the larval stage of development. Within the model, growth rate is regulated by many mechanistic elements that are influenced by both internal and external factors. However, the duration o...
Size exclusion chromatography models and its comparison with experiment
Czech Academy of Sciences Publication Activity Database
Netopilík, Miloš
2017-01-01
Roč. 8, 4 (Suppl) (2017), s. 29 E-ISSN 2157-7064. [International Conference and Exhibition on Advances in Chromatography & HPLC Techniques /3./. 13.07.2017-14.07.2017, Berlin] R&D Projects: GA ČR(CZ) GC17-04258J Institutional support: RVO:61389013 Keywords : model of separation * flow-rate influence Subject RIV: CD - Macromolecular Chemistry
Nonlinear sigma models with compact hyperbolic target spaces
International Nuclear Information System (INIS)
Gubser, Steven; Saleem, Zain H.; Schoenholz, Samuel S.; Stoica, Bogdan; Stokes, James
2016-01-01
We explore the phase structure of nonlinear sigma models with target spaces corresponding to compact quotients of hyperbolic space, focusing on the case of a hyperbolic genus-2 Riemann surface. The continuum theory of these models can be approximated by a lattice spin system which we simulate using Monte Carlo methods. The target space possesses interesting geometric and topological properties which are reflected in novel features of the sigma model. In particular, we observe a topological phase transition at a critical temperature, above which vortices proliferate, reminiscent of the Kosterlitz-Thouless phase transition in the O(2) model (V.L. Berezinskii, Sov. Phys. JETP 34 (1972) 610; J.M. Kosterlitz and D.J. Thouless, J. Phys. C 6 (1973) 1181). Unlike in the O(2) case, there are many different types of vortices, suggesting a possible analogy to the Hagedorn treatment of statistical mechanics of a proliferating number of hadron species. Below the critical temperature the spins cluster around six special points in the target space known as Weierstrass points. The diversity of compact hyperbolic manifolds suggests that our model is only the simplest example of a broad class of statistical mechanical models whose main features can be understood essentially in geometric terms.
Nonlinear sigma models with compact hyperbolic target spaces
Energy Technology Data Exchange (ETDEWEB)
Gubser, Steven [Joseph Henry Laboratories, Princeton University, Princeton, NJ 08544 (United States); Saleem, Zain H. [Department of Physics and Astronomy, University of Pennsylvania,Philadelphia, PA 19104 (United States); National Center for Physics, Quaid-e-Azam University Campus,Islamabad 4400 (Pakistan); Schoenholz, Samuel S. [Department of Physics and Astronomy, University of Pennsylvania,Philadelphia, PA 19104 (United States); Stoica, Bogdan [Walter Burke Institute for Theoretical Physics, California Institute of Technology,452-48, Pasadena, CA 91125 (United States); Stokes, James [Department of Physics and Astronomy, University of Pennsylvania,Philadelphia, PA 19104 (United States)
2016-06-23
We explore the phase structure of nonlinear sigma models with target spaces corresponding to compact quotients of hyperbolic space, focusing on the case of a hyperbolic genus-2 Riemann surface. The continuum theory of these models can be approximated by a lattice spin system which we simulate using Monte Carlo methods. The target space possesses interesting geometric and topological properties which are reflected in novel features of the sigma model. In particular, we observe a topological phase transition at a critical temperature, above which vortices proliferate, reminiscent of the Kosterlitz-Thouless phase transition in the O(2) model (V.L. Berezinskii, Sov. Phys. JETP 34 (1972) 610; J.M. Kosterlitz and D.J. Thouless, J. Phys. C 6 (1973) 1181). Unlike in the O(2) case, there are many different types of vortices, suggesting a possible analogy to the Hagedorn treatment of statistical mechanics of a proliferating number of hadron species. Below the critical temperature the spins cluster around six special points in the target space known as Weierstrass points. The diversity of compact hyperbolic manifolds suggests that our model is only the simplest example of a broad class of statistical mechanical models whose main features can be understood essentially in geometric terms.
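The lattice Monte Carlo machinery referred to above can be illustrated on the O(2) (XY) model that serves as the paper's point of comparison; the hyperbolic-target simulation replaces the circle-valued spin with a point on the genus-2 quotient, which is not reproduced here. This is a plain Metropolis sweep with periodic boundary conditions and assumed lattice size, temperature, and proposal width.

```python
import math, random

def metropolis_xy(L=16, beta=1.5, sweeps=200, seed=5):
    """Metropolis Monte Carlo for the 2D XY model, H = -sum_<ij> cos(t_i - t_j).

    Starts from a random (hot) configuration and returns the mean energy per
    site, which equilibrates well below the hot-start value of ~0 at low
    temperature (large beta).
    """
    rng = random.Random(seed)
    theta = [[rng.uniform(0.0, 2.0 * math.pi) for _ in range(L)] for _ in range(L)]

    def local_e(i, j, t):
        # Energy of site (i, j) at angle t with its 4 neighbours (periodic b.c.).
        return -sum(math.cos(t - theta[(i + di) % L][(j + dj) % L])
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

    for _ in range(sweeps):
        for i in range(L):
            for j in range(L):
                old = theta[i][j]
                new = old + rng.uniform(-0.5, 0.5)
                dE = local_e(i, j, new) - local_e(i, j, old)
                if dE < 0.0 or rng.random() < math.exp(-beta * dE):
                    theta[i][j] = new
    total = sum(local_e(i, j, theta[i][j]) for i in range(L) for j in range(L))
    return total / (2.0 * L * L)   # each bond is counted twice
```

Scanning beta across the Kosterlitz-Thouless point and counting vortices (plaquettes with winding ±1) is the standard extension of this loop; the hyperbolic model's many vortex types require tracking the richer target-space topology.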
Adaptive Modeling of the International Space Station Electrical Power System
Thomas, Justin Ray
2007-01-01
Software simulations provide NASA engineers the ability to experiment with spacecraft systems in a computer-imitated environment. Engineers currently develop software models that encapsulate spacecraft system behavior. These models can be inaccurate due to invalid assumptions, erroneous operation, or system evolution. Increasing accuracy requires manual calibration and domain-specific knowledge. This thesis presents a method for automatically learning system models without any assumptions regarding system behavior. Data stream mining techniques are applied to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). We also explore a knowledge fusion approach that uses traditional engineered EPS models to supplement the learned models. We observed that these engineered EPS models provide useful background knowledge to reduce predictive error spikes when confronted with making predictions in situations that are quite different from the training scenarios used when learning the model. Evaluations using ISS sensor data and existing EPS models demonstrate the success of the adaptive approach. Our experimental results show that adaptive modeling provides reductions in model error anywhere from 80% to 96% over these existing models. Final discussions include impending use of adaptive modeling technology for ISS mission operations and the need for adaptive modeling in future NASA lunar and Martian exploration.
Statistical properties of four effect-size measures for mediation models.
Miočević, Milica; O'Rourke, Holly P; MacKinnon, David P; Brown, Hendricks C
2018-02-01
This project examined the performance of classical and Bayesian estimators of four effect-size measures for the indirect effect in a single-mediator model and a two-mediator model. Compared to the proportion and ratio mediation effect sizes, standardized mediation effect-size measures were relatively unbiased and efficient in the single-mediator model and the two-mediator model. Percentile and bias-corrected bootstrap interval estimates of ab/s_Y and ab(s_X)/s_Y in the single-mediator model outperformed interval estimates of the proportion and ratio effect sizes in terms of power, Type I error rate, coverage, imbalance, and interval width. For the two-mediator model, standardized effect-size measures were superior to the proportion and ratio effect-size measures. Furthermore, it was found that Bayesian point and interval summaries of posterior distributions of standardized effect-size measures reduced excessive relative bias for certain parameter combinations. The standardized effect-size measures are the best effect-size measures for quantifying mediated effects.
Li, P; Chai, G H; Zhu, K H; Lan, N; Sui, X H
2015-01-01
Tactile sensory feedback plays a key role in the dexterous manipulation of prosthetic hands, and non-invasive transcutaneous electrical nerve stimulation (TENS) of the phantom finger perception (PFP) area is a clinically practical way to realize sensory feedback. To achieve high-spatial-resolution tactile feedback in the PFP region, we investigated the effects of electrode size and spacing on the evoked tactile sensations, with a view to optimizing the surface electrode array configuration. Six forearm-amputated subjects were recruited for psychophysical studies. As the diameter of the circular electrode increased from 3 mm to 12 mm, the threshold current intensity rose correspondingly across the different sensory modalities. Smaller electrodes could potentially provide higher sensation spatial resolution; however, the smaller the electrode, the fewer the sensory modalities that could be evoked. With a Φ3 mm electrode, subjects could hardly perceive any sensory modality at normal stimulation currents. In addition, the two-electrode discrimination distance (TEDD) in the phantom thumb perception area decreased with decreasing electrode size in both directions, parallel and perpendicular to the forearm, and no significant difference in TEDD was found between the two directions. These findings can guide the configuration optimization of TENS electrode arrays for high-spatial-resolution sensory feedback.
Foresight Model of Turkey's Defense Industries' Space Studies until 2040
Yuksel, Nurdan; Cifci, Hasan; Cakir, Serhat
2016-07-01
Being advanced in science and technology is essential for having a voice in the globalized world. For countries, making policies consistent with their societies' intellectual, economic and political infrastructure, and tying those policies to a vision embraced by all parties of society, is crucial for success. The resulting policies should ensure that a country's resources are used in the most effective and fastest way, identify the priorities and needs of society, and set goals and the roadmaps to reach them. In this sense, technology foresight studies based on justified forecasting in science and technology play a critical role in policy development. This article presents a foresight model, up to 2040, of the space studies of Turkey's defense industries, a field that has become an important part of community life and a foundation of many technologies. Turkey entered space technology studies late; to use its national resources quickly, efficiently and cost-effectively, within national and international collaboration, it should be directed toward pre-set goals. With these factors in mind, the technology foresight model of Turkey's defense industry space studies is presented. In the model, the present state of space studies in the world and in Turkey was analyzed, and a literature survey and a PEST analysis were carried out. The PEST analysis will feed a SWOT analysis, and a Delphi questionnaire will be used in the study. A two-round Delphi survey will be administered to participants from universities and from public and private organizations engaged in defense-industry space studies. Critical space technologies will be identified according to criticality measures determined by an expert survey, and space technology fields and goals will be ranked by their importance and feasibility indexes. Finally, for the
Classically integrable boundary conditions for symmetric-space sigma models
International Nuclear Information System (INIS)
MacKay, N.J.; Young, C.A.S.
2004-01-01
We investigate boundary conditions for the non-linear sigma model on the compact symmetric space G/H. The Poisson brackets and the classical local conserved charges necessary for integrability are preserved by boundary conditions which correspond to involutions which commute with the involution defining H. Applied to SO(3)/SO(2), the non-linear sigma model on S², these yield the great circles as boundary submanifolds. Applied to G×G/G, they reproduce known results for the principal chiral model
Modeling Natural Space Ionizing Radiation Effects on External Materials
Alstatt, Richard L.; Edwards, David L.; Parker, Nelson C. (Technical Monitor)
2000-01-01
Predicting the effective life of materials for space applications has become increasingly critical with the drive to reduce mission cost. Programs have considered many solutions to reduce launch costs, including novel low-mass materials and thin thermal blankets to reduce spacecraft mass. Determining the long-term survivability of these materials before launch is critical for mission success. This presentation describes an analysis performed on the outer layer of the passive thermal control blanket of the Hubble Space Telescope. This layer had degraded for unknown reasons during the mission; ionizing radiation (IR) induced embrittlement was suspected. A methodology was developed which allowed direct comparison between the energy deposition of the natural environment and that of the laboratory-generated environment. Commercial codes were used to model the natural space IR environment, predict energy deposition in the material from both natural and laboratory IR sources, and design the most efficient test. Results were optimized for total and local energy deposition with an iterative spreadsheet. This method has been used successfully for several laboratory tests at the Marshall Space Flight Center. The study showed that the natural space IR environment, by itself, did not cause the premature degradation observed in the thermal blanket.
Application of parameters space analysis tools for empirical model validation
Energy Technology Data Exchange (ETDEWEB)
Paloma del Barrio, E. [LEPT-ENSAM UMR 8508, Talence (France); Guyon, G. [Electricite de France, Moret-sur-Loing (France)
2004-01-01
A new methodology for empirical model validation has been proposed in the framework of Task 22 (Building Energy Analysis Tools) of the International Energy Agency. It involves two main steps: checking model validity and diagnosis. Both steps, as well as the underlying methods, were presented in the first part of the paper. In this part, they are applied to test modelling hypotheses in the thermal analysis of an actual building. Sensitivity analysis tools were first used to identify the parts of the model that can actually be tested on the available data. A preliminary diagnosis is then supplied by principal components analysis. Useful information for improving model behaviour was finally obtained by optimisation techniques. This example of application shows how analysis of the model parameter space is a powerful tool for empirical validation. In particular, diagnosis possibilities are greatly increased in comparison with residuals analysis techniques. (author)
Modeling Growth and Yield of Schizolobium amazonicum under Different Spacings
Directory of Open Access Journals (Sweden)
Gilson Fernandes da Silva
2013-01-01
This study presents an approach to modelling the growth and yield of the species Schizolobium amazonicum (Paricá), based on a study of different spacings in Pará, Brazil. Whole-stand models were employed, and two modelling strategies (Strategies A and B) were tested. Moreover, three scenarios were evaluated to assess the accuracy of the model in estimating total and commercial volumes at five years of age: complete absence of data (S1); available information on basal area, site index, dominant height, and number of trees at two years of age (S2); and the same information available at five years of age (S3). The results indicated that the 3 × 2 spacing has a higher mortality rate than normal and that, in general, greater spacing corresponds to larger diameter and average height and to smaller basal area and volume per hectare. In estimating total and commercial volumes for the three scenarios tested, Strategy B appears to be the most appropriate method for estimating the growth and yield of Paricá plantations in the study region, particularly because Strategy A showed a significant bias in its estimates.
Discrete random walk models for space-time fractional diffusion
International Nuclear Information System (INIS)
Gorenflo, Rudolf; Mainardi, Francesco; Moretti, Daniele; Pagnini, Gianni; Paradisi, Paolo
2002-01-01
A physical-mathematical approach to anomalous diffusion may be based on generalized diffusion equations (containing derivatives of fractional order in space and/or time) and related random walk models. By space-time fractional diffusion equation we mean an evolution equation obtained from the standard linear diffusion equation by replacing the second-order space derivative with a Riesz-Feller derivative of order α ∈ (0,2] and skewness θ (|θ| ≤ min{α, 2−α}), and the first-order time derivative with a Caputo derivative of order β ∈ (0,1]. Such an evolution equation implies for the flux a fractional Fick's law which accounts for spatial and temporal non-locality. The fundamental solution (for the Cauchy problem) of the fractional diffusion equation can be interpreted as a probability density evolving in time of a peculiar self-similar stochastic process that we view as a generalized diffusion process. By adopting appropriate finite-difference schemes of solution, we generate models of random walk discrete in space and time suitable for simulating random variables whose spatial probability density evolves in time according to this fractional diffusion equation
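The fully fractional scheme (Riesz-Feller in space, Caputo in time) is beyond a short sketch, but its structure can be illustrated in the classical limit α = 2, β = 1, where the finite-difference transition weights reduce to the standard explicit random-walk discretization of the diffusion equation. Everything below is that non-fractional limit; the grid sizes are arbitrary choices.

```python
# Classical limit (alpha = 2, beta = 1) of a discrete random walk whose
# density solves the diffusion equation: jump left/right with prob. mu,
# stay with prob. 1 - 2*mu, where mu = D*dt/dx^2 must be <= 1/2 so all
# transition probabilities are non-negative.
D, dx, dt = 1.0, 0.1, 0.004
mu = D * dt / dx**2
assert mu <= 0.5

nx, nt = 201, 250
p = [0.0] * nx
p[nx // 2] = 1.0            # walker starts at the origin (delta density)

for _ in range(nt):
    q = p[:]
    for j in range(1, nx - 1):
        p[j] = mu * q[j - 1] + (1 - 2 * mu) * q[j] + mu * q[j + 1]

total = sum(p)              # probability is conserved (away from the walls)
var = sum(p[j] * ((j - nx // 2) * dx) ** 2 for j in range(nx))
print(total, var)           # var should approach 2*D*t = 2.0 at t = nt*dt = 1
```

In the fractional case the three-point stencil is replaced by long-range transition weights built from Grünwald-Letnikov coefficients, giving heavy-tailed jumps (space fraction) and memory in time (time fraction).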
Space, time, and the third dimension (model error)
Moss, Marshall E.
1979-01-01
The space-time tradeoff of hydrologic data collection (the ability to substitute spatial coverage for temporal extension of records or vice versa) is controlled jointly by the statistical properties of the phenomena that are being measured and by the model that is used to meld the information sources. The control exerted on the space-time tradeoff by the model and its accompanying errors has seldom been studied explicitly. The technique, known as Network Analyses for Regional Information (NARI), permits such a study of the regional regression model that is used to relate streamflow parameters to the physical and climatic characteristics of the drainage basin. The NARI technique shows that model improvement is a viable and sometimes necessary means of improving regional data collection systems. Model improvement provides an immediate increase in the accuracy of regional parameter estimation and also increases the information potential of future data collection. Model improvement, which can only be measured in a statistical sense, cannot be quantitatively estimated prior to its achievement; thus an attempt to upgrade a particular model entails a certain degree of risk on the part of the hydrologist.
Bagheri-Khoulenjani, Shadab; Mirzadeh, Hamid; Etrati-Khosroshahi, Mohammad; Shokrgozar, Mohammad Ali
2013-06-01
In this study, nanocomposite microspheres based on chitosan/gelatin/nanohydroxyapatite were fabricated, and the effects of the nanohydroxyapatite/biopolymer (chitosan/gelatin) weight ratio (nHA/P), stirring rate, chitosan concentration, and biopolymer concentration on the particle size and morphology of the nanocomposite microspheres were investigated. Particle size was modeled by design of experiments using the response surface method. Particle size, microsphere morphology, and the distribution of nanoparticles within the composite microspheres were evaluated using optical microscopy, scanning electron microscopy (SEM), and transmission electron microscopy (TEM), respectively. X-ray diffraction and Fourier transform infrared spectroscopy were applied to study the physical and chemical characteristics of the microspheres. Results showed that by modulating the nHA/P ratio, chitosan concentration, polymer concentration, and stirring rate, it is possible to fabricate microspheres over a wide range of particle sizes (5-150 μm). Analysis of variance confirmed that the modified quadratic model can be used to predict the particle size of the nanocomposite microspheres within the design space. SEM studies showed that microspheres with different compositions had quite different morphologies, from dense to porous. TEM images demonstrated that nanoparticles were distributed uniformly within the polymeric matrix. MTT assays and cell culture studies showed that microspheres with different compositions possessed good biocompatibility. © 2012 Wiley Periodicals, Inc. J Biomed Mater Res Part A, 2013.
Directory of Open Access Journals (Sweden)
Parvaneh Shabanzadeh
2013-01-01
Artificial neural network (ANN) models can eliminate the need for expensive experimental investigation in various areas of manufacturing, including casting methods. An understanding of the interrelationships between input variables is essential for interpreting sensitivity data and optimizing design parameters. Silver nanoparticles (Ag-NPs) have attracted considerable attention for chemical, physical, and medical applications due to their exceptional properties. Nanocrystalline silver was synthesized in the interlamellar space of montmorillonite using the chemical reduction technique, a method with the advantage of size control, which is essential in nanometal synthesis. Small silver nanoparticles free of aggregation are favorable for several of these properties. In this investigation, an artificial neural network was trained to study the effects of different parameters, namely the AgNO3 concentration, reaction temperature, UV-visible wavelength, and montmorillonite (MMT) d-spacing, on the predicted size of the silver nanoparticles. Analysis of variance showed that the AgNO3 concentration and temperature were the most significant factors affecting the size of the silver nanoparticles. Using the best-performing artificial neural network, the predicted optimum conditions were an AgNO3 concentration of 1.0 M, an MMT d-spacing of 1.27 nm, a reaction temperature of 27°C, and a wavelength of 397.50 nm.
Alexander, W. M.; Tanner, W. G.; Anz, P. D.; Chen, A. L.
1986-01-01
Particulate matter possessing lunar escape velocity sufficient to enhance the cislunar meteoroid flux was investigated. While the interplanetary flux has been extensively studied, the lunar ejecta created by the impact of this material on the lunar surface is only now being studied. Two recently reported flux models are employed to calculate the total mass impacting the lunar surface due to the sporadic meteor flux. There is ample evidence to support the contention that the sporadic interplanetary meteoroid flux enhances the meteoroid flux of cislunar space through the creation of micron and submicron lunar ejecta with lunar escape velocity.
Energy Technology Data Exchange (ETDEWEB)
Pogosov, W.V., E-mail: walter.pogosov@gmail.com [N.L. Dukhov All-Russia Research Institute of Automatics, Moscow (Russian Federation); Institute for Theoretical and Applied Electrodynamics, Russian Academy of Sciences, Moscow (Russian Federation); Moscow Institute of Physics and Technology, Dolgoprudny (Russian Federation); Shapiro, D.S. [N.L. Dukhov All-Russia Research Institute of Automatics, Moscow (Russian Federation); Moscow Institute of Physics and Technology, Dolgoprudny (Russian Federation); V.A. Kotel'nikov Institute of Radio Engineering and Electronics, Russian Academy of Sciences, Moscow (Russian Federation); National University of Science and Technology MISIS, Moscow (Russian Federation); Bork, L.V. [N.L. Dukhov All-Russia Research Institute of Automatics, Moscow (Russian Federation); Institute for Theoretical and Experimental Physics, Moscow (Russian Federation); Onishchenko, A.I. [Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, Dubna (Russian Federation); Moscow Institute of Physics and Technology, Dolgoprudny (Russian Federation); Skobeltsyn Institute of Nuclear Physics, Moscow State University, Moscow (Russian Federation)
2017-06-15
We consider an exactly solvable inhomogeneous Dicke model which describes the interaction between a disordered ensemble of two-level systems and a single-mode boson field. The existing method for evaluating Richardson–Gaudin equations in the thermodynamic limit is extended to the case of the Bethe equations of the Dicke model. Using this extension, we present expressions for both the ground state and lowest excited state energies, as well as the leading-order finite-size corrections to these quantities, for an arbitrary distribution of individual spin energies. We then evaluate these quantities for an equally-spaced distribution (constant density of states). In particular, we study the evolution of the spectral gap and other related quantities. We also reveal regions of the phase diagram where finite-size corrections are of particular importance.
Dynamical 3-Space Gravity Theory: Effects on Polytropic Solar Models
Directory of Open Access Journals (Sweden)
May R. D.
2011-01-01
Numerous experiments and observations have confirmed the existence of a dynamical 3-space, detectable directly by light-speed anisotropy experiments and indirectly by means of novel gravitational effects, such as borehole g anomalies, predictable black hole masses, flat spiral-galaxy rotation curves, and the expansion of the universe, all without dark matter and dark energy. The dynamics of this 3-space follows from a unique generalisation of Newtonian gravity, once that is cast into a velocity formalism. This new theory of gravity is applied to a model of the Sun to compute new density, pressure and temperature profiles, using polytrope modelling of the equation of state for the matter. These results should be applied to a re-analysis of solar neutrino production, and to stellar evolution in general.
Grassmann phase space theory and the Jaynes–Cummings model
International Nuclear Information System (INIS)
Dalton, B.J.; Garraway, B.M.; Jeffers, J.; Barnett, S.M.
2013-01-01
The Jaynes–Cummings model of a two-level atom in a single mode cavity is of fundamental importance both in quantum optics and in quantum physics generally, involving the interaction of two simple quantum systems—one fermionic system (the TLA), the other bosonic (the cavity mode). Depending on the initial conditions a variety of interesting effects occur, ranging from ongoing oscillations of the atomic population difference at the Rabi frequency when the atom is excited and the cavity is in an n-photon Fock state, to collapses and revivals of these oscillations starting with the atom unexcited and the cavity mode in a coherent state. The observation of revivals for Rydberg atoms in a high-Q microwave cavity is key experimental evidence for quantisation of the EM field. Theoretical treatments of the Jaynes–Cummings model based on expanding the state vector in terms of products of atomic and n-photon states and deriving coupled equations for the amplitudes are a well-known and simple method for determining the effects. In quantum optics however, the behaviour of the bosonic quantum EM field is often treated using phase space methods, where the bosonic mode annihilation and creation operators are represented by c-number phase space variables, with the density operator represented by a distribution function of these variables. Fokker–Planck equations for the distribution function are obtained, and either used directly to determine quantities of experimental interest or used to develop c-number Langevin equations for stochastic versions of the phase space variables from which experimental quantities are obtained as stochastic averages. Phase space methods have also been developed to include atomic systems, with the atomic spin operators being represented by c-number phase space variables, and distribution functions involving these variables and those for any bosonic modes being shown to satisfy Fokker–Planck equations from which c-number Langevin equations are
Impact of Battery’s Model Accuracy on Size Optimization Process of a Standalone Photovoltaic System
Directory of Open Access Journals (Sweden)
Ibrahim Anwar Ibrahim
2016-09-01
This paper presents a comparative study of two proposed size optimization methods based on two battery models. Simple and complex battery models are utilized to optimally size a standalone photovoltaic system, using hourly meteorological data for a specific site. Results show that using the complex battery model reduces the cost of the system by 31%. In addition, with the complex battery model, the sizes of the PV array and the battery are reduced by 5.6% and 30%, respectively, compared with the case based on the simple battery model. This shows the importance of utilizing accurate battery models in sizing standalone photovoltaic systems.
BPHZ renormalization in configuration space for the A4-model
Pottel, Steffen
2018-02-01
Recent developments for BPHZ renormalization performed in configuration space are reviewed and applied to the model of a scalar quantum field with quartic self-interaction. An extension of the results regarding the short-distance expansion and the Zimmermann identity is shown for a normal product, which is quadratic in the field operator. The realization of the equation of motion is computed for the interacting field and the relation to parametric differential equations is indicated.
Geodiversity: Exploration of 3D geological model space
Lindsay, M. D.; Jessell, M. W.; Ailleres, L.; Perrouty, S.; de Kemp, E.; Betts, P. G.
2013-05-01
important geometrical characteristics. The configuration of the model space is determined through identifying ‘outlier’ model examples, which potentially represent undiscovered model ‘species’.
A Model of Representational Spaces in Human Cortex.
Guntupalli, J Swaroop; Hanke, Michael; Halchenko, Yaroslav O; Connolly, Andrew C; Ramadge, Peter J; Haxby, James V
2016-06-01
Current models of the functional architecture of human cortex emphasize areas that capture coarse-scale features of cortical topography but provide no account for population responses that encode information in fine-scale patterns of activity. Here, we present a linear model of shared representational spaces in human cortex that captures fine-scale distinctions among population responses with response-tuning basis functions that are common across brains and models cortical patterns of neural responses with individual-specific topographic basis functions. We derive a common model space for the whole cortex using a new algorithm, searchlight hyperalignment, and complex, dynamic stimuli that provide a broad sampling of visual, auditory, and social percepts. The model aligns representations across brains in occipital, temporal, parietal, and prefrontal cortices, as shown by between-subject multivariate pattern classification and intersubject correlation of representational geometry, indicating that structural principles for shared neural representations apply across widely divergent domains of information. The model provides a rigorous account for individual variability of well-known coarse-scale topographies, such as retinotopy and category selectivity, and goes further to account for fine-scale patterns that are multiplexed with coarse-scale topographies and carry finer distinctions. © The Author 2016. Published by Oxford University Press.
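The core alignment step underlying hyperalignment approaches like the one above is an orthogonal Procrustes problem: find the rotation mapping one subject's response matrix (time points x voxels) into another's. The sketch below shows only that generic ingredient; the searchlight aggregation and shared basis-function machinery of the paper are omitted, and the data are synthetic.

```python
# Hedged sketch: orthogonal Procrustes alignment of two subjects'
# response matrices, the building block of hyperalignment.
import numpy as np

def procrustes(source, target):
    """Orthogonal matrix R minimizing ||source @ R - target||_F
    (SVD solution of the orthogonal Procrustes problem)."""
    u, _, vt = np.linalg.svd(target.T @ source)
    return (u @ vt).T

rng = np.random.default_rng(0)
shared = rng.standard_normal((50, 8))          # shared representational space
R_true, _ = np.linalg.qr(rng.standard_normal((8, 8)))

subj_a = shared                                # subject A: reference topography
subj_b = shared @ R_true                       # subject B: rotated topography

R_est = procrustes(subj_b, subj_a)             # recover the alignment
err = np.abs(subj_b @ R_est - subj_a).max()
print(err)                                     # near machine precision
```

In the full method such transforms are estimated within overlapping searchlights and aggregated over the whole cortex, rather than in one global solve.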
A HARDCORE model for constraining an exoplanet's core size
Suissa, Gabrielle; Chen, Jingjing; Kipping, David
2018-05-01
The interior structure of an exoplanet is hidden from direct view yet likely plays a crucial role in influencing the habitability of Earth analogues. Inferences about the interior structure are impeded by a fundamental degeneracy between any model comprising more than two layers and observations constraining just two bulk parameters: mass and radius. In this work, we show that although the inverse problem is indeed degenerate, there exist two boundary conditions that enable one to infer the minimum and maximum core radius fraction, CRFmin and CRFmax. These hold true even for planets with light volatile envelopes, but require the planet to be fully differentiated and layers denser than iron to be forbidden. With both bounds in hand, a marginal CRF can also be inferred by sampling in between. After validating on the Earth, we apply our method to Kepler-36b and measure CRFmin = (0.50 ± 0.07), CRFmax = (0.78 ± 0.02), and CRFmarg = (0.64 ± 0.11), broadly consistent with the Earth's true CRF value of 0.55. We apply our method to a suite of hypothetical measurements of synthetic planets to serve as a sensitivity analysis. We find that CRFmin and CRFmax have recovered uncertainties proportional to the relative error on the planetary density, but CRFmarg saturates to between 0.03 and 0.16 once (Δρ/ρ) drops below 1-2 per cent. This implies that mass and radius alone cannot provide any better constraints on internal composition once bulk density constraints reach around one per cent, providing a clear target for observers.
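A much-simplified toy version of the idea: for a fully differentiated two-layer planet with constant layer densities (the published model uses proper equations of state and pressure-dependent densities), mass and radius fix the core radius fraction in closed form. The density values below are illustrative round numbers, not the paper's boundary conditions.

```python
# Hedged toy model: core radius fraction of a two-layer, constant-density
# sphere, M = (4/3)*pi*(rho_c*Rc^3 + rho_m*(R^3 - Rc^3)).
import math

def core_radius_fraction(mass, radius, rho_core, rho_mantle):
    """CRF = Rc/R for a two-layer sphere with constant layer densities."""
    rho_bulk = mass / (4.0 / 3.0 * math.pi * radius**3)
    frac = (rho_bulk - rho_mantle) / (rho_core - rho_mantle)
    if not 0.0 <= frac <= 1.0:
        raise ValueError("layer densities incompatible with bulk density")
    return frac ** (1.0 / 3.0)

# Earth-like input; the layer densities are rough averages, so the answer
# is only indicative.
crf = core_radius_fraction(5.972e24, 6.371e6,
                           rho_core=12000.0, rho_mantle=4000.0)
print(round(crf, 2))   # ~0.57, near Earth's true CRF of ~0.55
```

Varying the assumed densities between their physical extremes is what turns a point estimate like this into the CRFmin/CRFmax bracket described in the abstract.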
Directory of Open Access Journals (Sweden)
Flavio F Ribeiro
This study quantified size-dependent cannibalism in barramundi Lates calcarifer by coupling prey-predator pairs across a range of fish sizes. Predictive models were developed from morphological traits under the alternative assumption of cannibalistic polyphenism, and were validated with data from trials in which cannibals were challenged with progressive increments of prey size. The experimental observations showed that cannibals of 25-131 mm total length could ingest conspecific prey of 78-72% of the cannibal's length. In the validation test, all predictive models underestimated the maximum ingestible prey size for cannibals of a similar size range; however, the model based on the maximal mouth width at opening closely matched the empirical observations, suggesting a certain degree of phenotypic plasticity of mouth size among cannibalistic individuals. Mouth size showed allometric growth relative to body depth, resulting in a decreasing trend in the maximum size of ingestible prey as cannibals grow larger, which in part explains why cannibalism in barramundi is frequently observed in the early developmental stages. Any barramundi had the potential to become a cannibal when the initial prey size was 58% of its own size, suggesting that a 50% size difference can be the threshold that initiates intracohort cannibalism in a barramundi population. Cannibalistic polyphenism was likely to occur in barramundi that had a cannibalistic history: an experienced cannibal could stretch its mouth to capture much larger prey than the models predict. Awareness of cannibalistic polyphenism has important applications in fish-farming management to reduce cannibalism.
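A morphology-based maximum-prey-size model of the kind described above can be sketched as two power laws: prey is ingestible while its body depth fits the cannibal's mouth width. The coefficients below are invented for illustration, not the paper's fitted values.

```python
# Hedged sketch: maximum ingestible prey length from gape-vs-depth allometry.
A_DEPTH, B_DEPTH = 0.20, 1.1   # prey body depth  = A * TL^B  (invented)
C_GAPE,  D_GAPE  = 0.25, 1.0   # cannibal gape    = C * TL^D  (invented)

def max_prey_length(cannibal_tl):
    """Largest prey total length whose body depth still fits the cannibal's
    gape: solve A * L^B = C * TL^D for L."""
    gape = C_GAPE * cannibal_tl ** D_GAPE
    return (gape / A_DEPTH) ** (1.0 / B_DEPTH)

for tl in (25.0, 70.0, 131.0):
    print(tl, round(max_prey_length(tl) / tl, 2))
# Because body depth grows faster with length than gape does (B > D), the
# ingestible prey/cannibal length ratio declines as the cannibal grows,
# mirroring the 78% -> 72% trend reported in the abstract.
```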
A model to estimate the size of nanoparticle agglomerates in gas−solid fluidized beds
Energy Technology Data Exchange (ETDEWEB)
Martín, Lilian de, E-mail: L.DeMartinMonton@tudelft.nl; Ommen, J. Ruud van [Delft University of Technology, Department of Chemical Engineering (Netherlands)
2013-11-15
The estimation of nanoparticle agglomerates’ size in fluidized beds remains an open challenge, mainly due to the difficulty of characterizing the inter-agglomerate van der Waals force. The current approach is to describe micron-sized nanoparticle agglomerates as micron-sized particles with 0.1–0.2-μm asperities. This simplification does not capture the influence of the particle size on the van der Waals attraction between agglomerates. In this paper, we propose a new description where the agglomerates are micron-sized particles with nanoparticles on the surface, acting as asperities. As opposed to previous models, here the van der Waals force between agglomerates decreases with an increase in the particle size. We have also included an additional force due to the hydrogen bond formation between the surfaces of hydrophilic and dry nanoparticles. The average size of the fluidized agglomerates has been estimated equating the attractive force obtained from this method to the weight of the individual agglomerates. The results have been compared to 54 experimental values, most of them collected from the literature. Our model approximates without a systematic error the size of most of the nanopowders, both in conventional and centrifugal fluidized beds, outperforming current models. Although simple, the model is able to capture the influence of the nanoparticle size, particle density, and Hamaker coefficient on the inter-agglomerate forces.
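The force balance at the heart of the model above, an inter-agglomerate van der Waals force mediated by surface nanoparticles acting as asperities, set equal to the agglomerate's weight, can be put in numbers. The formula and inputs below are simplified illustrations (the published model also includes the hydrogen-bond term and a proper agglomerate-density treatment).

```python
# Hedged numeric sketch: agglomerate radius from F_vdW = weight, with the
# contact mediated by one nanoparticle-sized asperity of radius r_np.
import math

def agglomerate_radius(hamaker, r_np, z0, rho_agg, g=9.81):
    """Solve A*r_np/(12*z0^2) = (4/3)*pi*R^3*rho_agg*g for R (meters)."""
    f_vdw = hamaker * r_np / (12.0 * z0**2)   # two equal spheres touching
                                              # via a nanoparticle asperity
    return (3.0 * f_vdw / (4.0 * math.pi * rho_agg * g)) ** (1.0 / 3.0)

R = agglomerate_radius(
    hamaker=1e-19,   # J, typical oxide Hamaker coefficient (assumed)
    r_np=10e-9,      # m, primary nanoparticle radius
    z0=0.4e-9,       # m, minimum separation distance
    rho_agg=50.0,    # kg/m^3, low agglomerate density
)
print(R * 1e6)       # radius in micrometers; comes out at tens of um
```

Note the size dependence the abstract emphasizes: with the asperity radius tied to the primary nanoparticle radius, larger nanoparticles give a larger attractive force and hence larger equilibrium agglomerates.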
Modal Analysis and Model Correlation of the Mir Space Station
Kim, Hyoung M.; Kaouk, Mohamed
2000-01-01
This paper will discuss on-orbit dynamic tests, modal analysis, and model refinement studies performed as part of the Mir Structural Dynamics Experiment (MiSDE). Mir is the Russian permanently manned Space Station whose construction first started in 1986. The MiSDE was sponsored by the NASA International Space Station (ISS) Phase 1 Office and was part of the Shuttle-Mir Risk Mitigation Experiment (RME). One of the main objectives for MiSDE is to demonstrate the feasibility of performing on-orbit modal testing on large space structures to extract modal parameters that will be used to correlate mathematical models. The experiment was performed over a one-year span on the Mir-alone and Mir with a Shuttle docked. A total of 45 test sessions were performed including: Shuttle and Mir thruster firings, Shuttle-Mir and Progress-Mir dockings, crew exercise and pushoffs, and ambient noise during night-to-day and day-to-night orbital transitions. Test data were recorded with a variety of existing and new instrumentation systems that included: the MiSDE Mir Auxiliary Sensor Unit (MASU), the Space Acceleration Measurement System (SAMS), the Russian Mir Structural Dynamic Measurement System (SDMS), the Mir and Shuttle Inertial Measurement Units (IMUs), and the Shuttle payload bay video cameras. Modal analysis was performed on the collected test data to extract modal parameters, i.e. frequencies, damping factors, and mode shapes. A special time-domain modal identification procedure was used on free-decay structural responses. The results from this study show that modal testing and analysis of large space structures is feasible within operational constraints. Model refinements were performed on both the Mir alone and the Shuttle-Mir mated configurations. The design sensitivity approach was used for refinement, which adjusts structural properties in order to match analytical and test modal parameters. To verify the refinement results, the analytical responses calculated using
A virtual reality browser for Space Station models
Goldsby, Michael; Pandya, Abhilash; Aldridge, Ann; Maida, James
1993-01-01
The Graphics Analysis Facility at NASA/JSC has created a visualization and learning tool by merging its database of detailed geometric models with a virtual reality system. The system allows an interactive walk-through of models of the Space Station and other structures, providing detailed realistic stereo images. The user can activate audio messages describing the function and connectivity of selected components within his field of view. This paper presents the issues and trade-offs involved in the implementation of the VR system and discusses its suitability for its intended purposes.
Modeling the effects of size on patch dynamics of an inert tracer
Directory of Open Access Journals (Sweden)
P. Xiu
2010-03-01
Full Text Available Mesoscale iron enrichment experiments have revealed that additional iron affects phytoplankton productivity and the carbon cycle. However, the role of the initial size of the fertilized patch in determining patch evolution is poorly quantified, due to limited observational capability and the complexity of the physical processes involved. Using a three-dimensional ocean circulation model, we simulated inert tracer patches of different initial sizes that were regulated only by physical circulation and diffusion. Model results showed that during the first few days after release of the inert tracer, the calculated dilution rate was a linear function of time, with a slope sensitive to the initial patch size: steeper for smaller patches. After this initial phase of rapid decay, the relationship between dilution rate and time became an exponential function, which was also size dependent. Therefore, patches with larger initial sizes usually last longer and ultimately affect biogeochemical processes much more strongly than smaller patches.
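The size dependence described in this abstract can be illustrated with a minimal analytic sketch (an illustrative toy with assumed values, not the paper's 3-D circulation model): for a Gaussian tracer patch spreading under a constant horizontal diffusivity K, the peak concentration decays like 1/(sigma0^2 + 2Kt), so the relative dilution rate is steeper for smaller initial patches.

```python
# Toy model: 2-D Gaussian tracer patch under constant horizontal diffusivity K.
# Peak concentration c(t) ~ M / (2*pi*(sigma0**2 + 2*K*t)), so the relative
# dilution rate -d(ln c)/dt = 2K / (sigma0**2 + 2*K*t) is larger for small sigma0.
def dilution_rate(sigma0_km, K_km2_per_day, t_days):
    """Relative dilution rate (1/day) of a Gaussian patch with initial
    spread sigma0 (km) and horizontal diffusivity K (km^2/day)."""
    return 2.0 * K_km2_per_day / (sigma0_km**2 + 2.0 * K_km2_per_day * t_days)

# Hypothetical values: a 5 km patch dilutes faster than a 20 km patch.
small = dilution_rate(sigma0_km=5.0, K_km2_per_day=50.0, t_days=1.0)
large = dilution_rate(sigma0_km=20.0, K_km2_per_day=50.0, t_days=1.0)
```

The sketch reproduces only the qualitative size dependence; the observed patches are additionally stirred and sheared by the circulation.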
Comparison of Meteoroid Flux Models for Near Earth Space
Drolshagen, G.; Liou, J.-C.; Dikarev, V.; Landgraf, M.; Krag, H.; Kuiper, W.
2007-01-01
Over the last decade several new models for the sporadic interplanetary meteoroid flux have been developed. These include the Meteoroid Engineering Model (MEM), the Divine-Staubach model and the Interplanetary Meteoroid Engineering Model (IMEM). They typically cover mass ranges from 10⁻¹² g (or lower) to 1 g and are applicable over model-specific Sun distance ranges between 0.2 A.U. and 10 A.U. Near 1 A.U., fluxes averaged over direction and velocity for all these models are tuned to the well-established interplanetary model by Grün et al. However, in many respects these models differ considerably; examples are the velocity and directional distributions and the assumed meteoroid sources. In this paper flux predictions by the various models for Earth-orbiting spacecraft are compared, and the main differences are presented and analysed. The persisting differences, even for near-Earth space, are surprising in view of the numerous ground-based (optical, radar) and in-situ (captured IDPs, in-situ detectors and analysis of retrieved hardware) measurements and simulations. Remaining uncertainties and potential additional studies to overcome the existing model discrepancies are discussed.
Economic analysis of open space box model utilization in spacecraft
Mohammad, Atif F.; Straub, Jeremy
2015-05-01
The amount of stored data about space grows larger every day, and the utilization of Big Data and related tools to perform ETL (Extract, Transform and Load) applications will soon be pervasive in the space sciences. We have entered a crucial time in which the use of Big Data can make the difference (for terrestrial applications) between organizations underperforming and outperforming their peers. The same is true for NASA and other space agencies, as well as for individual missions and the highly competitive process of mission data analysis and publication. In most industries, established competitors and new entrants alike will use data-driven approaches to revolutionize operations and capture the value of Big Data archives. The Open Space Box Model is poised to take the proverbial "giant leap", as it provides autonomic data processing and communications for spacecraft. Economic value generated from such data processing can be found in terrestrial organizations in every sector, such as healthcare and retail; retailers, for example, perform research on Big Data by utilizing sensor-driven embedded data in products within their stores and warehouses to determine how these products are actually used in the real world.
Discrete- vs. Continuous-Time Modeling of Unequally Spaced Experience Sampling Method Data
Directory of Open Access Journals (Sweden)
Silvia de Haan-Rietdijk
2017-10-01
Full Text Available The Experience Sampling Method (ESM) is a common approach in psychological research for collecting intensive longitudinal data with high ecological validity. One characteristic of ESM data is that it is often unequally spaced, because the measurement intervals within a day are deliberately varied, and measurement continues over several days. This poses a problem for discrete-time (DT) modeling approaches, which are based on the assumption that all measurements are equally spaced. Nevertheless, DT approaches such as (vector) autoregressive modeling are often used to analyze ESM data, for instance in the context of affective dynamics research. There are equivalent continuous-time (CT) models, but they are more difficult to implement. In this paper we take a pragmatic approach and evaluate the practical relevance of the violated model assumption in DT AR(1) and VAR(1) models for the N = 1 case. We use simulated data under an ESM measurement design to investigate the bias in the parameters of interest under four different model implementations, ranging from the true CT model that accounts for all the exact measurement times, to the crudest possible DT model implementation, where even the nighttime is treated as a regular interval. An analysis of empirical affect data illustrates how the differences between DT and CT modeling can play out in practice. We find that the size and the direction of the bias in DT (V)AR models for unequally spaced ESM data depend quite strongly on the true parameter in addition to data characteristics. Our recommendation is to use CT modeling whenever possible, especially now that new software implementations have become available.
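The core issue in this abstract can be made concrete with a small sketch (illustrative parameter values, not the paper's simulation design): under the CT first-order model, an Ornstein-Uhlenbeck process, the discrete autoregressive coefficient implied by a gap of length dt is exp(-theta*dt), so fitting one fixed AR coefficient to unequally spaced data conflates very different quantities.

```python
import math

# CT AR(1) = Ornstein-Uhlenbeck process with drift parameter theta.
# Sampled at interval dt, it behaves as a DT AR(1) with coefficient
# rho(dt) = exp(-theta * dt); a DT model that ignores interval length
# implicitly assumes a single rho for all gaps.
def implied_ar_coefficient(theta, dt):
    """DT autoregressive coefficient implied by CT drift theta and gap dt."""
    return math.exp(-theta * dt)

theta = 0.5                    # hypothetical CT drift parameter
gaps = [0.5, 1.0, 3.0, 9.0]    # unequal ESM intervals, incl. a "night" gap
rhos = [implied_ar_coefficient(theta, dt) for dt in gaps]
```

The nighttime gap (dt = 9.0 here) yields a far smaller implied coefficient than the within-day gaps, which is why treating it as a regular interval biases DT estimates.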
A logistics model for large space power systems
Koelle, H. H.
Space Power Systems (SPS) have to overcome two hurdles: (1) to find an attractive design, manufacturing and assembly concept and (2) to have available a space transportation system that can provide economical logistic support during the construction and operational phases. An initial system feasibility study, some five years ago, was based on a reference system that used terrestrial resources only and relied partially on electric propulsion systems. The conclusion was: it is feasible but not yet economically competitive with other options. The present study is based on terrestrial and extraterrestrial resources and on chemical (LH2/LOX) propulsion systems. These engines are available from the Space Shuttle production line and require only small changes. Other so-called advanced propulsion systems investigated did not prove economically superior if lunar LOX is available. We assume that a Shuttle-derived Heavy Lift Launch Vehicle (HLLV) will become available around the turn of the century and that this will be used to establish a research base on the lunar surface. This lunar base has the potential to grow into a lunar factory producing LOX and construction materials, supporting, among other projects, the construction of space power systems in geostationary orbit. A model was developed to simulate the logistics support of such an operation for a 50-year life cycle. After 50 years, 111 SPS units of 5 GW each with an availability of 90% (about 100 effective units) will produce 100 × 5 = 500 GW. The model comprises 60 equations and requires 29 assumptions about the parameters involved. The 60 state variables calculated with these equations are given on an annual basis and as averages for the 50-year life cycle. Recycling of defective parts in geostationary orbit is one of the features of the model. The state of the art with respect to SPS technology is introduced as a variable: Mg of mass per MW of electric power delivered. If the space manufacturing facility, a maintenance and repair facility
Curved-space classical solutions of a massive supermatrix model
International Nuclear Information System (INIS)
Azuma, Takehiro; Bagnoud, Maxime
2003-01-01
We investigate here a supermatrix model with a mass term and a cubic interaction. It is based on the super Lie algebra osp(1|32,R), which could play a role in the construction of eleven-dimensional M-theory. This model contains a massive version of the IIB matrix model, in which some fields have a tachyonic mass term, so the trivial vacuum of the theory is unstable. However, the model possesses several classical solutions in which these fields build noncommutative curved spaces, and these solutions are shown to be energetically more favorable than the trivial vacuum. In particular, we describe in detail two cases, the SO(3)×SO(3)×SO(3) (three fuzzy 2-spheres) and the SO(9) (fuzzy 8-sphere) classical backgrounds
Mahmoudi, M.; Sklar, L. S.; Leclere, S.; Davis, J. D.; Stine, A.
2017-12-01
The size distributions of sediment produced on hillslopes and supplied to river channels influence a wide range of fluvial processes, from bedrock river incision to the creation of aquatic habitats. However, the factors that control hillslope sediment size are poorly understood, limiting our ability to predict sediment size and model the evolution of sediment size distributions across landscapes. Recently, separate field and theoretical investigations have begun to address this knowledge gap. Here we compare the predictions of several emerging modeling approaches to landscapes where high-quality field data are available. Our goals are to explore the sensitivity and applicability of the theoretical models in each field context, and ultimately to provide a foundation for incorporating hillslope sediment size into models of landscape evolution. The field data include published measurements of hillslope sediment size from the Kohala peninsula on the island of Hawaii and from tributaries to the Feather River in the northern Sierra Nevada mountains of California, and an unpublished data set from the Inyo Creek catchment of the southern Sierra Nevada. These data are compared to predictions adapted from recently published modeling approaches that include elements of topography, geology, structure, climate and erosion rate. Predictive models for each site are built in ArcGIS using field-condition datasets: DEM topography (slope, aspect, curvature), bedrock geology (lithology, mineralogy), structure (fault location, fracture density), climate data (mean annual precipitation and temperature), and estimates of erosion rates. Preliminary analysis suggests that models may be finely tuned to the calibration sites, particularly when field conditions most closely satisfy model assumptions, leading to unrealistic predictions from extrapolation. We suggest a path forward for developing a computationally tractable method for incorporating spatial variation in production of hillslope
An exact solution to the extended Hubbard model in 2D for finite size system
Harir, S.; Bennai, M.; Boughaleb, Y.
2008-08-01
An exact analytical diagonalization is used to solve the two-dimensional extended Hubbard model (EHM) for a system of finite size. We have considered an EHM including on-site and off-site interactions with interaction energies U and V, respectively, for a square lattice containing 4×4 sites at one-eighth filling with periodic boundary conditions, recently treated by Kovacs and Gulacsi (2006 Phil. Mag. 86 2073). Taking into account the symmetry properties of this square lattice and using a linear translation operator, we have constructed an r-space basis with only 85 state vectors which describe all possible distributions of four electrons on the 4×4 square lattice. The diagonalization of the 85×85 energy matrix allows us to study the local properties of the above system as a function of the on-site and off-site interaction energies, where we have shown that the off-site interaction encourages the existence of double occupancies in the first excited state and induces a supplementary conductivity of the system.
Rapid State Space Modeling Tool for Rectangular Wing Aeroservoelastic Studies
Suh, Peter M.; Conyers, Howard Jason; Mavris, Dimitri N.
2015-01-01
This report introduces a modeling and simulation tool for aeroservoelastic analysis of rectangular wings with trailing-edge control surfaces. The inputs to the code are planform design parameters such as wing span, aspect ratio, and number of control surfaces. Using this information, the generalized forces are computed using the doublet-lattice method. Using Roger's approximation, a rational function approximation is computed. The output, computed in a few seconds, is a state space aeroservoelastic model which can be used for analysis and control design. The tool is fully parameterized with default information so there is little required interaction with the model developer. All parameters can be easily modified if desired. The focus of this report is on tool presentation, verification, and validation. These processes are carried out in stages throughout the report. The rational function approximation is verified against computed generalized forces for a plate model. A model composed of finite element plates is compared to a modal analysis from commercial software and an independently conducted experimental ground vibration test analysis. Aeroservoelastic analysis is the ultimate goal of this tool, therefore, the flutter speed and frequency for a clamped plate are computed using damping-versus-velocity and frequency-versus-velocity analysis. The computational results are compared to a previously published computational analysis and wind-tunnel results for the same structure. A case study of a generic wing model with a single control surface is presented. Verification of the state space model is presented in comparison to damping-versus-velocity and frequency-versus-velocity analysis, including the analysis of the model in response to a 1-cos gust.
A model study of the size and composition distribution of aerosols in an aircraft exhaust
Energy Technology Data Exchange (ETDEWEB)
Sorokin, A.A. [SRC `ECOLEN`, Moscow (Russian Federation)
1997-12-31
A two-dimensional, axisymmetric flow field model which includes water and sulphate aerosol formation represented by moments of the size and composition distribution function is used to calculate the effect of radial turbulent jet mixing on the aerosol size distribution and mean modal composition. (author) 6 refs.
Effect Size Measures for Differential Item Functioning in a Multidimensional IRT Model
Suh, Youngsuk
2016-01-01
This study adapted an effect size measure used for studying differential item functioning (DIF) in unidimensional tests and extended the measure to multidimensional tests. Two effect size measures were considered in a multidimensional item response theory model: signed weighted P-difference and unsigned weighted P-difference. The performance of…
Raabe, K.; Arnold, I.; Kool, C.J.M.
This paper presents a dynamic investment model that explains differences in the sensitivity of small- and large-sized firms to changes in the money market interest rate. In contrast to existing studies on the firm size effects of monetary policy, the importance of firms as monetary transmission
Grain Size and Parameter Recovery with TIMSS and the General Diagnostic Model
Skaggs, Gary; Wilkins, Jesse L. M.; Hein, Serge F.
2016-01-01
The purpose of this study was to explore the degree of grain size of the attributes and the sample sizes that can support accurate parameter recovery with the General Diagnostic Model (GDM) for a large-scale international assessment. In this resampling study, bootstrap samples were obtained from the 2003 Grade 8 TIMSS in Mathematics at varying…
A final size relation for epidemic models of vector-transmitted diseases
Fred Brauer
2017-01-01
We formulate and analyze an age of infection model for epidemics of diseases transmitted by a vector, including the possibility of direct transmission as well. We show how to determine a basic reproduction number. While there is no explicit final size relation as for diseases transmitted directly, we are able to obtain estimates for the final size of the epidemic.
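For reference, the classical final size relation for directly transmitted epidemics, which the vector-borne case of this abstract only bounds, is ln(s0/s_inf) = R0*(1 - s_inf), where s is the susceptible fraction. A minimal sketch solving it by bisection (the standard SIR result, not this paper's vector model):

```python
import math

def final_susceptible_fraction(r0, s0=0.999, tol=1e-10):
    """Solve the classical final size relation
        ln(s0 / s_inf) = r0 * (1 - s_inf)
    for s_inf by bisection. s0 < 1 reflects a small initial infected seed."""
    f = lambda s: math.log(s0 / s) - r0 * (1.0 - s)
    lo, hi = 1e-12, s0          # f(lo) > 0 and f(hi) < 0 for r0 > 1
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Fraction ever infected ("attack rate") for a hypothetical R0 = 2.
attack_rate = 1.0 - final_susceptible_fraction(r0=2.0)
```

For R0 = 2 roughly 80% of the population is eventually infected; the paper's contribution is that for vector transmission only estimates, not an exact relation of this form, are available.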
A simple shape-free model for pore-size estimation with positron annihilation lifetime spectroscopy
International Nuclear Information System (INIS)
Wada, Ken; Hyodo, Toshio
2013-01-01
Positron annihilation lifetime spectroscopy is one of the methods for estimating pore size in insulating materials. We present a shape-free model to be used conveniently for such analysis. A basic model in the classical picture is modified by introducing a parameter corresponding to an effective size of the positronium (Ps). This parameter is adjusted so that its Ps-lifetime to pore-size relation merges smoothly with that of the well-established Tao-Eldrup model (with a modification involving the intrinsic Ps annihilation rate), which is applicable to very small pores. The combined model, i.e., the modified Tao-Eldrup model for smaller pores and the modified classical model for larger pores, agrees surprisingly well with the quantum-mechanics-based extended Tao-Eldrup model, which deals with Ps trapped in, and in thermal equilibrium with, a rectangular pore.
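The well-established Tao-Eldrup relation that the proposed model merges into can be written down directly. A minimal sketch (the standard spherical-pore formula with the conventional electron-layer thickness ΔR = 0.166 nm; the vacuum o-Ps decay term and the authors' modifications are omitted):

```python
import math

DELTA_R_NM = 0.166  # conventional empirical electron-layer thickness (nm)

def tao_eldrup_lifetime_ns(pore_radius_nm):
    """o-Ps pick-off lifetime (ns) in a spherical pore of radius R per the
    Tao-Eldrup model: tau = 0.5 / (1 - x + sin(2*pi*x) / (2*pi)),
    where x = R / (R + DELTA_R)."""
    x = pore_radius_nm / (pore_radius_nm + DELTA_R_NM)
    return 0.5 / (1.0 - x + math.sin(2.0 * math.pi * x) / (2.0 * math.pi))

tau_small = tao_eldrup_lifetime_ns(0.3)  # sub-nm pore: a few ns
```

The lifetime grows monotonically with pore radius, which is what makes the measured o-Ps lifetime usable as a pore-size probe; the model breaks down for large pores, which is the regime the paper's modified classical model addresses.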
Theories and models on the biology of cells in space
Todd, P.; Klaus, D. M.
1996-01-01
A wide variety of observations on cells in space, admittedly made under constraining and unnatural conditions in many cases, have led to experimental results that were surprising or unexpected. Reproducibility, freedom from artifacts, and plausibility must be considered in all cases, even when results are not surprising. The papers in the symposium on 'Theories and Models on the Biology of Cells in Space' are dedicated to the subject of the plausibility of cellular responses to gravity: inertial accelerations between 0 and 9.8 m/s² and higher. The mechanical phenomena inside the cell, the gravitactic locomotion of single eukaryotic and prokaryotic cells, and the effects of inertial unloading on cellular physiology are addressed in theoretical and experimental studies.
Validation of elastic cross section models for space radiation applications
Energy Technology Data Exchange (ETDEWEB)
Werneth, C.M., E-mail: charles.m.werneth@nasa.gov [NASA Langley Research Center (United States); Xu, X. [National Institute of Aerospace (United States); Norman, R.B. [NASA Langley Research Center (United States); Ford, W.P. [The University of Tennessee (United States); Maung, K.M. [The University of Southern Mississippi (United States)
2017-02-01
The space radiation field is composed of energetic particles that pose both acute and long-term risks for astronauts in low Earth orbit and beyond. In order to estimate radiation risk to crew members, the fluence of particles and the biological response to the radiation must be known at tissue sites. Given that the spectral fluence at the boundary of the shielding material is characterized, radiation transport algorithms may be used to find the fluence of particles inside the shield and body, and the radiobiological response is estimated from experiments and models. The fidelity of the radiation spectrum inside the shield and body depends on the radiation transport algorithms and the accuracy of the nuclear cross sections. In a recent study, self-consistent nuclear models based on multiple scattering theory, including the option to study relativistic kinematics, were developed for the prediction of nuclear cross sections for space radiation applications. The aim of the current work is to use uncertainty quantification to ascertain the validity of the models as compared to a nuclear reaction database and to identify components of the models that can be improved in future efforts.
Model based Computerized Ionospheric Tomography in space and time
Tuna, Hakan; Arikan, Orhan; Arikan, Feza
2018-04-01
Reconstruction of the ionospheric electron density distribution in space and time not only provides a basis for better understanding the physical nature of the ionosphere, but also provides improvements in various applications, including HF communication. The recently developed IONOLAB-CIT technique provides a physically admissible 3D model of the ionosphere by using both Slant Total Electron Content (STEC) measurements obtained from a GPS satellite-receiver network and the IRI-Plas model. The IONOLAB-CIT technique optimizes IRI-Plas model parameters in the region of interest such that the synthetic STEC computations obtained from the IRI-Plas model are in accordance with the actual STEC measurements. In this work, the IONOLAB-CIT technique is extended to provide reconstructions in both space and time. This extension exploits the temporal continuity of the ionosphere to provide more reliable reconstructions with a reduced computational load. The proposed 4D-IONOLAB-CIT technique is validated on real measurement data obtained from the TNPGN-Active GPS receiver network in Turkey.
Nonlinear State Space Modeling and System Identification for Electrohydraulic Control
Directory of Open Access Journals (Sweden)
Jun Yan
2013-01-01
Full Text Available The paper deals with nonlinear modeling and identification of an electrohydraulic control system for improving its tracking performance. We build the nonlinear state space model for analyzing the highly nonlinear system and then develop a Hammerstein-Wiener (H-W model which consists of a static input nonlinear block with two-segment polynomial nonlinearities, a linear time-invariant dynamic block, and a static output nonlinear block with single polynomial nonlinearity to describe it. We simplify the H-W model into a linear-in-parameters structure by using the key term separation principle and then use a modified recursive least square method with iterative estimation of internal variables to identify all the unknown parameters simultaneously. It is found that the proposed H-W model approximates the actual system better than the independent Hammerstein, Wiener, and ARX models. The prediction error of the H-W model is about 13%, 54%, and 58% less than the Hammerstein, Wiener, and ARX models, respectively.
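The recursive least squares step at the heart of such identification can be sketched for a plain linear-in-parameters model (generic RLS on a hypothetical first-order ARX system; the paper's modified RLS with iterative estimation of internal variables for the H-W blocks is more involved):

```python
import random

# Recursive least squares for y[k] = a*y[k-1] + b*u[k-1] + noise.
# theta holds the parameter estimates; P is the (scaled) inverse-information
# matrix, started large to encode an uninformative prior.
def rls_identify(us, ys, lam=1.0):
    theta = [0.0, 0.0]
    P = [[1e6, 0.0], [0.0, 1e6]]
    for k in range(1, len(ys)):
        phi = [ys[k - 1], us[k - 1]]                       # regressor
        Pphi = [P[0][0]*phi[0] + P[0][1]*phi[1],
                P[1][0]*phi[0] + P[1][1]*phi[1]]
        denom = lam + phi[0]*Pphi[0] + phi[1]*Pphi[1]
        gain = [Pphi[0]/denom, Pphi[1]/denom]              # Kalman-style gain
        err = ys[k] - (theta[0]*phi[0] + theta[1]*phi[1])  # prediction error
        theta = [theta[0] + gain[0]*err, theta[1] + gain[1]*err]
        P = [[(P[i][j] - gain[i]*Pphi[j]) / lam for j in range(2)]
             for i in range(2)]
    return theta

# Simulate a known system and recover (a, b) = (0.8, 0.5).
random.seed(0)
us = [random.uniform(-1.0, 1.0) for _ in range(500)]
ys = [0.0]
for k in range(1, 500):
    ys.append(0.8*ys[k-1] + 0.5*us[k-1] + random.gauss(0.0, 0.01))
a_hat, b_hat = rls_identify(us, ys)
```

With the forgetting factor lam = 1 this is ordinary RLS; lam < 1 discounts old data, which is useful when parameters drift.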
Monitoring Murder Crime in Namibia Using Bayesian Space-Time Models
Directory of Open Access Journals (Sweden)
Isak Neema
2012-01-01
Full Text Available This paper focuses on the analysis of murder in Namibia using a Bayesian spatial smoothing approach with temporal trends. The analysis was based on reported cases from the 13 regions of Namibia for the period 2002–2006, complemented with regional population sizes. The evaluated random effects include space-time structured heterogeneity measuring the effect of regional clustering, unstructured heterogeneity, time, space-time interaction and population density. The model consists of carefully chosen prior and hyper-prior distributions for parameters and hyper-parameters, with inference conducted using a Gibbs sampling algorithm and a sensitivity test for model validation. The posterior mean estimates of the parameters from the model, using the DIC as model selection criterion, show that most of the variation in the relative risk of murder is due to regional clustering, while the effects of population density and time were insignificant. The sensitivity analysis indicates that both the intrinsic and the Laplace CAR prior can be adopted as prior distributions for the space-time heterogeneity. In addition, the relative risk map shows a risk structure with an increasing north-south gradient, pointing to low risk in the northern regions of Namibia, while the Karas and Khomas regions experience a long-term increase in murder risk.
Energy Technology Data Exchange (ETDEWEB)
Tonks, Michael R [Idaho National Lab. (INL), Idaho Falls, ID (United States); Zhang, Yongfeng [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bai, Xianming [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2014-06-01
This report summarizes development work funded by the Nuclear Energy Advanced Modeling Simulation program's Fuels Product Line (FPL) to develop a mechanistic model for the average grain size in UO₂ fuel. The model is developed using a multiscale modeling and simulation approach involving atomistic simulations, as well as mesoscale simulations using INL's MARMOT code.
A Comparison of Uniform DIF Effect Size Estimators under the MIMIC and Rasch Models
Jin, Ying; Myers, Nicholas D.; Ahn, Soyeon; Penfield, Randall D.
2013-01-01
The Rasch model, a member of a larger group of models within item response theory, is widely used in empirical studies. Detection of uniform differential item functioning (DIF) within the Rasch model typically employs null hypothesis testing with a concomitant consideration of effect size (e.g., signed area [SA]). Parametric equivalence between…
Haveman, Steven; Bonnema, Gerrit Maarten
2013-01-01
Most formal models are used in detailed design and focus on a single domain. Few approaches exist that can effectively tie these lower-level models to a high-level system model during design space exploration. This complicates the validation of high-level system requirements during
Finite-size effects in the three-state quantum asymmetric clock model
International Nuclear Information System (INIS)
Gehlen, G. v.; Rittenberg, V.
1983-04-01
The one-dimensional quantum Hamiltonian of the asymmetric three-state clock model is studied using finite-size scaling. Various boundary conditions are considered on chains containing up to eight sites. We calculate the boundary of the commensurate phase and the mass gap index. The model shows an interesting finite-size dependence in connexion with the presence of the incommensurate phase indicating that for the infinite system there is no Lifshitz point. (orig.)
Reid, Christopher; Harvill, Lauren; England, Scott; Young, Karen; Norcross, Jason; Rajulu, Sudhakar
2014-01-01
The objective of this project was to assess the performance differences between a nominally sized Extravehicular Mobility Unit (EMU) space suit and a nominal +1 (plus) sized EMU. Method: This study evaluated suit size conditions by using metabolic cost, arm mobility, and arm strength as performance metrics. Results: Differences between the suit sizes were found only in shoulder extension strength being 15.8% greater for the plus size. Discussion: While this study was able to identify motions and activities that were considered to be practically or statistically different, it does not signify that use of a plus sized suit should be prohibited. Further testing would be required that either pertained to a particular mission critical task or better simulates a microgravity environment that the EMU suit was designed to work in.
A model for optimal offspring size in fish, including live-bearing and parental effects.
Jørgensen, Christian; Auer, Sonya K; Reznick, David N
2011-05-01
Since Smith and Fretwell's seminal article in 1974 on the optimal offspring size, most theory has assumed a trade-off between offspring number and offspring fitness, where larger offspring have better survival or fitness, but with diminishing returns. In this article, we use two ubiquitous biological mechanisms to derive the shape of this trade-off: the offspring's growth rate combined with its size-dependent mortality (predation). For a large parameter region, we obtain the same sigmoid relationship between offspring size and offspring survival as Smith and Fretwell, but we also identify parameter regions where the optimal offspring size is as small or as large as possible. With increasing growth rate, the optimal offspring size is smaller. We then integrate our model with strategies of parental care. Egg guarding that reduces egg mortality favors smaller or larger offspring, depending on how mortality scales with size. For live-bearers, the survival of offspring to birth is a function of maternal survival; if the mother's survival increases with her size, then the model predicts that larger mothers should produce larger offspring. When using parameters for Trinidadian guppies Poecilia reticulata, differences in both growth and size-dependent predation are required to predict observed differences in offspring size between wild populations from high- and low-predation environments.
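The Smith-Fretwell logic this abstract builds on can be sketched numerically (with a hypothetical survival function w(s) = exp(-a/s), not the growth-predation trade-off derived in the paper): a mother with a fixed reproductive budget E makes E/s offspring of size s, so she maximizes per-budget fitness w(s)/s.

```python
import math

def optimal_offspring_size(a, s_grid):
    """Grid-search the offspring size s maximizing w(s)/s with the
    assumed survival function w(s) = exp(-a/s). Analytically the
    optimum is s* = a, which makes the grid search easy to check."""
    return max(s_grid, key=lambda s: math.exp(-a / s) / s)

s_grid = [0.01 * i for i in range(1, 1001)]   # candidate sizes 0.01 .. 10.0
s_star = optimal_offspring_size(a=2.0, s_grid=s_grid)
```

The exponent a plays the role of an environment parameter: harsher size-dependent mortality (larger a) pushes the optimum toward fewer, larger offspring, mirroring the paper's high- vs low-predation guppy comparison.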
Space shuttle’s liftoff: a didactical model
Borghi, Riccardo; Spinozzi, Turi Maria
2017-07-01
The pedagogical aim of the present paper, intended for an undergraduate audience, is to help students appreciate how the development of elementary models based on physics first principles is a fundamental and necessary preliminary step for grasping the behaviour of complex real systems with a minimal amount of math. In some particularly fortunate cases, such models also show reasonably good results when compared to reality. The speed of the Space Shuttle during its first two minutes of flight from liftoff is analysed here from such a didactical point of view. Only the momentum conservation law is employed to develop the model, which is eventually applied to quantitatively interpret the telemetry of the last 2011 launches of Shuttle Discovery and Shuttle Endeavour. To the STS-51-L and STS-107 astronauts, in memoriam.
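A momentum-conservation sketch of the kind the paper develops (round illustrative numbers for mass, thrust, and mass flow, not the authors' fitted values; drag and the tilt of the trajectory are neglected):

```python
# Vertical liftoff from momentum conservation: m(t)*dv/dt = T - m(t)*g,
# with m(t) = m0 - mdot*t. Simple Euler integration over the first two minutes.
G = 9.81         # m/s^2
M0 = 2.0e6       # kg, illustrative gross liftoff mass
MDOT = 1.0e4     # kg/s, illustrative total propellant mass flow
THRUST = 3.0e7   # N, illustrative total thrust, assumed constant

def liftoff_speed(t_end=120.0, dt=0.1):
    v, t = 0.0, 0.0
    while t < t_end:
        m = M0 - MDOT * t          # remaining vehicle mass
        v += (THRUST / m - G) * dt # net acceleration grows as m decreases
        t += dt
    return v  # m/s

v_120 = liftoff_speed()
```

Even with these rough numbers the model lands near the right order: about 1.5 km/s two minutes into flight, comparable to actual Shuttle telemetry at solid booster separation.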
A Knowledge Discovery from POS Data using State Space Models
Sato, Tadahiko; Higuchi, Tomoyuki
The number of competing brands changes with a new product's entry. New product introduction is endemic among consumer packaged goods firms and is an integral component of their marketing strategy. As a new product's entry affects markets, there is a pressing need to develop market response models that can adapt to such changes. In this paper, we develop a dynamic model that captures the underlying evolution of the buying behavior associated with the new product. This extends an application of the dynamic linear model, which is used in many time series analyses, by allowing the observed dimension to change at some point in time. Our model copes with the problems that dynamic environments entail: changes in parameters over time and changes in the observed dimension. We formulate the model within the framework of a state space model and estimate it using a modified Kalman filter/fixed-interval smoother. We find that a new product's entry (1) decreases brand differentiation for existing brands, as indicated by decreasing differences between cross-price elasticities; (2) decreases commodity power for existing brands, as indicated by a decreasing trend; and (3) decreases the effect of discounts for existing brands, as indicated by a decrease in the magnitude of own-brand price elasticities. The proposed framework is directly applicable to other fields in which the observed dimension might change, such as economics, bioinformatics, and so forth.
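The filtering machinery referenced in this abstract can be sketched in its simplest form, a scalar local-level model (a generic Kalman filter, not the authors' dimension-switching formulation):

```python
import random

# Scalar Kalman filter for the local-level model:
#   state:       mu[k] = mu[k-1] + w,  w ~ N(0, q)
#   observation: y[k]  = mu[k] + v,    v ~ N(0, r)
def kalman_filter(ys, q=0.01, r=1.0, mu0=0.0, p0=1e6):
    mu, p, out = mu0, p0, []
    for y in ys:
        p = p + q                 # predict: state variance grows by q
        k = p / (p + r)           # Kalman gain
        mu = mu + k * (y - mu)    # update with the innovation y - mu
        p = (1.0 - k) * p         # posterior variance
        out.append(mu)
    return out

# Noisy observations of a constant true level of 5.0 (illustrative data).
random.seed(1)
ys = [5.0 + random.gauss(0.0, 1.0) for _ in range(200)]
est = kalman_filter(ys)
```

A fixed-interval smoother would additionally run backward over these filtered estimates; the paper's extension further lets the observation vector change dimension when a new brand enters.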
Operations and support cost modeling of conceptual space vehicles
Ebeling, Charles
1994-01-01
The University of Dayton is pleased to submit this annual report to the National Aeronautics and Space Administration (NASA) Langley Research Center, documenting the development of an operations and support (O&S) cost model as part of a larger life cycle cost (LCC) structure. It is intended for use during the conceptual design of new launch vehicles and spacecraft. This research is being conducted under NASA Research Grant NAG-1-1327. The effort shifts the focus from the first two years, in which a reliability and maintainability model was developed, to the initial development of an operations and support life cycle cost model. Cost categories were initially patterned after NASA's three-axis work breakdown structure, consisting of a configuration (vehicle) axis, a function axis, and a cost axis. A revised cost element structure (CES), currently under study by NASA, was used to establish the basic cost elements used in the model. While the focus of the effort was on operations and maintenance costs and other recurring costs, the computerized model allows other cost categories, such as RDT&E and production costs, to be addressed. Secondary tasks performed concurrently with the development of the costing model included support and upgrades to the reliability and maintainability (R&M) model. The primary result of the current research has been a methodology, and a computer implementation of that methodology, to provide timely operations and support cost analysis during conceptual design activities.
Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing
Nance, Donald; Liever, Peter; Nielsen, Tanner
2015-01-01
The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, subsequent rarefaction, and subsequent propagation from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test (SMAT), conducted at Marshall Space Flight Center (MSFC). The test data quantify the effectiveness of the SLS IOP suppression system and improve the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series require identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.
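Two of the analysis steps named above, waveform scaling and knockdown-factor computation, can be sketched generically. The sketch assumes (as is common in scale-model acoustics) that the time base stretches by the inverse of the geometric scale while amplitudes carry over, and defines a knockdown factor as the ratio of unsuppressed to suppressed peak-to-peak amplitude; the function names, scale factor, and waveforms are all illustrative, not SMAT values.

```python
# Hypothetical waveform-scaling and knockdown-factor helpers.

def scale_waveform_times(times, scale):
    """Stretch a subscale time base to full scale (amplitudes carried over)."""
    return [t / scale for t in times]

def knockdown_factor(unsuppressed, suppressed):
    """Ratio of unsuppressed to suppressed peak-to-peak amplitude."""
    peak_to_peak = lambda w: max(w) - min(w)
    return peak_to_peak(unsuppressed) / peak_to_peak(suppressed)

full_scale_times = scale_waveform_times([0.00, 0.01, 0.02], scale=0.05)  # 5% model
kd = knockdown_factor([1.0, -0.8, 0.2], [0.3, -0.2, 0.1])
```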
Sample size calculation to externally validate scoring systems based on logistic regression models.
Directory of Open Access Journals (Sweden)
Antonio Palazón-Bru
A sample size containing at least 100 events and 100 non-events has been suggested for validating a predictive model, regardless of the model being validated, even though certain factors (discrimination, parameterization and incidence) can influence the calibration of the predictive model. Scoring systems based on binary logistic regression models are a specific type of predictive model. The aim of this study was to develop an algorithm to determine the sample size for validating a scoring system based on a binary logistic regression model and to apply it to a case study. The algorithm was based on bootstrap samples in which the area under the ROC curve, the observed event probabilities through smooth curves, and a measure of the lack of calibration (the estimated calibration index) were calculated. To illustrate its use for interested researchers, the algorithm was applied to a scoring system, based on a binary logistic regression model, for determining mortality in intensive care units. In the case study provided, the algorithm obtained a sample size with 69 events, which is lower than the value suggested in the literature. An algorithm is provided for finding the appropriate sample size to validate scoring systems based on binary logistic regression models. This could be applied to determine the sample size in other similar cases.
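The bootstrap idea in the algorithm can be illustrated in miniature: resample a candidate-sized validation set and inspect the spread of the AUC across resamples. This sketch omits the calibration components (smoothed observed probabilities and the estimated calibration index) that the full algorithm also tracks, and the data are synthetic.

```python
import random

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_range(scores, labels, n_boot=200, seed=0):
    """Min and max AUC over bootstrap resamples of the validation set."""
    rng = random.Random(seed)
    n = len(scores)
    aucs = []
    for _ in range(n_boot):
        sample = [rng.randrange(n) for _ in range(n)]
        s = [scores[i] for i in sample]
        l = [labels[i] for i in sample]
        if 0 < sum(l) < n:               # need both events and non-events
            aucs.append(auc(s, l))
    return min(aucs), max(aucs)

# Synthetic scores: events (label 1) tend to score higher than non-events.
labels = [1] * 30 + [0] * 70
scores = [0.6 + 0.01 * i for i in range(30)] + [0.3 + 0.005 * i for i in range(70)]
lo, hi = bootstrap_auc_range(scores, labels)
```

A narrow (lo, hi) spread at a candidate sample size is the kind of evidence such an algorithm uses to judge the size adequate.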
Ren, Anna N; Neher, Robert E; Bell, Tyler; Grimm, James
2018-06-01
Preoperative planning is important to achieve successful implantation in primary total knee arthroplasty (TKA). However, traditional TKA templating techniques are not accurate enough to predict the component size to within a close range. With the goal of developing a general predictive statistical model using patient demographic information, ordinal logistic regression was applied to build a proportional odds model to predict the tibia component size. The study retrospectively collected the data of 1992 primary Persona Knee System TKA procedures. Of them, 199 procedures were randomly selected as testing data and the rest of the data were randomly partitioned between model training data and model evaluation data with a ratio of 7:3. Different models were trained and evaluated on the training and validation data sets after data exploration. The final model had patient gender, age, weight, and height as independent variables and predicted the tibia size to within one size 96% of the time on the validation data, 94% of the time on the testing data, and 92% of the time on a prospective cadaver data set. The study results indicated that the statistical model built by ordinal logistic regression can increase the accuracy of tibia sizing information for Persona Knee preoperative templating. This research shows statistical modeling may be used with radiographs to dramatically enhance the templating accuracy, efficiency, and quality. In general, this methodology can be applied to other TKA products when the data are applicable. Copyright © 2018 Elsevier Inc. All rights reserved.
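Prediction under a proportional-odds (ordinal logistic) model of the kind described works by thresholding a single linear predictor against ordered cutpoints. The cutpoints, coefficients, and predictor scalings below are invented for illustration, not the study's fitted values.

```python
# Sketch of prediction from a fitted proportional-odds model:
#   P(Y <= k | x) = logistic(theta_k - x.beta)
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict_size(x, beta, cutpoints):
    """Return the most probable ordinal category and the full distribution."""
    eta = sum(b * xi for b, xi in zip(beta, x))
    cum = [logistic(c - eta) for c in cutpoints] + [1.0]
    probs = [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]
    return max(range(len(probs)), key=probs.__getitem__), probs

# Hypothetical predictors: (gender, age/100, weight/100 kg, height/200 cm)
beta = [0.8, -0.3, 1.5, 2.0]
cutpoints = [0.5, 1.5, 2.5, 3.5]          # 5 ordered size categories
size, probs = predict_size([1, 0.6, 0.8, 0.9], beta, cutpoints)
```

The "within one size" accuracy reported above corresponds to summing the probability mass of the predicted category and its two neighbours.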
Modelling size-fractionated primary production in the Atlantic Ocean from remote sensing
Brewin, Robert J. W.; Tilstone, Gavin H.; Jackson, Thomas; Cain, Terry; Miller, Peter I.; Lange, Priscila K.; Misra, Ankita; Airs, Ruth L.
2017-11-01
Marine primary production influences the transfer of carbon dioxide between the ocean and atmosphere, and the availability of energy for the pelagic food web. Both the rate and the fate of organic carbon from primary production are dependent on phytoplankton size. A key aim of the Atlantic Meridional Transect (AMT) programme has been to quantify biological carbon cycling in the Atlantic Ocean and measurements of total primary production have been routinely made on AMT cruises, as well as additional measurements of size-fractionated primary production on some cruises. Measurements of total primary production collected on the AMT have been used to evaluate remote-sensing techniques capable of producing basin-scale estimates of primary production. Though models exist to estimate size-fractionated primary production from satellite data, these have not been well validated in the Atlantic Ocean, and have been parameterised using measurements of phytoplankton pigments rather than direct measurements of phytoplankton size structure. Here, we re-tune a remote-sensing primary production model to estimate production in three size fractions of phytoplankton (<2 μm, 2-10 μm, and >10 μm) in the Atlantic Ocean, using measurements of size-fractionated chlorophyll and size-fractionated photosynthesis-irradiance experiments conducted on AMT 22 and 23 using sequential filtration-based methods. The performance of the remote-sensing technique was evaluated using: (i) independent estimates of size-fractionated primary production collected on a number of AMT cruises using 14C on-deck incubation experiments and (ii) Monte Carlo simulations. Considering uncertainty in the satellite inputs and model parameters, we estimate an average model error of between 0.27 and 0.63 for log10-transformed size-fractionated production, with lower errors for the small size class (<2 μm) and higher errors for the large size class (>10 μm), and errors generally higher in oligotrophic waters. Application to satellite data in 2007 suggests the contribution of cells <2 μm to total
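Size-fractionated models of this family typically rest on partitioning total chlorophyll among size classes using saturating exponential forms, in which the two smaller classes approach fixed asymptotic concentrations as total chlorophyll increases. A sketch of that partitioning with illustrative parameter values (not the re-tuned AMT values):

```python
# Three-component chlorophyll partition (illustrative parameters only).
import math

def size_fractions(C, Cm_pn=1.06, S_pn=0.85, Cm_p=0.11, S_p=6.0):
    """Split total chlorophyll C (mg m^-3) into pico, nano, micro classes."""
    c_pn = Cm_pn * (1.0 - math.exp(-S_pn * C))   # pico + nano (< 10 um)
    c_p = Cm_p * (1.0 - math.exp(-S_p * C))      # pico (< 2 um)
    c_nano = c_pn - c_p                          # 2-10 um
    c_micro = C - c_pn                           # > 10 um
    return c_p, c_nano, c_micro

pico, nano, micro = size_fractions(0.2)
```

By construction the three fractions sum to the total, and the microplankton share dominates at high chlorophyll, matching the eutrophic-to-oligotrophic gradient the abstract describes.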
Dynamics in the Parameter Space of a Neuron Model
Rech, Paulo C.
2012-06-01
Some two-dimensional parameter-space diagrams are numerically obtained by considering the largest Lyapunov exponent for a four-dimensional thirteen-parameter Hindmarsh-Rose neuron model. Several different parameter planes are considered, and it is shown that depending on the combination of parameters, a typical scenario can be preserved: for some choice of two parameters, the parameter plane presents a comb-shaped chaotic region embedded in a large periodic region. It is also shown that there exist regions close to these comb-shaped chaotic regions, separated by the comb teeth, organizing themselves in period-adding bifurcation cascades.
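Parameter-space diagrams of this kind are built by estimating the largest Lyapunov exponent at each grid point and colouring by its sign. The sketch below shows the estimator for the 1-D logistic map, where the exponent is simply the orbit average of log|f'(x)|; the paper's Hindmarsh-Rose system is a 4-D flow and requires an ODE-based variant of the same idea.

```python
import math

def lyapunov_logistic(r, n_transient=500, n_iter=2000, x0=0.4):
    """Estimate the largest Lyapunov exponent of x -> r*x*(1-x)."""
    x = x0
    for _ in range(n_transient):           # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        # log |f'(x)|, floored to avoid log(0) at a superstable point
        total += math.log(max(abs(r * (1 - 2 * x)), 1e-300))
    return total / n_iter

chaotic = lyapunov_logistic(3.9)    # positive exponent: chaos
periodic = lyapunov_logistic(3.2)   # negative exponent: stable 2-cycle
```

Scanning two parameters over a grid and plotting where the exponent is positive produces exactly the comb-shaped chaotic regions described above.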
Coset Space Dimensional Reduction approach to the Standard Model
International Nuclear Information System (INIS)
Farakos, K.; Kapetanakis, D.; Koutsoumbas, G.; Zoupanos, G.
1988-01-01
We present a unified theory in ten dimensions based on the gauge group E8, which is dimensionally reduced to the Standard Model SU(3)c × SU(2)L × U(1), which breaks further spontaneously to SU(3)c × U(1)em. The model gives predictions for sin²θW and proton decay similar to those of the minimal SU(5) GUT, while a natural choice of the coset space radii predicts light Higgs masses à la Coleman-Weinberg.
Sizing and scaling requirements of a large-scale physical model for code validation
International Nuclear Information System (INIS)
Khaleel, R.; Legore, T.
1990-01-01
Model validation is an important consideration in the application of a code for performance assessment and therefore in assessing the long-term behavior of the engineered and natural barriers of a geologic repository. Scaling considerations relevant to porous media flow are reviewed. An analysis approach is presented for determining the sizing requirements of a large-scale physical hydrology model. The physical model will be used to validate performance assessment codes that evaluate the long-term behavior of the repository isolation system. Numerical simulation results for sizing requirements are presented for a porous medium model in which the media properties are spatially uncorrelated.
Construction of fuzzy spaces and their applications to matrix models
Abe, Yasuhiro
Quantization of spacetime by means of finite dimensional matrices is the basic idea of fuzzy spaces. Although there remains an issue of quantizing time, the idea is simple and it provides an interesting interplay of various ideas in mathematics and physics. Shedding some light on such an interplay is the main theme of this dissertation. The dissertation roughly separates into two parts. In the first part, we consider rather mathematical aspects of fuzzy spaces, namely, their construction. We begin with a review of the construction of fuzzy complex projective spaces CP^k (k = 1, 2, ...) in relation to geometric quantization. This construction facilitates defining symbols and star products on fuzzy CP^k. Algebraic construction of fuzzy CP^k is also discussed. We then present the construction of fuzzy S^4, utilizing the fact that CP^3 is an S^2 bundle over S^4. Fuzzy S^4 is obtained by imposing an additional algebraic constraint on fuzzy CP^3. Consequently it is proposed that coordinates on fuzzy S^4 are described by certain block-diagonal matrices. It is also found that fuzzy S^8 can analogously be constructed. In the second part of this dissertation, we consider applications of fuzzy spaces to physics. We first consider theories of gravity on fuzzy spaces, anticipating that they may offer a novel way of regularizing spacetime dynamics. We obtain actions for gravity on fuzzy S^2 and on fuzzy CP^3 in terms of finite dimensional matrices. Application to M(atrix) theory is also discussed. With an introduction of extra potentials to the theory, we show that it also has new brane solutions whose transverse directions are described by fuzzy S^4 and fuzzy CP^3. The extra potentials can be considered as fuzzy versions of differential forms or fluxes, which enable us to discuss compactification models of M(atrix) theory. In particular, compactification down to fuzzy S^4 is discussed and a realistic matrix model of M-theory in four dimensions is proposed.
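The simplest concrete example of the construction described, and a warm-up for the fuzzy CP^k and S^4 cases, is the fuzzy two-sphere (CP^1 = S^2): coordinates x_i = J_i / sqrt(j(j+1)) built from the spin-j su(2) generators, which satisfy x·x = 1 exactly at every matrix size. A numerical check at spin 3/2:

```python
import numpy as np

def su2_generators(spin):
    """Spin-j angular momentum matrices J_x, J_y, J_z of dimension 2j+1."""
    dim = int(2 * spin + 1)
    m = np.array([spin - k for k in range(dim)])
    jz = np.diag(m).astype(complex)
    jp = np.zeros((dim, dim), dtype=complex)       # raising operator J+
    for k in range(1, dim):
        # <m+1 | J+ | m> = sqrt(j(j+1) - m(m+1))
        jp[k - 1, k] = np.sqrt(spin * (spin + 1) - m[k] * (m[k] + 1))
    jx = (jp + jp.conj().T) / 2
    jy = (jp - jp.conj().T) / 2j
    return jx, jy, jz

spin = 1.5                            # 4 x 4 matrices: a small fuzzy truncation
jx, jy, jz = su2_generators(spin)
norm = np.sqrt(spin * (spin + 1))
xs = [jx / norm, jy / norm, jz / norm]
radius2 = sum(x @ x for x in xs)      # should equal the identity matrix
```

The commutators [x_i, x_j] shrink like 1/sqrt(j(j+1)), so ordinary (commutative) S^2 is recovered as the matrix size grows.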
Directory of Open Access Journals (Sweden)
Felix Barber
2017-11-01
Organisms across all domains of life regulate the size of their cells. However, the means by which this is done is poorly understood. We study two abstracted “molecular” models for size regulation: inhibitor dilution and initiator accumulation. We apply the models to two settings: bacteria like Escherichia coli, that grow fully before they set a division plane and divide into two equally sized cells, and cells that form a bud early in the cell division cycle, confine new growth to that bud, and divide at the connection between that bud and the mother cell, like the budding yeast Saccharomyces cerevisiae. In budding cells, delaying cell division until buds reach the same size as their mother leads to very weak size control, with average cell size and standard deviation of cell size increasing over time and saturating up to 100-fold higher than those values for cells that divide when the bud is still substantially smaller than its mother. In budding yeast, both inhibitor dilution and initiator accumulation models are consistent with the observation that the daughters of diploid cells add a constant volume before they divide. This “adder” behavior has also been observed in bacteria. We find that in bacteria an inhibitor dilution model produces adder correlations that are not robust to noise in the timing of DNA replication initiation or in the timing from initiation of DNA replication to cell division (the C+D period). In contrast, in bacteria an initiator accumulation model yields robust adder correlations in the regime where noise in the timing of DNA replication initiation is much greater than noise in the C+D period, as reported previously (Ho and Amir, 2015). In bacteria, division into two equally sized cells does not broaden the size distribution.
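The “adder” principle discussed above, in which each cell adds a constant volume between birth and division and then splits in half, regulates size because the birth-size map contracts toward the added volume. A deterministic sketch (real cells would add noise to the added volume and the division ratio):

```python
def adder_birth_sizes(s0, delta, n_gen=20):
    """Deterministic adder: grow by delta, then divide symmetrically."""
    sizes = [s0]
    for _ in range(n_gen):
        division_size = sizes[-1] + delta   # constant added volume
        sizes.append(division_size / 2)     # symmetric division
    return sizes

trajectory = adder_birth_sizes(s0=3.0, delta=1.0)
```

Each generation halves the deviation from the fixed point (birth size = delta), so an initially too-large cell relaxes back within a few divisions.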
System resiliency quantification using non-state-space and state-space analytic models
International Nuclear Information System (INIS)
Ghosh, Rahul; Kim, DongSeong; Trivedi, Kishor S.
2013-01-01
Resiliency is becoming an important service attribute for large scale distributed systems and networks. Key problems in resiliency quantification are the lack of consensus on the definition of resiliency and the lack of a systematic approach to quantifying system resiliency. In general, resiliency is defined as the ability of a system, person, or organization to recover from, defy, or resist any shock, insult, or disturbance [1]. Many researchers interpret resiliency as a synonym for fault-tolerance and reliability/availability. However, the effect of failure/repair on systems is already covered by reliability/availability measures, and the effect on individual jobs is well covered under the umbrella of performability [2] and task completion time analysis [3]. We use the definition of Laprie [4] and Simoncini [5], in which resiliency is the persistence of service delivery that can justifiably be trusted when facing changes. The changes we are referring to here are beyond the envelope of system configurations already considered during system design, that is, beyond fault tolerance. In this paper, we outline a general approach for system resiliency quantification. Using examples of non-state-space and state-space stochastic models, we analytically and numerically quantify the resiliency of system performance, reliability, availability and performability measures with respect to structural and parametric changes.
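A minimal state-space building block for such analyses is a single repairable component modeled as a two-state continuous-time Markov chain with failure rate lam and repair rate mu. Its transient availability has a closed form; resiliency studies of the kind outlined above examine how such a measure recovers after a parametric change pushes the system off its steady state. The rates below are hypothetical.

```python
import math

def availability(t, lam, mu, a0=1.0):
    """Transient availability of a 2-state CTMC starting at availability a0:
    A(t) = A_ss + (a0 - A_ss) * exp(-(lam + mu) * t)."""
    a_ss = mu / (lam + mu)                       # steady-state availability
    return a_ss + (a0 - a_ss) * math.exp(-(lam + mu) * t)

a_start = availability(0.0, lam=0.01, mu=1.0)    # starts fully available
a_steady = availability(1e6, lam=0.01, mu=1.0)   # approaches mu / (lam + mu)
```

A resiliency experiment would, for example, double lam at some instant and quantify the excursion and recovery time of A(t) toward its new steady state.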
Calculational models of close-spaced thermionic converters
International Nuclear Information System (INIS)
McVey, J.B.
1983-01-01
Two new calculational models have been developed in conjunction with the SAVTEC experimental program. These models have been used to analyze data from experimental close-spaced converters, providing values for spacing, electrode work functions, and converter efficiency. They have also been used to make performance predictions for such converters over a wide range of conditions. Both models are intended for use in the collisionless (Knudsen) regime. They differ from each other in that the simpler one uses a Langmuir-type formulation which only considers electrons emitted from the emitter. This approach is implemented in the LVD (Langmuir Vacuum Diode) computer program, which has the virtue of being both simple and fast. The more complex model also includes both Saha-Langmuir emission of positive cesium ions from the emitter and collector back emission. Computer implementation is by the KMD1 (Knudsen Mode Diode) program. The KMD1 model derives the particle distribution functions from the Vlasov equation. From these the particle densities are found for various interelectrode motive shapes. Substituting the particle densities into Poisson's equation gives a second-order differential equation for potential. This equation can be integrated once analytically. The second integration, which gives the interelectrode motive, is performed numerically by the KMD1 program. This is complicated by the fact that the integrand is often singular at one end point of the integration interval. The program performs a transformation on the integrand to make it finite over the entire interval. Once the motive has been computed, the output voltage, current density, power density, and efficiency are found. The program is presently unable to operate when the ion richness ratio β is between about 0.8 and 1.0, due to the occurrence of oscillatory motives.
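The starting point for a Langmuir-type thermionic calculation of this kind is the saturation emission current from the emitter, given by the Richardson-Dushman equation, before space-charge and motive effects are applied. A sketch with a hypothetical emitter temperature and work function:

```python
import math

A_R = 120.0        # A cm^-2 K^-2, theoretical Richardson constant
K_B = 8.617e-5     # Boltzmann constant, eV/K

def richardson_current(T, phi):
    """Saturation emission current density (A/cm^2), work function phi in eV."""
    return A_R * T ** 2 * math.exp(-phi / (K_B * T))

j_emit = richardson_current(T=1800.0, phi=2.6)   # hypothetical cesiated emitter
```

The exponential dependence on phi/kT is why small work-function changes, of the sort the models extract from converter data, shift the current by orders of magnitude.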
Pope, J.G.; Rice, J.C.; Daan, N.; Jennings, S.; Gislason, H.
2006-01-01
To measure and predict the response of fish communities to exploitation, it is necessary to understand how the direct and indirect effects of fishing interact. Because fishing and predation are size-selective processes, the potential response can be explored with size-based models. We use a
Analysis of litter size and average litter weight in pigs using a recursive model
DEFF Research Database (Denmark)
Varona, Luis; Sorensen, Daniel; Thompson, Robin
2007-01-01
An analysis of litter size and average piglet weight at birth in Landrace and Yorkshire using a standard two-trait mixed model (SMM) and a recursive mixed model (RMM) is presented. The RMM establishes a one-way link from litter size to average piglet weight. It is shown that there is a one-to-one correspondence between the parameters of SMM and RMM and that they generate equivalent likelihoods. As parameterized in this work, the RMM tests for the presence of a recursive relationship between additive genetic values, permanent environmental effects, and specific environmental effects of litter size, on average piglet weight. The equivalent standard mixed model tests whether or not the covariance matrices of the random effects have a diagonal structure. In Landrace, posterior predictive model checking supports a model without any form of recursion or, alternatively, a SMM with diagonal covariance matrices.
Influence of Li-ion Battery Models in the Sizing of Hybrid Storage Systems with Supercapacitors
DEFF Research Database (Denmark)
Pinto, Claudio; Barreras, Jorge Varela; de Castro, Ricardo
2014-01-01
This paper presents a comparative study of the influence of different aggregated electrical circuit battery models in the sizing process of a hybrid energy storage system (ESS), composed of Li-ion batteries and supercapacitors (SCs). The aim is to find the number of cells required to propel a certain vehicle over a predefined driving cycle. During this process, three battery models are considered. The first consists of a linear static zeroth-order battery model over a restricted operating window. The second is a non-linear static model, while the third takes into account first-order dynamics of the battery. Simulation results demonstrate that the adoption of a more accurate battery model in the sizing of hybrid ESSs prevents over-sizing, leading to a reduction in the number of cells of up to 29% and a cost decrease of up to 10%.
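The third model class mentioned, one that captures first-order battery dynamics, is commonly realized as an equivalent circuit with an open-circuit voltage in series with an ohmic resistance and one RC pair. A generic simulation sketch; the parameter values are illustrative, not those of any specific Li-ion cell in the study:

```python
def simulate_terminal_voltage(i_load, t_end, dt=0.1,
                              ocv=3.7, r0=0.01, r1=0.02, c1=1000.0):
    """Terminal voltage trace under a constant discharge current (A)."""
    v_rc = 0.0
    trace = []
    for _ in range(int(t_end / dt)):
        # forward-Euler update of the RC-branch voltage (tau = r1 * c1)
        v_rc += dt * (i_load / c1 - v_rc / (r1 * c1))
        trace.append(ocv - i_load * r0 - v_rc)
    return trace

trace = simulate_terminal_voltage(i_load=10.0, t_end=100.0)
```

The static models in the comparison would drop the RC branch (and, for the zeroth-order model, linearize the OCV), which is exactly the fidelity difference that drives the reported over-sizing.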
Modeling size effects on the transformation behavior of shape memory alloy micropillars
International Nuclear Information System (INIS)
Hernandez, Edwin A Peraza; Lagoudas, Dimitris C
2015-01-01
The size dependence of the thermomechanical response of shape memory alloys (SMAs) at the micro and nano-scales has gained increasing attention in the engineering community due to existing and potential uses of SMAs as solid-state actuators and components for energy dissipation in small scale devices. Particularly, their recent uses in microelectromechanical systems (MEMS) have made SMAs attractive options as active materials in small scale devices. One factor limiting further application, however, is the inability to effectively and efficiently model the observed size dependence of the SMA behavior for engineering applications. Therefore, in this work, a constitutive model for the size-dependent behavior of SMAs is proposed. Experimental observations are used to motivate the extension of an existing thermomechanical constitutive model for SMAs to account for the scale effects. It is proposed that such effects can be captured via characteristic length dependent material parameters in a power-law manner. The size dependence of the transformation behavior of NiFeGa micropillars is investigated in detail and used as model prediction cases. The constitutive model is implemented in a finite element framework and used to simulate and predict the response of SMA micropillars with different sizes. The results show a good agreement with experimental data. A parametric study performed using the calibrated model shows that the influence of micropillar aspect ratio and taper angle on the compression response is significantly smaller than that of the micropillar average diameter. It is concluded that the model is able to capture the size dependent transformation response of the SMA micropillars. In addition, the simplicity of the calibration and implementation of the proposed model make it practical for the design and numerical analysis of small scale SMA components that exhibit size dependent responses. (paper)
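The model's central device, characteristic-length-dependent material parameters with a power-law form, can be written generically as follows. The symbol names, exponent, and reference scale are hypothetical illustrations of the functional form, not calibrated NiFeGa values.

```python
def size_dependent_parameter(d, h_bulk=1.0, d0=1.0, n=0.5):
    """Power-law size scaling of a material parameter:
    h(d) = h_bulk * (d / d0) ** (-n), growing as diameter d shrinks."""
    return h_bulk * (d / d0) ** (-n)

h_small = size_dependent_parameter(0.25)   # smaller pillar -> larger parameter
h_large = size_dependent_parameter(4.0)    # approaches bulk behavior
```

Calibrating h_bulk, d0, and n against micropillars of several diameters is the kind of fit that lets the model predict the compression response at unseen sizes.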
Baranes, Adrien F; Oudeyer, Pierre-Yves; Gottlieb, Jacqueline
2014-01-01
Devising efficient strategies for exploration in large open-ended spaces is one of the most difficult computational problems of intelligent organisms. Because the available rewards are ambiguous or unknown during the exploratory phase, subjects must act in an intrinsically motivated fashion. However, a vast majority of behavioral and neural studies to date have focused on decision making in reward-based tasks, and the rules guiding intrinsically motivated exploration remain largely unknown. To examine this question we developed a paradigm for systematically testing the choices of human observers in a free play context. Adult subjects played a series of short computer games of variable difficulty, and freely chose which game they wished to sample without external guidance or physical rewards. Subjects performed the task in three distinct conditions where they sampled from a small or a large choice set (7 vs. 64 possible levels of difficulty), and where they did or did not have the possibility to sample new games at a constant level of difficulty. We show that despite the absence of external constraints, the subjects spontaneously adopted a structured exploration strategy whereby they (1) started with easier games and progressed to more difficult games, (2) sampled the entire choice set including extremely difficult games that could not be learnt, (3) repeated moderate- and high-difficulty games much more frequently than was predicted by chance, and (4) had higher repetition rates and chose higher speeds if they could generate new sequences at a constant level of difficulty. The results suggest that intrinsically motivated exploration is shaped by several factors including task difficulty, novelty and the size of the choice set, and these come into play to serve two internal goals: maximizing the subjects' knowledge of the available tasks (exploring the limits of the task set), and maximizing their competence (performance and skills) across the task set.
A Joint Economic Lot Size Model for a Vendor-Buyer Case with Probabilistic Demand
Directory of Open Access Journals (Sweden)
Wakhid Ahmad Jauhari
2009-01-01
In this paper we consider a single-vendor single-buyer integrated inventory model with probabilistic demand and equal delivery lot sizes. The model contributes to the current literature by relaxing the deterministic demand assumption, which has been used in almost all integrated inventory models. The objective is to minimize the expected total cost incurred by the vendor and the buyer. We develop effective iterative procedures for finding the optimal solution. Numerical examples are used to illustrate the benefit of integration. A sensitivity analysis is performed to explore the effect of key parameters on delivery lot size, safety factor, production lot size factor and the expected total cost. The results of the numerical examples indicate that our models can achieve a significant amount of savings. Finally, we compare the results of our proposed model with a simulation model. Abstract in Bahasa Indonesia (translated): In this research, a joint vendor-buyer model with probabilistic demand and equal delivery lot sizes is developed. In the model, each order lot is delivered in several shipments, and the vendor produces the items in a production batch that is an integer multiple of the delivery lot. An algorithm is developed to solve the resulting mathematical model. In addition, the effect of parameter changes on model behavior is investigated through a sensitivity analysis of several key parameters, such as lot size, safety stock and total inventory cost. A simulation model is also built to examine the performance of the mathematical model under realistic conditions. Keywords: joint model, probabilistic demand, delivery lot, supply chain
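The cost structure underlying such vendor-buyer models can be sketched in its deterministic form; the probabilistic-demand model in the paper adds safety stock and expected shortage terms on top of a structure like this. Here q is the delivery lot, m the number of shipments per production batch, and all parameter values are illustrative; a grid search stands in for the paper's iterative procedure.

```python
# Deterministic joint vendor-buyer total cost (illustrative parameters):
# D demand rate, P production rate, S_v/S_b setup and order costs,
# h_v/h_b vendor and buyer holding costs.

def joint_total_cost(q, m, D=1000.0, P=3200.0, S_v=400.0, S_b=25.0,
                     h_v=4.0, h_b=5.0):
    setup = D * S_v / (m * q) + D * S_b / q
    holding = h_b * q / 2 + h_v * (q / 2) * (m * (1 - D / P) - 1 + 2 * D / P)
    return setup + holding

best = min(((joint_total_cost(q, m), q, m)
            for q in range(10, 400, 2) for m in range(1, 10)),
           key=lambda t: t[0])
cost, q_opt, m_opt = best
```

Comparing the jointly optimized (q, m) against each party optimizing alone is how such studies quantify the "benefit of integration".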
Women's Preferences for Penis Size: A New Research Method Using Selection among 3D Models.
Prause, Nicole; Park, Jaymie; Leung, Shannon; Miller, Geoffrey
2015-01-01
Women's preferences for penis size may affect men's comfort with their own bodies and may have implications for sexual health. Studies of women's penis size preferences typically have relied on their abstract ratings or selecting amongst 2D, flaccid images. This study used haptic stimuli to allow assessment of women's size recall accuracy for the first time, as well as examine their preferences for erect penis sizes in different relationship contexts. Women (N = 75) selected amongst 33, 3D models. Women recalled model size accurately using this method, although they made more errors with respect to penis length than circumference. Women preferred a penis of slightly larger circumference and length for one-time (length = 6.4 inches/16.3 cm, circumference = 5.0 inches/12.7 cm) versus long-term (length = 6.3 inches/16.0 cm, circumference = 4.8 inches/12.2 cm) sexual partners. These first estimates of erect penis size preferences using 3D models suggest women accurately recall size and prefer penises only slightly larger than average.
Modeling and Simulation for Multi-Mission Space Exploration Vehicle
Chang, Max
2011-01-01
Asteroids and Near-Earth Objects [NEOs] are of great interest for future space missions. The Multi-Mission Space Exploration Vehicle [MMSEV] is being considered for future Near Earth Object missions and requires detailed planning and study of its Guidance, Navigation, and Control [GNC]. A possible mission of the MMSEV to a NEO would be to navigate the spacecraft to a stationary orbit with respect to the rotating asteroid and proceed to anchor into the surface of the asteroid with robotic arms. The Dynamics and Real-Time Simulation [DARTS] laboratory develops reusable models and simulations for the design and analysis of missions. In this paper, the development of guidance and anchoring models are presented together with their role in achieving mission objectives and relationships to other parts of the simulation. One important aspect of guidance is in developing methods to represent the evolution of kinematic frames related to the tasks to be achieved by the spacecraft and its robot arms. In this paper, we compare various types of mathematical interpolation methods for position and quaternion frames. Subsequent work will be on analyzing the spacecraft guidance system with different movements of the arms. With the analyzed data, the guidance system can be adjusted to minimize the errors in performing precision maneuvers.
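The interpolation comparison mentioned above includes quaternion frames; spherical linear interpolation (slerp) is the standard baseline for interpolating rotations. The sketch below is a generic illustration, not the DARTS implementation; the function name and the test rotation are our own.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0, q1 at fraction t."""
    q0 = q0 / np.linalg.norm(q0)
    q1 = q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    # Take the short arc: q and -q represent the same rotation.
    if dot < 0.0:
        q1, dot = -q1, -dot
    if dot > 0.9995:            # nearly parallel: fall back to normalized lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)      # angle between the two quaternions
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

# Endpoints are recovered exactly; the midpoint stays on the unit sphere.
q0 = np.array([1.0, 0.0, 0.0, 0.0])                      # identity
q1 = np.array([np.cos(np.pi/4), np.sin(np.pi/4), 0, 0])  # 90 deg about x
qm = slerp(q0, q1, 0.5)                                  # 45 deg about x
```

Unlike component-wise linear interpolation, slerp traverses the rotation at constant angular rate, which matters when arm and spacecraft frames must move smoothly.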
Legett, C., IV; Glotch, T. D.; Lucey, P. G.
2015-12-01
Space weathering is a diverse set of processes that occur on the surfaces of airless bodies due to exposure to the space environment. One of its effects is the generation of nanophase iron particles in glassy rims on mineral grains due to sputtering of iron-bearing minerals. These particles have a size-dependent effect on visible and near-infrared (VNIR) reflectance spectra: smaller-diameter particles both redden and darken the spectra, while larger particles (> 300 nm) darken without reddening, with a gradual shift between the two behaviors at intermediate sizes. In this work, we present results from the Multiple Sphere T-Matrix (MSTM) scattering model in combination with Hapke theory to explore the particle size and iron content parameter spaces with respect to VNIR (700-1700 nm) spectral slope. Previous work has shown that the MSTM-Hapke hybrid model offers improvements over Mie-Hapke models. Virtual particles are constructed out of an arbitrary number of spheres, and each sphere is assigned a refractive index and extinction coefficient for each wavelength of interest. The model then directly solves Maxwell's equations at every wave-particle interface to predict the scattering, extinction, and absorption efficiencies. These are then put into a simplified Hapke bidirectional reflectance model that yields a predicted reflectance. Preliminary results show a region of maximum spectral slope at intermediate iron particle diameters; further model runs are planned to better refine the extent of this region. Companion laboratory work using mixtures of powdered aerogel and nanophase iron particles provides a point of comparison to the modeling efforts. The effects of particle size on reflectance and emissivity in a nearly ideal scatterer (aerogel) are also observed, with comparisons to model data.
The King model for electrons in a finite-size ultracold plasma
Energy Technology Data Exchange (ETDEWEB)
Vrinceanu, D; Collins, L A [Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Balaraman, G S [School of Physics, Georgia Institute of Technology, Atlanta, GA 30332 (United States)
2008-10-24
A self-consistent model for a finite-size non-neutral ultracold plasma is obtained by extending a conventional model of globular star clusters. This model describes the dynamics of electrons at quasi-equilibrium trapped within the potential created by a cloud of stationary ions. A random sample of electron positions and velocities can be generated with the statistical properties defined by this model.
Haveman, Steven P.; Bonnema, G. Maarten
2013-01-01
Most formal models are used in detailed design and focus on a single domain. Few approaches exist that can effectively tie these lower-level models to a high-level system model during design space exploration. This complicates the validation of high-level system requirements during detailed design. In this paper, we define requirements for a high-level model that is firstly driven by key systems engineering challenges present in industry and secondly connects to several formal and d...
Model of magnetic reconnection in space and astrophysical plasmas
Energy Technology Data Exchange (ETDEWEB)
Boozer, Allen H. [Department of Applied Physics and Applied Mathematics, Columbia University, New York, New York 10027 (United States)
2013-03-15
Maxwell's equations imply that exponentially smaller non-ideal effects than commonly assumed can give rapid magnetic reconnection in space and astrophysical plasmas. In an ideal evolution, magnetic field lines act as stretchable strings, which can become ever more entangled but cannot be cut. High entanglement makes the lines exponentially sensitive to small non-ideal changes in the magnetic field. The cause is well known in popular culture as the butterfly effect and in the theory of deterministic dynamical systems as a sensitive dependence on initial conditions, but the importance to magnetic reconnection is not generally recognized. Two-coordinate models are too constrained geometrically for the required entanglement, but otherwise the effect is general and can be studied in simple models. A simple model is introduced, which is periodic in the x and y Cartesian coordinates and bounded by perfectly conducting planes in z. Starting from a constant magnetic field in the z direction, reconnection is driven by a spatially smooth, bounded force. The model is complete and could be used to study the impulsive transfer of energy between the magnetic field and the ions and electrons using a kinetic plasma model.
Environment modelling in near Earth space: Preliminary LDEF results
Coombs, C. R.; Atkinson, D. R.; Wagner, J. D.; Crowell, L. B.; Allbrooks, M.; Watts, A. J.
1992-01-01
Hypervelocity impacts by space debris cause not only local cratering or penetrations, but also large areas of damage in coated, painted, or laminated surfaces. Features examined in these analyses display interesting morphological characteristics, commonly exhibiting a concentric ringed appearance. Virtually all features greater than 0.2 mm in diameter possess a spall zone in which all of the paint was removed from the aluminum surface. These spall zones vary in size from approximately 2-5 crater diameters. The actual craters in the aluminum substrate vary from central pits without raised rims, to morphologies more typical of craters formed in aluminum under hypervelocity laboratory conditions for the larger features. Most features also possess what is referred to as a 'shock zone'. These zones vary in size from approximately 1-20 crater diameters. In most cases, only the outermost layer of paint was affected by this impact-related phenomenon. Several impacts possess ridge-like structures encircling the area in which this outermost paint layer was removed. In many ways, such features resemble the lunar impact basins, but on an extremely reduced scale. Overall, there were no noticeable penetrations, bulges, or spallation features on the backside of the tray. On Row 12, approximately 85 degrees from the leading edge (ram direction), there was approximately one impact per 15 cm². On the trailing edge, there was approximately one impact per 72 cm². Currently, craters on four aluminum experiment trays from Bay E09, directly on the leading edge, are being measured and analyzed. Preliminary results have produced more than 2200 craters on approximately 1500 cm², or approximately 1 impact per 0.7 cm².
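The quoted leading-edge areal density is a simple ratio and can be sanity-checked directly; the variable names below are our own.

```python
# Crater count and tray area quoted in the LDEF analysis above.
craters = 2200
area_cm2 = 1500.0
flux = craters / area_cm2          # impacts per cm^2
area_per_impact = 1.0 / flux       # cm^2 per impact
print(round(area_per_impact, 2))   # ~0.68 cm^2, i.e. roughly 1 impact per 0.7 cm^2
```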
Milledge, David G; Bellugi, Dino; McKean, Jim A; Densmore, Alexander L; Dietrich, William E
2014-11-01
The size of a shallow landslide is a fundamental control on both its hazard and geomorphic importance. Existing models are either unable to predict landslide size or are computationally intensive such that they cannot practically be applied across landscapes. We derive a model appropriate for natural slopes that is capable of predicting shallow landslide size but simple enough to be applied over entire watersheds. It accounts for lateral resistance by representing the forces acting on each margin of potential landslides using earth pressure theory and by representing root reinforcement as an exponential function of soil depth. We test our model's ability to predict failure of an observed landslide where the relevant parameters are well constrained by field data. The model predicts failure for the observed scar geometry and finds that larger or smaller conformal shapes are more stable. Numerical experiments demonstrate that friction on the boundaries of a potential landslide increases considerably the magnitude of lateral reinforcement, relative to that due to root cohesion alone. We find that there is a critical depth in both cohesive and cohesionless soils, resulting in a minimum size for failure, which is consistent with observed size-frequency distributions. Furthermore, the differential resistance on the boundaries of a potential landslide is responsible for a critical landslide shape which is longer than it is wide, consistent with observed aspect ratios. Finally, our results show that minimum size increases as approximately the square of failure surface depth, consistent with observed landslide depth-area data.
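As a rough illustration of why margin resistance implies a minimum failure size, the toy force balance below (our own simplification, not the paper's earth-pressure formulation; all parameter names and values are invented) pits basal friction plus a lateral term that grows with depth squared against the downslope driving force.

```python
import numpy as np

def factor_of_safety(L, z, slope_deg=35.0, phi_deg=33.0,
                     k_e=0.05, rho=1600.0, g=9.81):
    """Factor of safety for a square potential landslide of side L (m) and
    soil depth z (m). A deliberately simplified caricature of an
    earth-pressure model: basal friction resists the downslope weight, and
    the four margins add a resistance proportional to z^2 (lateral stress
    integrated over depth). Parameter values are illustrative only."""
    th, phi = np.radians(slope_deg), np.radians(phi_deg)
    weight = rho * g * z * L**2
    driving = weight * np.sin(th)                 # downslope component of weight
    basal = weight * np.cos(th) * np.tan(phi)     # friction on the base
    lateral = 4 * L * 0.5 * k_e * rho * g * z**2  # resistance on the four margins
    return (basal + lateral) / driving

# Margins stabilise small slabs: FS > 1 at L = 1 m but FS < 1 at L = 100 m,
# so a minimum failure size exists; since FS depends only on z/L here, the
# critical side length grows linearly with depth (minimum area ~ z^2).
fs_small = factor_of_safety(1.0, 1.0)
fs_large = factor_of_safety(100.0, 1.0)
```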
Woodbury, Sarah K.
2008-01-01
United Space Alliance introduced its Human Engineering Modeling and Performance Laboratory in early 2007 to address the problematic workspace design issues that the Space Shuttle has imposed on technicians performing maintenance and inspection operations. The Space Shuttle was not expected to require the extensive maintenance it undergoes between flights. As a result, extensive, costly resources have been expended on workarounds and modifications to accommodate ground processing personnel. Consideration of basic human factors principles for maintenance design is essential during the design phase of future space vehicles, facilities, and equipment. Simulation will be needed to test and validate designs before implementation.
Models of Learning Space: Integrating Research on Space, Place and Learning in Higher Education
Ellis, R. A.; Goodyear, P.
2016-01-01
Learning space research is a relatively new field of study that seeks to inform the design, evaluation and management of learning spaces. This paper reviews a dispersed and fragmented literature relevant to understanding connections between university learning spaces and student learning activities. From this review, the paper distils a number of…
DEFF Research Database (Denmark)
Caparroy, P.; Thygesen, Uffe Høgsbro; Visser, Andre
2000-01-01
…of being captured. By combining the attack success model with previously published hydrodynamic models of predator and prey perception, we examine how predator foraging behaviour and prey perceptive ability affect the size spectra of encountered and captured copepod prey. We examine food size spectra of (i) a rheotactic cruising predator, (ii) a suspension-feeding hovering copepod and (iii) a larval fish. For rheotactic predators such as carnivorous copepods, a central assumption of the model is that attack is triggered by prey escape reaction, which in turn depends on the deformation rate of the fluid created…
Resolving Microzooplankton Functional Groups In A Size-Structured Planktonic Model
Taniguchi, D.; Dutkiewicz, S.; Follows, M. J.; Jahn, O.; Menden-Deuer, S.
2016-02-01
Microzooplankton are important marine grazers, often consuming a large fraction of primary productivity. They consist of a great diversity of organisms with different behaviors, characteristics, and rates. This functional diversity, and its consequences, are not currently reflected in large-scale ocean ecological simulations. How should these organisms be represented, and what are the implications for their biogeography? We develop a size-structured, trait-based model to characterize a diversity of microzooplankton functional groups. We compile and examine size-based laboratory data on the traits, revealing some patterns with size and functional group that we interpret with mechanistic theory. Fitting the model to the data provides parameterizations of key rates and properties, which we employ in a numerical ocean model. The diversity of grazing preference, rates, and trophic strategies enables the coexistence of different functional groups of micro-grazers under various environmental conditions, and the model produces testable predictions of the biogeography.
Simulation of size-dependent aerosol deposition in a realistic model of the upper human airways
Frederix, E.M.A.; Kuczaj, Arkadiusz K.; Nordlund, Markus; Belka, M.; Lizal, F.; Elcner, J.; Jicha, M.; Geurts, Bernardus J.
An Eulerian internally mixed aerosol model is used for predictions of deposition inside a realistic cast of the human upper airways. The model, formulated in the multi-species and compressible framework, is solved using the sectional discretization of the droplet size distribution function to…
Linking Time and Space Scales in Distributed Hydrological Modelling - a case study for the VIC model
Melsen, Lieke; Teuling, Adriaan; Torfs, Paul; Zappa, Massimiliano; Mizukami, Naoki; Clark, Martyn; Uijlenhoet, Remko
2015-04-01
One of the famous paradoxes of the Greek philosopher Zeno of Elea (~450 BC) is the one with the arrow: if one shoots an arrow, and cuts its motion into such small time steps that at every step the arrow is standing still, the arrow is motionless, because a concatenation of non-moving parts does not create motion. Nowadays, this reasoning can be refuted easily, because we know that motion is a change in space over time, which thus by definition depends on both time and space. If one disregards time by cutting it into infinitely small steps, motion is also excluded. This example shows that time and space are linked and therefore hard to evaluate separately. As hydrologists we want to understand and predict the motion of water, which means we have to look both in space and in time. In hydrological models we can account for space by using spatially explicit models. With increasing computational power and increased data availability from e.g. satellites, it has become easier to apply models at a higher spatial resolution. Increasing the resolution of hydrological models has also been labelled as one of the 'Grand Challenges' in hydrology by Wood et al. (2011) and Bierkens et al. (2014), who call for global modelling at hyperresolution (~1 km and smaller). A literature survey of 242 peer-reviewed articles in which the Variable Infiltration Capacity (VIC) model was used showed that the grid size at which the model is applied has decreased over the past 17 years: from 0.5-2 degrees when the model was first developed, to 1/8 and even 1/32 degree nowadays. On the other hand, the survey showed that the time step at which the model is calibrated and/or validated has remained the same over those 17 years: mainly daily or monthly. Klemeš (1983) stresses the fact that space and time scales are connected, and that downscaling the spatial scale would therefore also imply downscaling the temporal scale. Is it worth the effort of downscaling your model from 1 degree to 1…
A Taxonomic Reduced-Space Pollen Model for Paleoclimate Reconstruction
Wahl, E. R.; Schoelzel, C.
2010-12-01
Paleoenvironmental reconstruction from fossil pollen often attempts to take advantage of the rich taxonomic diversity in such data. Here, a taxonomically "reduced-space" reconstruction model is explored that would be parsimonious in introducing parameters needing to be estimated within a Bayesian Hierarchical Modeling context. This work involves a refinement of the traditional pollen ratio method. This method is useful when one (or a few) dominant pollen type(s) in a region have a strong positive correlation with a climate variable of interest and another (or a few) dominant pollen type(s) have a strong negative correlation. When, e.g., counts of pollen taxa a and b (r > 0) are combined with counts of pollen types c and d (r < 0) into a ratio, the relationship with climate can be modeled as a binomial logistic generalized linear model (GLM). The GLM can readily model this relationship in the forward form, pollen = g(climate), which is more physically realistic than inverse models often used in paleoclimate reconstruction [climate = f(pollen)]. The specification of the model is: rnum ~ Bin(n, p), where E(r|T) = p = exp(η)/[1 + exp(η)] and η = α + βT; r is the pollen ratio formed as above, rnum is the ratio numerator, n is the ratio denominator (i.e., the sum of pollen counts), the denominator-specific count is (n - rnum), and T is the temperature at each site corresponding to a specific value of r. Ecological and empirical screening identified the model (Spruce+Birch) / (Spruce+Birch+Oak+Hickory) for use in temperate eastern N. America. α and β were estimated using both "traditional" and Bayesian GLM algorithms (in R). Although it includes only four pollen types, the ratio model yields more explained variation (~80%) in the pollen-temperature relationship of the study region than a 64-taxon modern analog technique (MAT). Thus, the new pollen ratio method represents an information-rich, reduced-space data model that can be efficiently employed in a BHM framework. The ratio model can directly reconstruct past temperature by solving the GLM equations.
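The binomial logistic GLM specified above can be fitted by iteratively reweighted least squares. The sketch below is our own illustration with synthetic data, not the paper's R code; it recovers known coefficients from simulated pollen-ratio counts.

```python
import numpy as np

def fit_binomial_glm(T, successes, n, iters=25):
    """Fit logit(p) = alpha + beta*T to binomial counts by IRLS
    (iteratively reweighted least squares)."""
    X = np.column_stack([np.ones_like(T), T])
    beta = np.zeros(2)
    for _ in range(iters):
        eta = X @ beta
        p = 1.0 / (1.0 + np.exp(-eta))
        w = n * p * (1 - p)                            # binomial working weights
        z = eta + (successes / n - p) / (p * (1 - p))  # working response
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, X.T @ (w * z))
    return beta

# Synthetic check: simulate ratio counts from known alpha, beta and recover them.
rng = np.random.default_rng(0)
T = rng.uniform(0.0, 25.0, 400)                  # site temperatures (arbitrary units)
alpha_true, beta_true = -2.0, 0.15
p = 1.0 / (1.0 + np.exp(-(alpha_true + beta_true * T)))
n = np.full(T.shape, 300)                        # total pollen counts per site
successes = rng.binomial(300, p)                 # numerator counts (e.g. Spruce+Birch)
alpha_hat, beta_hat = fit_binomial_glm(T, successes, n)
```

Fitting the forward model pollen = g(climate) this way mirrors the abstract's point that the forward form is more physically realistic than inverse regressions of climate on pollen.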
HYBRID WAYS OF DOING: A MODEL FOR TEACHING PUBLIC SPACE
Directory of Open Access Journals (Sweden)
Gabrielle Bendiner-Viani
2010-07-01
Full Text Available This paper addresses an exploratory practice undertaken by the authors in a co-taught class to hybridize theory, research and practice. This experiment in critical transdisciplinary design education took the form of a "critical studio + practice-based seminar on public space", two interlinked classes co-taught by landscape architect Elliott Maltby and environmental psychologist Gabrielle Bendiner-Viani at Parsons, The New School for Design. This design process was grounded in the political and social context of the contested East River waterfront of New York City and valued both intensive study (using a range of social science and design methods) and a partnership with a local community organization, engaging with the politics, issues and human needs of a complex site. The paper considers how we encouraged interdisciplinary collaboration and dialogue between teachers as well as between liberal arts and design students, and developed strategies to overcome preconceived notions of traditional "studio" and "seminar" work. By exploring the challenges and adjustments made during the semester and the process of teaching this class, this paper addresses how we moved from a model of intertwining theory, research and practice to a hybrid model of multiple ways of doing, a model particularly apt for teaching public space. Through examples developed for and during our course, the paper suggests practical ways of supporting this transdisciplinary hybrid model.
Modeling size effects on fatigue life of a zirconium-based bulk metallic glass under bending
International Nuclear Information System (INIS)
Yuan Tao; Wang Gongyao; Feng Qingming; Liaw, Peter K.; Yokoyama, Yoshihiko; Inoue, Akihisa
2013-01-01
A size effect on the fatigue-life cycles of a Zr50Cu30Al10Ni10 (at.%) bulk metallic glass has been observed in four-point-bending fatigue experiments. Under the same bending-stress condition, large-sized samples tend to exhibit longer fatigue lives than small-sized samples. This size effect on the fatigue life cannot be satisfactorily explained by the flaw-based Weibull theories. Based on the experimental results, this study explores possible approaches to modeling the size effects on the bending-fatigue life of bulk metallic glasses, and proposes two fatigue-life models based on the Weibull distribution. The first model assumes, empirically, log-linear effects of the sample thickness on the Weibull parameters. The second model incorporates the mechanistic knowledge of the fatigue behavior of metallic glasses, and assumes that the shear-band density, instead of the flaw density, has a significant influence on the bending fatigue-life cycles. Promising predictive results provide evidence of the potential validity of the models and their assumptions.
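The first model's assumption, a log-linear effect of thickness on the Weibull scale parameter, can be sketched as follows; all parameter values are illustrative placeholders, not fitted values from the paper.

```python
import numpy as np

def weibull_life_quantile(thickness, q=0.5, a=10.0, b=0.8, shape=1.5):
    """Fatigue-life quantile under a Weibull model whose scale parameter
    depends log-linearly on sample thickness: log(scale) = a + b*log(t).
    Parameter values here are illustrative, not fitted to the paper's data."""
    scale = np.exp(a + b * np.log(thickness))
    # Invert the Weibull CDF, F(N) = 1 - exp(-(N/scale)^shape), at probability q.
    return scale * (-np.log(1.0 - q)) ** (1.0 / shape)

# With b > 0, thicker samples get a larger scale and hence a longer median
# life, matching the trend observed in the bending experiments (and opposite
# to the classical flaw-based weakest-link prediction).
thin, thick = weibull_life_quantile(1.0), weibull_life_quantile(2.0)
```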
Modeling of a once-through helical coil steam generator of a superheated cycle for sizing analysis
Energy Technology Data Exchange (ETDEWEB)
Kim, Yeon Sik; Sim, Yoon Sub; Kim, Eui Kwang [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)
1997-12-31
A thermal sizing code named HSGSA (Helical coil Steam Generator Sizing Analyzer) for a sodium-heated helical coil steam generator has been developed for the KALIMER (Korea Advanced LIquid MEtal Reactor) design. The theoretical modeling of the shell and tube sides is described and the relevant correlations are presented. For assessment of HSGSA, a reference plant design case is compared to the calculated outputs from an HSGSA simulation. 9 refs., 6 figs. (Author)
Improved Nuclear Reactor and Shield Mass Model for Space Applications
Robb, Kevin
2004-01-01
New technologies are being developed to explore the distant reaches of the solar system. Beyond Mars, solar energy is inadequate to power advanced scientific instruments. One technology that can meet the energy requirements is the space nuclear reactor. The nuclear reactor is used as a heat source for which a heat-to-electricity conversion system is needed. Examples of such conversion systems are the Brayton, Rankine, and Stirling cycles. Since launch cost is proportional to the amount of mass to lift, mass is always a concern in designing spacecraft. Estimation of system masses is an important part of determining the feasibility of a design. I worked under Michael Barrett in the Thermal Energy Conversion Branch of the Power & Electric Propulsion Division. An in-house Closed Cycle Engine Program (CCEP) is used for the design and performance analysis of closed-Brayton-cycle energy conversion systems for space applications. This program also calculates the system mass, including the heat source. CCEP uses the subroutine RSMASS, which has been updated to RSMASS-D, to estimate the mass of the reactor. RSMASS was developed in 1986 at Sandia National Laboratories to quickly estimate the mass of multi-megawatt nuclear reactors for space applications. In response to an emphasis on lower-power reactors, RSMASS-D was developed in 1997 and is based on the SP-100 liquid-metal-cooled reactor. The subroutine calculates the mass of reactor components such as the safety systems, instrumentation and control, radiation shield, structure, reflector, and core. The major improvements in RSMASS-D are that it uses higher-fidelity calculations, is easier to use, and automatically optimizes the system mass. RSMASS-D is accurate within 15% of actual data, while RSMASS is only accurate within 50%. My goal this summer was to learn the FORTRAN 77 programming language and update the CCEP program with the RSMASS-D model.
Long-range planning cost model for support of future space missions by the deep space network
Sherif, J. S.; Remer, D. S.; Buchanan, H. R.
1990-01-01
A simple model is suggested for long-range planning cost estimates for Deep Space Network (DSN) support of future space missions. The model estimates total DSN preparation costs and the annual distribution of these costs for long-range budgetary planning. The cost model is based on actual DSN preparation costs from four space missions: Galileo, Voyager (Uranus), Voyager (Neptune), and Magellan. The model was tested against the four projects and gave cost estimates that range from 18 percent above the actual total preparation costs of the projects to 25 percent below. The model was also compared to two other independent projects: Viking and Mariner Jupiter/Saturn (MJS, which later became Voyager). The model gave cost estimates that range from 2 percent (for Viking) to 10 percent (for MJS) below the actual total preparation costs of these missions.
Does litter size variation affect models of terrestrial carnivore extinction risk and management?
Directory of Open Access Journals (Sweden)
Eleanor S Devenish-Nelson
Full Text Available Individual variation in both survival and reproduction has the potential to influence extinction risk. Especially for rare or threatened species, reliable population models should adequately incorporate demographic uncertainty. Here, we focus on an important form of demographic stochasticity: variation in litter sizes. We use terrestrial carnivores as an example taxon, as they are frequently threatened or of economic importance. Since data on intraspecific litter size variation are often sparse, it is unclear what probability distribution should be used to describe the pattern of litter size variation for multiparous carnivores. We used litter size data on 32 terrestrial carnivore species to test the fit of 12 probability distributions. The influence of these distributions on quasi-extinction probabilities and the probability of successful disease control was then examined for three canid species - the island fox Urocyon littoralis, the red fox Vulpes vulpes, and the African wild dog Lycaon pictus. Best fitting probability distributions differed among the carnivores examined. However, the discretised normal distribution provided the best fit for the majority of species, because variation among litter sizes was often small. Importantly, however, the outcomes of demographic models were generally robust to the distribution used. These results provide reassurance for those using demographic modelling for the management of less studied carnivores in which litter size variation is estimated using data from species with similar reproductive attributes.
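A discretised normal distribution for litter sizes, the best-fitting choice for most species above, can be built by integrating unit-width bins of a normal and renormalising; the mean, spread, and range below are illustrative, not values from the paper.

```python
import math

def discretised_normal_pmf(k, mu, sigma, kmax=20):
    """P(litter size = k) from a normal(mu, sigma) discretised onto 1..kmax
    by integrating each unit-width bin and renormalising. Illustrative only."""
    def bin_mass(j):
        # Mass of the normal on the interval (j - 0.5, j + 0.5].
        cdf = lambda x: 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
        return cdf(j + 0.5) - cdf(j - 0.5)
    total = sum(bin_mass(j) for j in range(1, kmax + 1))
    return bin_mass(k) / total

# A fox-like litter distribution centred near 5 cubs.
pmf = [discretised_normal_pmf(k, mu=5.0, sigma=1.5) for k in range(1, 13)]
```

A probability mass function like this can feed directly into stochastic projection or quasi-extinction simulations of the kind the study compares.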
Cognition in Space Workshop. 1; Metrics and Models
Woolford, Barbara; Fielder, Edna
2005-01-01
"Cognition in Space Workshop I: Metrics and Models" was the first in a series of workshops sponsored by NASA to develop an integrated research and development plan supporting human cognition in space exploration. The workshop was held in Chandler, Arizona, October 25-27, 2004. The participants represented academia, government agencies, and medical centers. This workshop addressed the following goal of the NASA Human System Integration Program for Exploration: to develop a program to manage risks due to human performance and human error, specifically ones tied to cognition. Risks range from catastrophic error to degradation of efficiency and failure to accomplish mission goals. Cognition itself includes memory, decision making, initiation of motor responses, sensation, and perception. Four subgoals were also defined at the workshop as follows: (1) NASA needs to develop a human-centered design process that incorporates standards for human cognition, human performance, and assessment of human interfaces; (2) NASA needs to identify and assess factors that increase risks associated with cognition; (3) NASA needs to predict risks associated with cognition; and (4) NASA needs to mitigate risk, both prior to actual missions and in real time. This report develops the material relating to these four subgoals.
Energy Technology Data Exchange (ETDEWEB)
Ren, Jingli, E-mail: renjl@zzu.edu.cn, E-mail: g.wang@shu.edu.cn; Chen, Cun [School of Mathematics and Statistics, Zhengzhou University, Zhengzhou 450001 (China); Wang, Gang, E-mail: renjl@zzu.edu.cn, E-mail: g.wang@shu.edu.cn [Laboratory for Microstructures, Shanghai University, Shanghai 200444 (China); Cheung, Wing-Sum [Department of Mathematics, The University of HongKong, HongKong (China); Sun, Baoan; Mattern, Norbert [IFW-dresden, Institute for Complex Materials, P.O. Box 27 01 16, D-01171 Dresden (Germany); Siegmund, Stefan [Department of Mathematics, TU Dresden, D-01062 Dresden (Germany); Eckert, Jürgen [IFW-dresden, Institute for Complex Materials, P.O. Box 27 01 16, D-01171 Dresden (Germany); Institute of Materials Science, TU Dresden, D-01062 Dresden (Germany)
2014-07-21
This paper presents a spatiotemporal dynamic model based on the interaction between multiple shear bands in the plastic flow of metallic glasses during compressive deformation. Sliding events of various sizes burst out during plastic deformation as shear branches of different scales were generated; microscopic creep events and delocalized sliding events were analyzed with the established model. The paper discusses the spatially uniform solutions and the traveling wave solution. The phase space of the spatially uniform system reflects the chaotic state of the system at lower strain rates. Moreover, numerical simulation showed that microscopic creep events are manifested at lower strain rates, whereas delocalized sliding events are manifested at higher strain rates.
Modeling a space-variant cortical representation for apparent motion.
Wurbs, Jeremy; Mingolla, Ennio; Yazdanbakhsh, Arash
2013-08-06
Receptive field sizes of neurons in early primate visual areas increase with eccentricity, as does temporal processing speed. The fovea is evidently specialized for slow, fine movements while the periphery is suited for fast, coarse movements. In either the fovea or periphery discrete flashes can produce motion percepts. Grossberg and Rudd (1989) used traveling Gaussian activity profiles to model long-range apparent motion percepts. We propose a neural model constrained by physiological data to explain how signals from retinal ganglion cells to V1 affect the perception of motion as a function of eccentricity. Our model incorporates cortical magnification, receptive field overlap and scatter, and spatial and temporal response characteristics of retinal ganglion cells for cortical processing of motion. Consistent with the finding of Baker and Braddick (1985), in our model the maximum flash distance that is perceived as an apparent motion (Dmax) increases linearly as a function of eccentricity. Baker and Braddick (1985) made qualitative predictions about the functional significance of both stimulus and visual system parameters that constrain motion perception, such as an increase in the range of detectable motions as a function of eccentricity and the likely role of higher visual processes in determining Dmax. We generate corresponding quantitative predictions for those functional dependencies for individual aspects of motion processing. Simulation results indicate that the early visual pathway can explain the qualitative linear increase of Dmax data without reliance on extrastriate areas, but that those higher visual areas may serve as a modulatory influence on the exact Dmax increase.
Directory of Open Access Journals (Sweden)
Julies Hariani Sugiaman
2011-03-01
Full Text Available Model study is one of the standard orthodontic components important for diagnosis and treatment planning, but in some patients with a high gag reflex it is difficult to obtain such study models. The existence of a new device able to show the condition of the patient's mouth in three space areas (axial, sagittal, and coronal) is expected to be an alternative when a study model is difficult to obtain. The purpose of this study is to find out whether there are any differences in the mesiodistal sizes of the canine and the first and second premolars obtained from CBCT imaging compared with Moyers analysis on the study models. The method of the research is comparative descriptive. Measurements were made on 10 CBCT imaging results and 10 study models. The mesiodistal sizes from the CBCT images were measured with the available computer program, the mesiodistal sizes on the study models were measured with a sliding caliper, and the sizes of the canine and the first and second premolar teeth from CBCT imaging were then compared with the results of the Moyers analysis on the study models. The t-test was used to find out whether there is a difference in tooth size between the CBCT imaging and the study models. Significance was determined by comparing the computed t-value against the tabulated critical value.
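The statistical step described above, comparing two sets of tooth-width measurements with a t-test, can be sketched as follows. The mesiodistal widths below are hypothetical values invented purely to illustrate the method; they are not the study's data.

```python
import math

def welch_t_test(a, b):
    """Welch's unpaired t-test; returns (t statistic, degrees of freedom)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical mesiodistal widths (mm) of canine-premolar segments:
cbct   = [21.4, 21.9, 22.1, 21.7, 22.0, 21.6, 21.8, 22.2, 21.5, 21.9]
moyers = [21.6, 22.0, 22.3, 21.8, 22.1, 21.7, 21.9, 22.4, 21.6, 22.0]
t, df = welch_t_test(cbct, moyers)
# |t| below the critical value (~2.1 at alpha = 0.05, df ~ 18)
# would mean no significant difference between the two methods
```

With these toy numbers the difference is not significant, which is the kind of conclusion the study's comparison is designed to test.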
International Nuclear Information System (INIS)
Davis, J. E.; Eddy, M. J.; Sutton, T. M.; Altomari, T. J.
2007-01-01
Solid modeling computer software systems provide for the design of three-dimensional solid models used in the design and analysis of physical components. The current state-of-the-art in solid modeling representation uses a boundary representation format in which geometry and topology are used to form three-dimensional boundaries of the solid. The geometry representation used in these systems is cubic B-spline curves and surfaces - a network of cubic B-spline functions in three-dimensional Cartesian coordinate space. Many Monte Carlo codes, however, use a geometry representation in which geometry units are specified by intersections and unions of half-spaces. This paper describes an algorithm for converting from a boundary representation to a half-space representation. (authors)
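The half-space representation used by Monte Carlo codes can be sketched as predicates combined by intersections and unions. This is a minimal illustration of the target representation, not the authors' boundary-to-half-space conversion algorithm.

```python
# A half-space is the set of points p satisfying dot(n, p) <= d.
def halfspace(n, d):
    return lambda p: sum(ni * pi for ni, pi in zip(n, p)) <= d

def intersection(*regions):
    return lambda p: all(r(p) for r in regions)

def union(*regions):
    return lambda p: any(r(p) for r in regions)

# Unit cube as the intersection of six half-spaces:
cube = intersection(
    halfspace((1, 0, 0), 1), halfspace((-1, 0, 0), 0),
    halfspace((0, 1, 0), 1), halfspace((0, -1, 0), 0),
    halfspace((0, 0, 1), 1), halfspace((0, 0, -1), 0),
)
print(cube((0.5, 0.5, 0.5)))  # True: inside
print(cube((1.5, 0.5, 0.5)))  # False: outside
```

Converting a B-spline boundary model amounts to producing such intersections and unions whose point-membership tests agree with the original solid.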
Contact behavior modelling and its size effect on proton exchange membrane fuel cell
Qiu, Diankai; Peng, Linfa; Yi, Peiyun; Lai, Xinmin; Janßen, Holger; Lehnert, Werner
2017-10-01
Contact behavior between the gas diffusion layer (GDL) and bipolar plate (BPP) is of significant importance for proton exchange membrane fuel cells. Most current studies on contact behavior rely on experiments and finite element modelling and focus on fuel cells with graphite BPPs, which leads to high costs and heavy computational requirements. The objective of this work is to build a more effective analytical method for contact behavior in fuel cells and to investigate the size effect resulting from configuration alteration of the channel and rib (channel/rib). Firstly, a mathematical description of channel/rib geometry is outlined in accordance with the fabrication of metallic BPPs. Based on the interface deformation characteristics and the Winkler surface model, the contact pressure between BPP and GDL is then calculated to predict contact resistance and GDL porosity as evaluative parameters of contact behavior. Experiments on BPP fabrication and contact resistance measurement are conducted to validate the model. The measured results demonstrate an obvious dependence on channel/rib size. The feasibility of applying the model to graphite fuel cells is also discussed. Finally, a size factor is proposed for evaluating the rule of the size effect. Contact resistance and porosity increase significantly for higher size factors, for which the channel/rib width decreases.
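The Winkler surface model mentioned above treats the GDL as a bed of independent springs, so local contact pressure is simply proportional to local indentation. The sketch below uses an invented spring stiffness and indentation profile to illustrate the idea; it is not the paper's calibrated model.

```python
def winkler_pressure(indentation_um, k_per_um=0.05):
    """Winkler foundation: each point acts as an independent spring,
    p = k * w, with zero pressure where the surfaces separate (w <= 0)."""
    return [max(0.0, k_per_um * w) for w in indentation_um]

# Hypothetical indentation profile (um) across one rib/channel unit cell:
# positive under the rib, zero over the channel opening.
w = [4.0, 5.0, 5.0, 4.0, 0.0, 0.0, 0.0, 0.0]
p = winkler_pressure(w)  # assumed units: MPa, with k = 0.05 MPa/um
rib_fraction = sum(1 for wi in w if wi > 0) / len(w)
mean_pressure = sum(p) / len(p)
```

Changing the channel/rib geometry changes the indentation profile, which is how a size effect on contact pressure (and hence contact resistance and porosity) enters such a model.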
Review of the Space Mapping Approach to Engineering Optimization and Modeling
DEFF Research Database (Denmark)
Bakr, M. H.; Bandler, J. W.; Madsen, Kaj
2000-01-01
We review the Space Mapping (SM) concept and its applications in engineering optimization and modeling. The aim of SM is to avoid computationally expensive calculations encountered in simulating an engineering system. The existence of less accurate but fast physically-based models is exploited. S......-based Modeling (SMM). These include Space Derivative Mapping (SDM), Generalized Space Mapping (GSM) and Space Mapping-based Neuromodeling (SMN). Finally, we address open points for research and future development....
Directory of Open Access Journals (Sweden)
Zhi Yan
2017-01-01
Full Text Available Piezoelectric nanomaterials (PNs) are attractive for applications including sensing, actuating and energy harvesting, among others, in nano-electro-mechanical systems (NEMS) because of their excellent electromechanical coupling and mechanical and physical properties. However, the properties of PNs do not coincide with those of their bulk counterparts and depend on the particular size. A great deal of effort has been devoted to studying the size-dependent properties of PNs using experimental characterization, atomistic simulation and continuum mechanics modeling that takes into account the scale features of the nanomaterials. This paper reviews recent progress and achievements in research on continuum mechanics modeling of the size-dependent mechanical and physical properties of PNs. We start from the fundamentals of the modified continuum mechanics models for PNs, including the theories of surface piezoelectricity, flexoelectricity and non-local piezoelectricity, with the introduction of the modified piezoelectric beam and plate models, particularly for nanostructured piezoelectric materials with certain configurations. We then review investigations of the size-dependent properties of PNs using the modified continuum mechanics models, covering electromechanical coupling, bending, vibration, buckling, wave propagation and dynamic characteristics. Finally, analytical modeling and analysis of nanoscale actuators and energy harvesters based on piezoelectric nanostructures are presented.
Exploring the Model Design Space for Battery Health Management
Saha, Bhaskar; Quach, Cuong Chi; Goebel, Kai Frank
2011-01-01
Battery Health Management (BHM) is a core enabling technology for the success and widespread adoption of the emerging electric vehicles of today. Although battery chemistries have been studied in detail in literature, an accurate run-time battery life prediction algorithm has eluded us. Current reliability-based techniques are insufficient to manage the use of such batteries when they are an active power source with frequently varying loads in uncertain environments. The amount of usable charge of a battery for a given discharge profile is not only dependent on the starting state-of-charge (SOC), but also other factors like battery health and the discharge or load profile imposed. This paper presents a Particle Filter (PF) based BHM framework with plug-and-play modules for battery models and uncertainty management. The batteries are modeled at three different levels of granularity with associated uncertainty distributions, encoding the basic electrochemical processes of a Lithium-polymer battery. The effects of different choices in the model design space are explored in the context of prediction performance in an electric unmanned aerial vehicle (UAV) application with emulated flight profiles.
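A minimal Particle Filter of the kind named above can be sketched as predict, weight, and resample steps over a state-of-charge (SOC) estimate. The linear open-circuit-voltage model, Coulomb-counting transition, and noise levels below are hypothetical placeholders, not the paper's electrochemical battery models.

```python
import math
import random

def particle_filter_soc(measurements, current, dt, capacity=2.0,
                        n=1000, process_noise=0.005, meas_noise=0.02):
    """Toy PF for state of charge: Coulomb-counting transition model and
    an assumed linear OCV(soc) = 3.0 + 1.2 * soc measurement model."""
    particles = [random.uniform(0.8, 1.0) for _ in range(n)]
    estimates = []
    for z, i in zip(measurements, current):
        # Predict: discharge by the drawn current, plus process noise
        particles = [max(0.0, min(1.0, p - i * dt / (capacity * 3600)
                                   + random.gauss(0, process_noise)))
                     for p in particles]
        # Weight: likelihood of the observed terminal voltage z
        weights = [math.exp(-((z - (3.0 + 1.2 * p)) ** 2)
                            / (2 * meas_noise ** 2)) for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # Resample (multinomial) to concentrate on likely states
        particles = random.choices(particles, weights=weights, k=n)
    return estimates

# Example: constant voltage readings consistent with soc ~ 0.9, no load:
# est = particle_filter_soc([4.08] * 10, [0.0] * 10, dt=1.0)
```

The plug-and-play idea in the paper corresponds to swapping the transition and measurement models here for ones of different granularity while keeping the PF machinery fixed.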
Quantitative Risk Modeling of Fire on the International Space Station
Castillo, Theresa; Haught, Megan
2014-01-01
The International Space Station (ISS) Program has worked to prevent fire events and to mitigate their impacts should they occur. Hardware is designed to reduce sources of ignition, oxygen systems are designed to control leaking, flammable materials are prevented from flying to ISS whenever possible, the crew is trained in fire response, and fire response equipment improvements are sought out and funded. Fire prevention and mitigation are a top ISS Program priority - however, programmatic resources are limited; thus, risk trades are made to ensure an adequate level of safety is maintained onboard the ISS. In support of these risk trades, the ISS Probabilistic Risk Assessment (PRA) team has modeled the likelihood of fire occurring in the ISS pressurized cabin, a phenomenological event that has never before been probabilistically modeled in a microgravity environment. This paper will discuss the genesis of the ISS PRA fire model, its enhancement in collaboration with fire experts, and the results which have informed ISS programmatic decisions and will continue to be used throughout the life of the program.
Application of Interval Predictor Models to Space Radiation Shielding
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.; Norman, Ryan B.; Blattnig, Steve R.
2016-01-01
This paper develops techniques for predicting the uncertainty range of an output variable given input-output data. These models are called Interval Predictor Models (IPM) because they yield an interval-valued function of the input. This paper develops IPMs having a radial basis structure. This structure enables the formal description of (i) the uncertainty in the model's parameters, (ii) the predicted output interval, and (iii) the probability that a future observation would fall in such an interval. In contrast to other metamodeling techniques, this probabilistic certificate of correctness does not require making any assumptions on the structure of the mechanism from which data are drawn. Optimization-based strategies for calculating IPMs having minimal spread while containing all the data are developed. Constraints for bounding the minimum interval spread over the continuum of inputs, regulating the IPM's variation/oscillation, and centering its spread about a target point are used to prevent data overfitting. Furthermore, we develop an approach for using expert opinion during extrapolation. This metamodeling technique is illustrated using a radiation shielding application for space exploration. In this application, we use IPMs to describe the error incurred in predicting the flux of particles resulting from the interaction between a high-energy incident beam and a target.
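The core IPM idea, an interval-valued prediction with minimal spread that contains all observed data, can be illustrated with a drastically simplified stand-in: a least-squares line widened by the smallest constant band enclosing every point. The paper's radial basis structure and probabilistic certificates are omitted here; this is only a sketch of the containment-with-minimal-spread principle.

```python
def interval_predictor(x, y):
    """Minimal IPM sketch: least-squares line plus the smallest constant
    half-width that encloses every observation (a crude stand-in for the
    radial-basis IPMs of the paper)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    half_width = max(abs(yi - (a + b * xi)) for xi, yi in zip(x, y))
    return lambda q: (a + b * q - half_width, a + b * q + half_width)

x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.1, 1.2, 1.9, 3.1, 3.9]
ipm = interval_predictor(x, y)
lo, hi = ipm(2.0)
# by construction, every training point lies inside its predicted interval
```

In the full method the half-width is itself a function of the input, and its spread is minimized by an optimization with the anti-overfitting constraints described above.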
Scientists as role models in space science outreach
Alexander, D.
The direct participation of scientists significantly enhances the impact of any E/PO effort. This is particularly true when the scientists come from minority or traditionally under-represented groups and, consequently, become role models for a large number of students while presenting positive counter-examples to the usual stereotypes. In this paper I will discuss the impact of scientists as role models through the successful implementation of a set of space physics games and activities, called Solar Week. Targeted at middle-school girls, the key feature of Solar Week is the "Ask a Scientist" section enabling direct interaction between participating students and volunteer scientists. All of the contributing scientists are women, serving as experts in their field and providing role models to whom the students can relate. Solar Week has completed four sessions with a total of some 140 educators and 12,000+ students in over 28 states and 9 countries. A major success of the Solar Week program has been the ability of the students to learn more about the scientists as people, through online biographies, and to discuss a variety of topics ranging from science to careers and common hobbies.
On the analytical modeling of the nonlinear vibrations of pretensioned space structures
Housner, J. M.; Belvin, W. K.
1983-01-01
Pretensioned structures are receiving considerable attention as candidate large space structures. A typical example is a hoop-column antenna. The large number of preloaded members requires efficient analytical methods for concept validation and design. Validation through analysis is especially important since ground testing may be limited by gravity effects and structural size. The objective of the present investigation is to examine the analytical modeling of pretensioned members undergoing nonlinear vibrations. Two approximate nonlinear analyses are developed to model general structural arrangements that include beam-columns and pretensioned cables attached to a common nucleus, such as may occur at a joint of a pretensioned structure. Attention is given to structures undergoing nonlinear steady-state oscillations due to sinusoidal excitation forces. Three analyses (linear, quasi-linear, and nonlinear) are conducted and applied to study the response of a relatively simple cable-stiffened structure.
NASA 3D Models: James Webb Space Telescope
National Aeronautics and Space Administration — The James Webb Space Telescope (JWST) will be a large infrared telescope with a 6.5-meter primary mirror. The project is working to a 2018 launch date. The JWST will...
Environmental Disturbance Modeling for Large Inflatable Space Structures
National Research Council Canada - National Science Library
Davis, Donald
2001-01-01
Tightening space budgets and stagnating spacelift capabilities are driving the Air Force and other space agencies to focus on inflatable technology as a reliable, inexpensive means of deploying large structures in orbit...
Observational Model for Precision Astrometry with the Space Interferometry Mission
National Research Council Canada - National Science Library
Turyshev, Slava G; Milman, Mark H
2000-01-01
The Space Interferometry Mission (SIM) is a space-based 10-m baseline Michelson optical interferometer operating in the visible waveband that is designed to achieve astrometric accuracy in the single digits of the microarcsecond domain...
Modeling of Dilute Polymer Solutions in Confined Space
DEFF Research Database (Denmark)
Wang, Yanwei
2009-01-01
This thesis deals with modeling of a polymer chain subject to spatial confinement. The properties of confined macromolecules are both of fundamental interest in polymer physics and of practical importance in a variety of applications including chromatographic separation of polymers, and the use...... of polymers to control the stability of colloidal suspensions. Furthermore, recent advances in micro- and nano-structuring techniques have led to the production of fluidic channels of critical dimension approaching the molecular scales, in which areas understanding the effects of spatial restrictions...... to macromolecules is critical to the design and application of those devices. Our primary interest is to provide an understanding of the separation principle of polymers in size exclusion chromatography (SEC), where under ideal conditions the polymer concentration is low, and detailed enthalpic interactions
An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies
DEFF Research Database (Denmark)
Thompson, Wesley K.; Wang, Yunpeng; Schork, Andrew J.
2015-01-01
-wide association study (GWAS) test statistics. Test statistics corresponding to null associations are modeled as random draws from a normal distribution with zero mean; test statistics corresponding to non-null associations are also modeled as normal with zero mean, but with larger variance. The model is fit via...... analytically and in simulations. We apply this approach to meta-analysis test statistics from two large GWAS, one for Crohn’s disease (CD) and the other for schizophrenia (SZ). A scale mixture of two normals distribution provides an excellent fit to the SZ nonparametric replication effect size estimates. While...... minimizing discrepancies between the parametric mixture model and resampling-based nonparametric estimates of replication effect sizes and variances. We describe in detail the implications of this model for estimation of the non-null proportion, the probability of replication in de novo samples, the local...
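A scale mixture of two zero-mean normals, as described above, can be fit with a short EM loop. This is a generic sketch on synthetic z-scores, not the paper's resampling-based, nonparametric-matching fitting procedure.

```python
import math
import random

def fit_two_normal_mixture(z, iters=200):
    """EM for a scale mixture of two zero-mean normals:
    z ~ (1 - pi1) * N(0, s0^2) + pi1 * N(0, s1^2)  (null vs non-null)."""
    pi1, s0, s1 = 0.1, 1.0, 3.0
    for _ in range(iters):
        # E-step: posterior probability that each statistic is non-null
        r = []
        for zi in z:
            f0 = (1 - pi1) * math.exp(-zi * zi / (2 * s0 * s0)) / s0
            f1 = pi1 * math.exp(-zi * zi / (2 * s1 * s1)) / s1
            r.append(f1 / (f0 + f1))
        # M-step: update the mixing weight and both variances
        pi1 = sum(r) / len(z)
        s0 = math.sqrt(sum((1 - ri) * zi * zi for ri, zi in zip(r, z))
                       / sum(1 - ri for ri in r))
        s1 = math.sqrt(sum(ri * zi * zi for ri, zi in zip(r, z)) / sum(r))
    return pi1, s0, s1

# Synthetic GWAS-like z-scores: 90% null, 10% non-null with larger variance
random.seed(1)
z = [random.gauss(0, 1) for _ in range(900)] + \
    [random.gauss(0, 3) for _ in range(100)]
pi1, s0, s1 = fit_two_normal_mixture(z)
```

The fitted `pi1` plays the role of the non-null proportion, and the posterior weights `r` are the per-SNP replication probabilities the abstract alludes to.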
International Nuclear Information System (INIS)
Hyer, Daniel E; Hill, Patrick M; Wang, Dongxu; Smith, Blake R; Flynn, Ryan T
2014-01-01
The purpose of this work was to investigate the reduction in lateral dose penumbra that can be achieved when using a dynamic collimation system (DCS) for spot scanning proton therapy as a function of two beam parameters: spot size and spot spacing. This is an important investigation as both values impact the achievable dose distribution and a wide range of values currently exist depending on delivery hardware. Treatment plans were created both with and without the DCS for in-air spot sizes (σ_air) of 3, 5, 7, and 9 mm as well as spot spacing intervals of 2, 4, 6 and 8 mm. Compared to un-collimated treatment plans, the plans created with the DCS yielded a reduction in the mean dose to normal tissue surrounding the target of 26.2–40.6% for spot sizes of 3–9 mm, respectively. Increasing the spot spacing resulted in a decrease in the time penalty associated with using the DCS that was approximately proportional to the reduction in the number of rows in the raster delivery pattern. We conclude that dose distributions achievable when using the DCS are comparable to those only attainable with much smaller initial spot sizes, suggesting that the goal of improving high dose conformity may be achieved by either utilizing a DCS or by improving beam line optics. (note)
Size effect on deformation twinning in face-centred cubic single crystals: Experiments and modelling
International Nuclear Information System (INIS)
Liang, Z.Y.; De Hosson, J.T.M.; Huang, M.X.
2017-01-01
In addition to slip by dislocation glide, deformation twinning in small-sized metallic crystals also exhibits a size effect, namely the twinning stress increases with decreasing sample size. In order to understand the underpinning mechanisms responsible for this effect, systematic experiments were carried out on small-sized single-crystalline pillars of a twinning-induced plasticity steel with a face-centred cubic structure. The flow stress increases considerably with decreasing pillar diameter from 3 to 0.5 μm, demonstrating a substantial size effect with a power exponent of 0.43. Detailed microstructural characterization reveals that the plastic deformation of the present pillars is dominated by twinning, primarily via twin growth, indicating that the size effect should be related to deformation twinning rather than slip by dislocation glide. Subsequent modelling work indicates that twinning can be accomplished by the dissociation of ion-radiation-induced vacancy Frank loops in the damaged subsurface layer of the pillars, and the size effect is attributed to the ion-radiation-induced compressive stress in the subsurface layer, which decreases with pillar diameter.
Boros, Daniel; Eriksson, Claes
2014-01-01
This thesis investigates whether the estimation of the cost of equity (or the expected return) in the Swedish market should incorporate an adjustment for a company's size. This is what is commonly known as the size effect, first presented by Banz (1980), which has later been a part of models for estimating the cost of equity, such as Fama & French's three-factor model (1992). The Fama & French model was developed based on empirical research. Since the model was developed, the research on the...
Optimizing Working Space in Laparoscopy: Studies in a porcine model
J. Vlot (John)
2014-01-01
Abstract: Adequate working space is essential for safe and effective laparoscopic surgery. However, the factors that determine working space have not been sufficiently studied. Working space can be very limited, especially in children. A literature review was undertaken to
A Simple Size Effect Model for Tension Perpendicular to the Grain
DEFF Research Database (Denmark)
Pedersen, M. U.; Clorius, Christian Odin; Damkilde, Lars
2003-01-01
The strength in tension perpendicular to the grain is known to decrease with an increase in the stressed volume. Usually this size effect is explained on a stochastic basis, that is, an explanation relying on the increased probability of encountering a strength-reducing flaw when the volume...... of the material under stress is increased. This paper presents an experimental investigation on specimens with a well-defined structural orientation of the material. The experiments exhibit a large size effect and the nature of the failures encountered suggests that the size effect can be explained...... on a deterministic basis. Arguments for such a simple deterministic explanation of the size effect are found in finite element modelling, using the orthotropic stiffness characteristics in the transverse plane of wood....
Two-fluid model with droplet size distribution for condensing steam flows
International Nuclear Information System (INIS)
Wróblewski, Włodzimierz; Dykas, Sławomir
2016-01-01
The process of energy conversion in the low pressure part of steam turbines may be improved using new and more accurate numerical models. The paper presents a description of a model intended for the condensing steam flow modelling. The model uses a standard condensation model. A physical and a numerical model of the mono- and polydispersed wet-steam flow are presented. The proposed two-fluid model solves separate flow governing equations for the compressible, inviscid vapour and liquid phase. The method of moments with a prescribed function is used for the reconstruction of the water droplet size distribution. The described model is presented for the liquid phase evolution in the flow through the de Laval nozzle. - Highlights: • Computational Fluid Dynamics. • Steam condensation in transonic flows through the Laval nozzles. • In-house CFD code – two-phase flow, two-fluid monodispersed and polydispersed model.
How does language model size affect speech recognition accuracy for the Turkish language?
Directory of Open Access Journals (Sweden)
Behnam ASEFİSARAY
2016-05-01
Full Text Available In this paper we aimed at investigating the effect of Language Model (LM) size on Speech Recognition (SR) accuracy. We also provide details of our approach for obtaining the LM for Turkish. Since the LM is obtained by statistical processing of raw text, we expect that increasing the size of the data available for training the LM will improve SR accuracy. Since this study is based on recognition of Turkish, which is a highly agglutinative language, it is important to find out the appropriate size of the training data. The minimum required data size is expected to be much higher than the data needed to train a language model for a language with a low level of agglutination such as English. In the experiments we also tried to adjust the Language Model Weight (LMW) and Active Token Count (ATC) parameters of the LM, as these are expected to differ for a highly agglutinative language. We showed that increasing the training data size to an appropriate level improved recognition accuracy; on the other hand, changes to LMW and ATC did not have a positive effect on Turkish speech recognition accuracy.
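The qualitative claim that more LM training text yields a better model can be miniaturized with an add-one-smoothed unigram model whose held-out perplexity drops as the training set grows. The corpus below is an invented toy, far from a real Turkish LM, and real systems would use n-grams or neural LMs.

```python
import math
from collections import Counter

def unigram_perplexity(train_tokens, test_tokens, vocab_size):
    """Add-one-smoothed unigram LM; lower perplexity = better model."""
    counts = Counter(train_tokens)
    n = len(train_tokens)
    log_prob = sum(math.log((counts[t] + 1) / (n + vocab_size))
                   for t in test_tokens)
    return math.exp(-log_prob / len(test_tokens))

# Toy "corpus" of Turkish words with a fixed word distribution:
text = ("ev kitap okul araba deniz kitap ev okul kitap deniz " * 40).split()
test = "kitap kitap kitap ev ev okul okul deniz deniz araba".split()
vocab = len(set(text))

# Perplexity with 20 training tokens vs. 400: more LM training data helps.
small = unigram_perplexity(text[:20], test, vocab)
large = unigram_perplexity(text[:400], test, vocab)
```

For an agglutinative language the vocabulary grows much faster with corpus size, so the training data needed before perplexity stabilizes is correspondingly larger, which is the paper's central point.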
A Reparametrization Approach for Dynamic Space-Time Models
Lee, Hyeyoung; Ghosh, Sujit K.
2008-01-01
Researchers in diverse areas such as environmental and health sciences are increasingly working with data collected across space and time. The space-time processes that are generally used in practice are often complicated in the sense that the auto-dependence structure across space and time is non-trivial, often non-separable and non-stationary in space and time. Moreover, the dimension of such data sets across both space and time can be very large leading to computational difficulties due to...
Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam’s Window
Onorante, Luca; Raftery, Adrian E.
2015-01-01
Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam’s window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods. PMID:26917859
Directory of Open Access Journals (Sweden)
Kálal Zbyněk
2014-09-01
Full Text Available The main topic of this study is the mathematical modelling of bubble size distributions in an aerated stirred tank using the population balance method. The air-water system consisted of a fully baffled vessel with a diameter of 0.29 m, equipped with a six-bladed Rushton turbine. The secondary phase was introduced through a ring sparger situated under the impeller. Calculations were performed with the CFD software CFX 14.5. The turbulent quantities were predicted using the standard k-ε turbulence model. Coalescence and breakup of bubbles were modelled using the MUSIG method with 24 bubble size groups. For bubble size distribution modelling, the breakup model by Luo and Svendsen (1996) has typically been used in the past. However, this breakup model was thoroughly reviewed and its practical applicability was questioned. Therefore, three different breakup models by Martínez-Bazán et al. (1999a, b), Lehr et al. (2002) and Alopaeus et al. (2002) were implemented in the CFD solver and applied to the system. The resulting Sauter mean diameters and local bubble size distributions were compared with experimental data.
Nonequilibrium dynamics of spin-boson models from phase-space methods
Piñeiro Orioli, Asier; Safavi-Naini, Arghavan; Wall, Michael L.; Rey, Ana Maria
2017-09-01
An accurate description of the nonequilibrium dynamics of systems with coupled spin and bosonic degrees of freedom remains theoretically challenging, especially for large system sizes and in higher than one dimension. Phase-space methods such as the truncated Wigner approximation (TWA) have the advantage of being easily scalable and applicable to arbitrary dimensions. In this work we adapt the TWA to generic spin-boson models by making use of recently developed algorithms for discrete phase spaces [J. Schachenmayer, A. Pikovski, and A. M. Rey, Phys. Rev. X 5, 011022 (2015), 10.1103/PhysRevX.5.011022]. Furthermore we go beyond the standard TWA approximation by applying a scheme based on the Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) hierarchy of equations to our coupled spin-boson model. This allows us, in principle, to study how systematically adding higher-order corrections improves the convergence of the method. To test various levels of approximation we study an exactly solvable spin-boson model, which is particularly relevant for trapped-ion arrays. Using TWA and its BBGKY extension we accurately reproduce the time evolution of a number of one- and two-point correlation functions in several dimensions and for an arbitrary number of bosonic modes.
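For a single spin-1/2 precessing in a field, the discrete phase-space sampling cited above reduces to averaging classical trajectories launched from a handful of discrete initial points, and it reproduces the exact dynamics. This minimal sketch uses arbitrary units and a field along z; it is an illustration of the dTWA idea, not the paper's coupled spin-boson implementation.

```python
import math

def dtwa_sx(t, b_field=1.0):
    """Discrete truncated Wigner for one spin-1/2 prepared along +x and
    precessing in a field along z: average the classical precession
    trajectories started from the four discrete points (sy, sz = +/-1/2)."""
    total = 0.0
    for sy0 in (0.5, -0.5):
        for sz0 in (0.5, -0.5):
            sx0 = 0.5
            # Classical equation of motion: ds/dt = B x s with B along z,
            # so sx(t) = sx0 cos(Bt) - sy0 sin(Bt)
            total += sx0 * math.cos(b_field * t) - sy0 * math.sin(b_field * t)
    return total / 4.0

# For this single-spin problem dTWA is exact: <Sx(t)> = cos(B t) / 2
```

In interacting systems the classical trajectories couple and dTWA becomes approximate, which is where the BBGKY corrections discussed in the abstract come in.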
The effect of food portion sizes on obesity prevention using system dynamics modelling
Abidin, Norhaslinda Zainal; Zulkepli, Jafri Hj; Zaibidi, Nerda Zura
2014-09-01
The rise in income and population growth have increased the demand for food and induced changes in food habits, food purchasing and consumption patterns in Malaysia. With this transition, one of the plausible causes of weight gain and obesity is the frequent consumption of food outside the home, which is synonymous with bigger portion sizes. Therefore, the aim of this paper is to develop a system dynamics model to analyse the effect of reducing food portion size on weight and obesity prevention. This study combines different strands of knowledge comprising nutrition, physical activity and body metabolism. These elements are synthesized into a system dynamics model called SIMULObese. Findings from this study suggest that changes in eating behavior should not emphasize only limiting the consumed food portion size. The efforts should also consider other eating events, such as controlling meal frequency and limiting the intake of high-calorie food, in developing guidelines to prevent obesity.
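A one-stock system dynamics sketch of the portion-size question: body weight is the stock and the daily energy imbalance is the flow. The expenditure coefficient and energy density below are rough textbook-style values chosen for illustration, not parameters of SIMULObese.

```python
def simulate_weight(days, intake_kcal, start_kg=80.0, energy_density=7700.0):
    """Toy stock-and-flow model: weight integrates the daily energy
    imbalance; expenditure is assumed ~31 kcal per kg of body weight
    per day, and 7700 kcal is taken as ~1 kg of body tissue."""
    w = start_kg
    history = [w]
    for _ in range(days):
        expenditure = 31.0 * w
        w += (intake_kcal - expenditure) / energy_density
        history.append(w)
    return history

# Full portions vs. 20% smaller portions over one year:
full    = simulate_weight(365, intake_kcal=2600.0)
reduced = simulate_weight(365, intake_kcal=2080.0)
# the reduced-portion trajectory ends lighter than the full-portion one
```

Even this toy model shows the feedback the paper exploits: expenditure rises with weight, so each intake level drives weight toward its own equilibrium rather than changing it without bound.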
Development of the ECLSS Sizing Analysis Tool and ARS Mass Balance Model Using Microsoft Excel
McGlothlin, E. P.; Yeh, H. Y.; Lin, C. H.
1999-01-01
The development of a Microsoft Excel-compatible Environmental Control and Life Support System (ECLSS) sizing analysis "tool" for conceptual design of Mars human exploration missions makes it possible for a user to choose a certain technology in the corresponding subsystem. This tool estimates the mass, volume, and power requirements of every technology in a subsystem and the system as a whole. Furthermore, to verify that a design sized by the ECLSS Sizing Tool meets the mission requirements and integrates properly, mass balance models that solve for component throughputs of such ECLSS systems as the Water Recovery System (WRS) and Air Revitalization System (ARS) must be developed. The ARS Mass Balance Model will be discussed in this paper.
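A mass balance of the kind the ARS model solves can be sketched from reaction stoichiometry alone. The per-crew rates below are illustrative placeholders, not NASA design values, and a real ARS balance couples many more components.

```python
def ars_mass_balance(crew_size):
    """Hypothetical steady-state Air Revitalization System balance.
    Per-crew rates (kg/day) are invented for illustration."""
    co2_out = 1.0 * crew_size          # CO2 exhaled by the crew
    o2_need = 0.84 * crew_size         # O2 consumed by the crew
    # Sabatier: CO2 + 4 H2 -> CH4 + 2 H2O (mass ratios 44 : 8 : 16 : 36)
    h2_need = co2_out * 8.0 / 44.0
    water_produced = co2_out * 36.0 / 44.0
    # Electrolysis: 2 H2O -> 2 H2 + O2 (36 kg water -> 4 kg H2 + 32 kg O2)
    water_for_o2 = o2_need * 36.0 / 32.0
    h2_from_electrolysis = o2_need * 4.0 / 32.0
    return {
        "co2_removed": co2_out,
        "h2_needed_by_sabatier": h2_need,
        "water_from_sabatier": water_produced,
        "water_to_electrolysis": water_for_o2,
        "h2_from_electrolysis": h2_from_electrolysis,
    }

balance = ars_mass_balance(4)
```

Solving component throughputs this way, then checking that each stream's supply meets its demand, is the verification role the ARS Mass Balance Model plays for designs produced by the sizing tool.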
Computer modeling of active experiments in space plasmas
International Nuclear Information System (INIS)
Bollens, R.J.
1993-01-01
The understanding of space plasmas is expanding rapidly. This is, in large part, due to the ambitious efforts of scientists from around the world who are performing large scale active experiments in the space plasma surrounding the earth. One such effort was designated the Active Magnetospheric Particle Tracer Explorers (AMPTE) and consisted of a series of plasma releases that were completed during 1984 and 1985. What made the AMPTE experiments particularly interesting was the occurrence of a dramatic anomaly that was completely unpredicted. During the AMPTE experiment, three satellites traced the solar-wind flow into the earth's magnetosphere. One satellite, built by West Germany, released a series of barium and lithium canisters that were detonated and subsequently photo-ionized via solar radiation, thereby creating an artificial comet. Another satellite, built by Great Britain and in the vicinity during detonation, carried, as did the first satellite, a comprehensive set of magnetic field, particle and wave instruments. Upon detonation, what was observed by the satellites, as well as by aircraft and ground-based observers, was quite unexpected. The initial deflection of the ion clouds was not in the ambient solar wind's flow direction (V) but rather in the direction transverse to the solar wind and the background magnetic field (V × B). This result was not predicted by any existing theories or simulation models; it is the main subject discussed in this dissertation. A large three-dimensional computer simulation was produced to demonstrate that this transverse motion can be explained in terms of a rocket effect. Due to the extreme computer resources utilized in producing this work, the computer methods used to complete the calculation and the visualization techniques used to view the results are also discussed.
An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies.
Thompson, Wesley K; Wang, Yunpeng; Schork, Andrew J; Witoelar, Aree; Zuber, Verena; Xu, Shujing; Werge, Thomas; Holland, Dominic; Andreassen, Ole A; Dale, Anders M
2015-12-01
Characterizing the distribution of effects from genome-wide genotyping data is crucial for understanding important aspects of the genetic architecture of complex traits, such as number or proportion of non-null loci, average proportion of phenotypic variance explained per non-null effect, power for discovery, and polygenic risk prediction. To this end, previous work has used effect-size models based on various distributions, including the normal and normal mixture distributions, among others. In this paper we propose a scale mixture of two normals model for effect size distributions of genome-wide association study (GWAS) test statistics. Test statistics corresponding to null associations are modeled as random draws from a normal distribution with zero mean; test statistics corresponding to non-null associations are also modeled as normal with zero mean, but with larger variance. The model is fit via minimizing discrepancies between the parametric mixture model and resampling-based nonparametric estimates of replication effect sizes and variances. We describe in detail the implications of this model for estimation of the non-null proportion, the probability of replication in de novo samples, the local false discovery rate, and power for discovery of a specified proportion of phenotypic variance explained from additive effects of loci surpassing a given significance threshold. We also examine the crucial issue of the impact of linkage disequilibrium (LD) on effect sizes and parameter estimates, both analytically and in simulations. We apply this approach to meta-analysis test statistics from two large GWAS, one for Crohn's disease (CD) and the other for schizophrenia (SZ). A scale mixture of two normals distribution provides an excellent fit to the SZ nonparametric replication effect size estimates. While capturing the general behavior of the data, this mixture model underestimates the tails of the CD effect size distribution. We discuss the implications of
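The scale mixture of two normals is straightforward to fit on simulated test statistics. The sketch below uses plain EM with the null variance fixed at 1, rather than the resampling-based discrepancy minimization used in the paper; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate GWAS-like z-scores: null effects ~ N(0, 1), non-null effects drawn
# from the wider N(0, sigma1^2). The truth values here are illustrative.
pi1_true, sigma1_true, n = 0.10, 3.0, 200_000
nonnull = rng.random(n) < pi1_true
z = np.where(nonnull, rng.normal(0.0, sigma1_true, n), rng.normal(0.0, 1.0, n))

def fit_scale_mixture(z, n_iter=200):
    """EM for f(z) = (1-pi1) N(0,1) + pi1 N(0, s1^2), null variance fixed at 1."""
    pi1, s1sq = 0.5, 4.0
    for _ in range(n_iter):
        d0 = (1.0 - pi1) * np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)
        d1 = pi1 * np.exp(-0.5 * z**2 / s1sq) / np.sqrt(2.0 * np.pi * s1sq)
        r = d1 / (d0 + d1)                  # posterior non-null probability
        pi1 = r.mean()                      # update non-null proportion
        s1sq = (r * z**2).sum() / r.sum()   # update non-null variance
    return pi1, np.sqrt(s1sq)

pi1_hat, s1_hat = fit_scale_mixture(z)
```

The posterior null probability 1 − r(z) from the final E-step is exactly the model's local false discovery rate at a given test statistic, one of the quantities the paper derives from the fitted mixture.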
International Nuclear Information System (INIS)
Cho, Y.; Crosbie, E.A.; Takeda, H.
1981-01-01
The 50-MeV H⁻ injection line for the RCS at Argonne National Laboratory has 16 quadrupole and eight bending magnets. Horizontal and vertical profiles can be obtained at 12 wire scanner positions. Size information from these profiles can be used to determine the three ellipse parameters in each plane required to describe the transverse phase space. Those locations that have dispersion permit the momentum error to be used as a fourth fitting parameter. The assumed accuracy of the size measurements provides an error matrix that predicts the rms errors of the fitted parameters.
Effects of sample size on estimates of population growth rates calculated with matrix models.
Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M
2008-08-28
Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda-Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
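The sampling-variance effect described above is easy to reproduce: estimate the vital rates from finite binomial samples, recompute lambda as the dominant eigenvalue of the projection matrix, and watch the spread shrink with sample size. The 3-stage matrix below is hypothetical, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3-stage plant projection matrix: row 0 holds fecundities,
# the remaining nonzero entries are survival/transition probabilities.
A_true = np.array([
    [0.0, 0.5, 2.0],
    [0.3, 0.4, 0.0],
    [0.0, 0.3, 0.8],
])

def dominant_lambda(A):
    """Population growth rate = dominant eigenvalue of the projection matrix."""
    return np.max(np.real(np.linalg.eigvals(A)))

lam_true = dominant_lambda(A_true)

def estimate_lambda(n_per_stage):
    """Re-estimate each transition probability from n sampled individuals."""
    A = A_true.copy()
    for i in range(1, 3):                  # rows 1-2 hold probabilities
        for j in range(3):
            if A_true[i, j] > 0:
                A[i, j] = rng.binomial(n_per_stage, A_true[i, j]) / n_per_stage
    return dominant_lambda(A)

small = np.array([estimate_lambda(10) for _ in range(2000)])
large = np.array([estimate_lambda(500) for _ in range(2000)])
```

The small-sample estimates of lambda are far more dispersed than the large-sample ones, and it is this sampling variance that Jensen's inequality converts into bias for nonlinear functions of the vital rates.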
DEFF Research Database (Denmark)
Petersen, Nanna; Stocks, S.; Gernaey, Krist
2008-01-01
Fermentations conducted in 550 L pilot scale tanks were characterized with respect to particle size distribution, biomass concentration, and rheological properties. The rheological properties were described using the Herschel-Bulkley model. Estimation of all three parameters in the Herschel-Bulkley model (yield stress, consistency index and flow behavior index) […] in filamentous fermentations. It was therefore chosen to fix this parameter to the average value, thereby decreasing the standard deviation of the estimates of the remaining rheological parameters significantly. Using a PLSR model, a reasonable prediction of apparent viscosity (mu(app)), yield stress (tau(y)), and consistency index (K) could be made from the size distributions, biomass concentration, and process information. This provides a method with high predictive power for the rheology of fermentation broth, with the advantage over previous models that tau(y) and K can be predicted as well as mu(app).
Representative Model of the Learning Process in Virtual Spaces Supported by ICT
Capacho, José
2014-01-01
This paper shows the results of research activities for building a representative model of the learning process in virtual spaces (e-Learning). The formal basis of the model is grounded in the analysis of models of learning assessment in virtual spaces, specifically in Dembo's teaching-learning model, the systemic approach to evaluating…
International Nuclear Information System (INIS)
Ford, I.J.; Clement, C.F.
1990-03-01
The sodium aerosol which forms in the cover gas space of a Fast Reactor couples the processes of heat and mass transfer to and from the bounding surfaces and affects the thermal performance of the cavity. This report describes extensions to previously separate models of heat transfer and aerosol formation and removal in the cover gas space, and the linking of the two calculations in a consistent manner. The extensions made to the theories include thermophoretic aerosol removal, radiative-driven redistribution in aerosol sizes, and the side-wall influence on the bulk cavity temperature. The link between aerosol properties and boundary layer saturations is also examined, especially in the far-from-saturated limit. The models can be used in the interpretation of cover gas space experiments and some example calculations are given. (author)
Tekoglu, Cihan; Onck, Patrick R.
2008-01-01
In view of size effects in cellular solids, we critically compare the analytical results of generalized continuum theories with the computational results of discrete models. Representatives are studied from two classes of generalized continuum theories: the strain divergence theory from the class
Modeling, design and experimental validation of a small-sized magnetic gear
Zanis, R.; Borisavljevic, A.; Jansen, J.W.; Lomonova, E.A.
2013-01-01
A magnetostatic analytical model is created to analyze and design a small-sized magnetic gear for a robotic application. Through a parameter variation study, it is found that the inner rotor magnet height is highly influential on the torque, and the design is performed on this basis. Several
Advanced modeling of the size poly-dispersion of boiling flows
International Nuclear Information System (INIS)
Ruyer, Pierre; Seiler, Nathalie
2008-01-01
This work has been performed within the Institut de Radioprotection et de Sûreté Nucléaire, which leads research programs concerning safety analysis of nuclear power plants. During a LOCA (Loss Of Coolant Accident), in-vessel pressure decreases and temperature increases, leading to the onset of nucleate boiling. The present study focuses on the numerical simulation of the local topology of the boiling flow. There is experimental evidence of a locally and statistically large spectrum of possible bubble sizes. The correct description of this poly-dispersion in size matters because both (i) the main hydrodynamic forces, such as lift, and (ii) the transfer area depend on the individual bubble size. We study the corresponding CFD model in the framework of an ensemble averaged description of the dispersed two-phase flow. The transport equations of the main statistical moment densities of the population size distribution are derived, and models for the mass, momentum and heat transfers at the bubble scale, as well as for bubble coalescence, are developed. This model, introduced within the NEPTUNE-CFD code of the NEPTUNE thermal-hydraulic platform, a joint project of CEA, EDF, IRSN and AREVA, has been tested on boiling flows obtained on the DEBORA facility of the CEA at Grenoble. These numerical simulations provide a validation and attest to the impact of the proposed model. (authors)
Finite size effects in a model for plasticity of amorphous composites
DEFF Research Database (Denmark)
Tyukodi, Botond; Lemarchand, Claire A.; Hansen, Jesper Schmidt
2016-01-01
We discuss the plastic behavior of an amorphous matrix reinforced by hard particles. A mesoscopic depinning-like model accounting for Eshelby elastic interactions is implemented. Only the effect of a plastic disorder is considered. Numerical results show a complex size dependence of the effective...
Modelling Visual Quality of Kalanchoe Blossfeldiana: Influence of Cultivar and Pot Size
Carvalho, S.M.P.; Almeida, J.; Eveleens-Clark, B.A.; Bakker, M.J.; Heuvelink, E.
2008-01-01
An explanatory model for predicting kalanchoe plant height and cropping duration has been developed for one cultivar and one pot size, as described in earlier papers. In two experiments (winter and summer) seven contrasting cultivars ('Anatole', 'Debbie', 'Delia', 'Mie', 'Pandora', 'Tenorio' and
The modified turning bands (MTB) model for space-time rainfall. I. Model definition and properties
Mellor, Dale
1996-02-01
A new stochastic model of space-time rainfall, the Modified Turning Bands (MTB) model, is proposed which reproduces, in particular, the movements and developments of rainbands, cluster potential regions and raincells, as well as their respective interactions. The ensemble correlation structure is unsuitable for practical estimation of the model parameters because the model is not ergodic in this statistic, and hence it cannot easily be measured from a single real storm. Thus, some general theory on the internal covariance structure of a class of stochastic models is presented, of which the MTB model is an example. It is noted that, for the MTB model, the internal covariance structure may be measured from a single storm, and can thus be used for model identification.
Shen, Xiaoteng; Maa, Jerome P.-Y.
2017-11-01
In estuaries and coastal waters, the floc size of cohesive sediments and its statistical distribution are of primary importance, due to their effects on the settling velocity and thus the deposition rates of cohesive aggregates. The development of a robust flocculation model that includes the prediction of floc size distributions (FSDs), however, is still at a research stage. In this study, a one-dimensional longitudinal (1-DL) flocculation model along a streamtube is developed. This model is based on solving the population balance equation to find the FSDs by using the quadrature method of moments. To validate this model, a laboratory experiment is carried out to produce an advection-dominated transport environment in a cylindrical tank. The flow field is generated by a marine pump mounted at the bottom center, with its outlet facing upward. This setup generates an axially symmetric flow which is measured by an acoustic Doppler velocimeter (ADV). The measurement results provide the hydrodynamic input data required for this 1-DL model. The FSDs, in turn, are acquired by using an automatic underwater camera system, and the resulting images are analyzed to validate the predicted FSDs. This study shows that the FSDs as well as their representative sizes can be efficiently and reasonably simulated by this 1-DL model.
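A full QMOM solution of the population balance is involved, but the moment-transport idea it rests on can be shown with the simplest closed case: aggregation with a constant collision kernel, for which the first two moment equations close exactly. The kernel value and initial moments below are illustrative assumptions, not values from the study.

```python
# Moment model for aggregation with a constant collision kernel beta; for this
# kernel the first two moment equations of the population balance close exactly:
#   dM0/dt = -0.5 * beta * M0**2   (number density falls as flocs merge)
#   dM1/dt = 0                     (total floc mass is conserved)
# Kernel value and initial moments are illustrative assumptions.
beta, dt, steps = 1.0e-3, 1.0e-3, 5000
M0, M1 = 1.0e4, 2.0e4            # number and mass concentration (arbitrary units)

mean_sizes = []
for _ in range(steps):
    M0 += dt * (-0.5 * beta * M0**2)   # forward-Euler step for the number density
    mean_sizes.append(M1 / M0)         # mean floc mass grows as aggregation proceeds

# Analytical check for this kernel: M0(t) = M0(0) / (1 + 0.5 * beta * M0(0) * t)
M0_exact = 1.0e4 / (1.0 + 0.5 * beta * 1.0e4 * steps * dt)
```

QMOM generalizes exactly this idea to kernels where the moment equations do not close, by representing the unknown FSD with a few quadrature nodes and weights reconstructed from its moments.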
Statistical shear lag model - unraveling the size effect in hierarchical composites.
Wei, Xiaoding; Filleter, Tobin; Espinosa, Horacio D
2015-05-01
Numerous experimental and computational studies have established that the hierarchical structures encountered in natural materials, such as the brick-and-mortar structure observed in sea shells, are essential for achieving defect tolerance. Due to this hierarchy, the mechanical properties of natural materials have a different size dependence compared to that of typical engineered materials. This study aimed to explore size effects on the strength of bio-inspired staggered hierarchical composites and to define the influence of the geometry of constituents in their outstanding defect tolerance capability. A statistical shear lag model is derived by extending the classical shear lag model to account for the statistics of the constituents' strength. A general solution emerges from rigorous mathematical derivations, unifying the various empirical formulations for the fundamental link length used in previous statistical models. The model shows that the staggered arrangement of constituents grants composites a unique size effect on mechanical strength in contrast to homogenous continuous materials. The model is applied to hierarchical yarns consisting of double-walled carbon nanotube bundles to assess its predictive capabilities for novel synthetic materials. Interestingly, the model predicts that yarn gauge length does not significantly influence the yarn strength, in close agreement with experimental observations.
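For contrast with the staggered architecture, the classical weakest-link size effect for a homogeneous fiber is easy to simulate: with link strengths Weibull-distributed, mean specimen strength scales as n^(-1/m) with the number of serial links n. The Weibull modulus below is an assumed value, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Weakest-link size effect for a homogeneous fiber: a specimen of n links fails
# at its weakest link, so longer gauges are weaker. Weibull modulus m and scale
# s0 are assumed values for illustration.
m, s0 = 5.0, 1.0

def mean_strength(n_links, n_samples=5000):
    """Monte Carlo mean strength of specimens made of n_links serial links."""
    link_strengths = s0 * rng.weibull(m, size=(n_samples, n_links))
    return link_strengths.min(axis=1).mean()

short_gauge = mean_strength(10)
long_gauge = mean_strength(1000)
# Weibull theory predicts the ratio (1000/10) ** (-1/m) = 100 ** (-0.2)
```

It is precisely this gauge-length dependence that the paper's statistical shear lag model predicts the staggered hierarchical yarns largely escape.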
Sectional modeling of nanoparticle size and charge distributions in dusty plasmas
International Nuclear Information System (INIS)
Agarwal, Pulkit; Girshick, Steven L
2012-01-01
Sectional models of the dynamics of aerosol populations are well established in the aerosol literature but have received relatively less attention in numerical models of dusty plasmas, where most modeling studies have assumed the existence of monodisperse dust particles. In the case of plasmas in which nanoparticles nucleate and grow, significant polydispersity can exist in particle size distributions, and stochastic charging can cause particles of given size to have a broad distribution of charge states. Sectional models, while computationally expensive, are well suited to treating such distributions. This paper presents an overview of sectional modeling of nanodusty plasmas, and presents examples of simulation results that reveal important qualitative features of the spatiotemporal evolution of such plasmas, many of which could not be revealed by models that consider only monodisperse dust particles and average particle charge. These features include the emergence of bimodal particle populations consisting of very small neutral particles and larger negatively charged particles, the effects of size and charge distributions on coagulation, spreading and structure of the particle cloud, and the dynamics of dusty plasma afterglows. (paper)
A socio-hydrologic model of coupled water-agriculture dynamics with emphasis on farm size.
Brugger, D. R.; Maneta, M. P.
2015-12-01
Agricultural land cover dynamics in the U.S. are dominated by two trends: 1) total agricultural land is decreasing and 2) average farm size is increasing. These trends have important implications for the future of water resources because 1) growing more food on less land is due in large part to increased groundwater withdrawal and 2) larger farms can better afford both more efficient irrigation and more groundwater access. However, these large-scale trends are due to individual farm operators responding to many factors including climate, economics, and policy. It is therefore difficult to incorporate the trends into watershed-scale hydrologic models. Traditional scenario-based approaches are valuable for many applications, but there is typically no feedback between the hydrologic model and the agricultural dynamics, and so limited insight is gained into how agriculture co-evolves with water resources. We present a socio-hydrologic model that couples simplified hydrologic and agricultural economic dynamics, accounting for many factors that depend on farm size such as irrigation efficiency and returns to scale. We introduce an "economic memory" (EM) state variable that is driven by agricultural revenue and affects whether farms are sold when land market values exceed expected returns from agriculture. The model uses a Generalized Mixture Model of Gaussians to approximate the distribution of farm sizes in a study area, effectively lumping farms into "small," "medium," and "large" groups that have independent parameterizations. We apply the model in a semi-arid watershed in the upper Columbia River Basin, calibrating to data on streamflow, total agricultural land cover, and farm size distribution. The model is used to investigate the sensitivity of the coupled system to various hydrologic and economic scenarios such as increasing market value of land, reduced surface water availability, and increased irrigation efficiency in small farms.
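The mixture-of-Gaussians lumping step can be sketched with a plain 1-D EM fit that groups farms into "small," "medium" and "large" components. The data here are synthetic log-sizes with assumed group parameters, not the study's calibration data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic farm log-sizes for three groups; all parameters are assumptions.
sizes = np.concatenate([
    rng.normal(2.0, 0.3, 600),   # "small" farms
    rng.normal(4.0, 0.4, 300),   # "medium" farms
    rng.normal(6.0, 0.5, 100),   # "large" farms
])

def fit_gmm_1d(x, means, sds, weights, n_iter=200):
    """Plain EM for a 1-D Gaussian mixture (no external fitting library)."""
    for _ in range(n_iter):
        # E-step: responsibility of each component for each observation
        dens = np.array([w * np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
                         for m, s, w in zip(means, sds, weights)])
        resp = dens / dens.sum(axis=0)
        # M-step: re-estimate weights, means and standard deviations
        nk = resp.sum(axis=1)
        weights = nk / len(x)
        means = (resp @ x) / nk
        sds = np.sqrt((resp @ x**2) / nk - means**2)
    return means, sds, weights

means, sds, weights = fit_gmm_1d(sizes, np.array([1.0, 3.0, 7.0]),
                                 np.ones(3), np.ones(3) / 3)
```

Each fitted component can then carry its own parameterization (irrigation efficiency, returns to scale, economic memory), which is the lumping the abstract describes.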
Nature of size effects in compact models of field effect transistors
Energy Technology Data Exchange (ETDEWEB)
Torkhov, N. A., E-mail: trkf@mail.ru [Tomsk State University, Tomsk 634050 (Russian Federation); Scientific-Research Institute of Semiconductor Devices, Tomsk 634050 (Russian Federation); Tomsk State University of Control Systems and Radioelectronics, Tomsk 634050 (Russian Federation); Babak, L. I.; Kokolov, A. A.; Salnikov, A. S.; Dobush, I. M. [Tomsk State University of Control Systems and Radioelectronics, Tomsk 634050 (Russian Federation); Novikov, V. A., E-mail: novikovvadim@mail.ru; Ivonin, I. V. [Tomsk State University, Tomsk 634050 (Russian Federation)
2016-03-07
Investigations have shown that in the local approximation (for sizes L < 100 μm), AlGaN/GaN high electron mobility transistor (HEMT) structures satisfy all the properties of chaotic systems and can be described in the language of fractal geometry of fractional dimensions. For such objects, the values of their electrophysical characteristics depend on the linear sizes of the examined regions, which explains the presence of so-called size effects: dependences of the electrophysical and instrumental characteristics on the linear sizes of the active elements of semiconductor devices. In the present work, a relationship has been established between the linear model parameters of the equivalent circuit elements of internal transistors and the fractal geometry of the heteroepitaxial structure, manifested through a dependence of its relative electrophysical characteristics on the linear sizes of the examined surface areas. For the HEMTs, this implies dependences of their relative static (A/mm, mA/V/mm, Ω/mm, etc.) and microwave characteristics (W/mm) on the width d of the drain-source channel and on the number of sections n, which leads to a nonlinear dependence of the retrieved parameter values of equivalent circuit elements of linear internal transistor models on n and d. Thus, it has been demonstrated that size effects in semiconductors determined by the fractal geometry must be taken into account when investigating the properties of semiconductor objects at scales below the local approximation limit and when designing and manufacturing field effect transistors. In general, the suggested approach allows a complex of problems to be solved in designing, optimizing, and retrieving the parameters of equivalent circuits of linear and nonlinear models not only of field effect transistors but of any semiconductor devices with nonlinear instrumental characteristics.
Liu, Jinxing; El Sayed, Tamer S.
2012-01-01
Micro-voids of varying sizes exist in most metals and alloys. Both experiments and numerical studies have demonstrated the critical influence of initial void sizes on void growth. The classical Gurson-Tvergaard-Needleman model summarizes
Optimizing phonon space in the phonon-coupling model
Tselyaev, V.; Lyutorovich, N.; Speth, J.; Reinhard, P.-G.
2017-08-01
We present a new scheme to select the most relevant phonons in the phonon-coupling model, named here the time-blocking approximation (TBA). The new criterion, based on the phonon-nucleon coupling strengths rather than on B(EL) values, is more selective and thus produces much smaller phonon spaces in the TBA. This is beneficial in two respects: first, it curbs the computational cost, and second, it reduces the danger of double counting in the expansion basis of the TBA. We use here the TBA in a form where the coupling strength is regularized to keep the given Hartree-Fock ground state stable. The scheme is implemented in a random-phase approximation and TBA code based on the Skyrme energy functional. We first explore carefully the cutoff dependence with the new criterion and can work out a natural (optimal) cutoff parameter. Then we use the freshly developed and tested scheme for a survey of giant resonances and low-lying collective states in six doubly magic nuclei, looking also at the dependence of the results when varying the Skyrme parametrization.
A model for emergence of space and time
Energy Technology Data Exchange (ETDEWEB)
Ambjørn, J., E-mail: ambjorn@nbi.dk [The Niels Bohr Institute, Copenhagen University, Blegdamsvej 17, DK-2100 Copenhagen Ø (Denmark); Institute for Mathematics, Astrophysics and Particle Physics (IMAPP), Radboud University Nijmegen, Heyendaalseweg 135, 6525 AJ, Nijmegen (Netherlands); Watabiki, Y., E-mail: watabiki@th.phys.titech.ac.jp [Tokyo Institute of Technology, Dept. of Physics, High Energy Theory Group, 2-12-1 Oh-okayama, Meguro-ku, Tokyo 152-8551 (Japan)
2015-10-07
We study string field theory (third quantization) of the two-dimensional model of quantum geometry called generalized CDT (“causal dynamical triangulations”). Like in standard non-critical string theory the so-called string field Hamiltonian of generalized CDT can be associated with W-algebra generators through the string mode expansion. This allows us to define an “absolute” vacuum. “Physical” vacua appear as coherent states created by vertex operators acting on the absolute vacuum. Each coherent state corresponds to specific values of the coupling constants of generalized CDT. The cosmological “time” only exists relatively to a given “physical” vacuum and comes into existence before space, which is created because the “physical” vacuum is unstable. Thus each CDT “universe” is created as a “Big Bang” from the absolute vacuum, its time evolution is governed by the CDT string field Hamiltonian with given coupling constants, and one can imagine interactions between CDT universes with different coupling constants (“fourth quantization”).
Retail gentrification. Staged spaces and the gourmet market model
Directory of Open Access Journals (Sweden)
Luz de Lourdes Cordero Gómez del Campo
2017-12-01
Full Text Available Retail gentrification is understood as a process in which commercial activity is transformed to meet the needs of a sector of the population with higher incomes, resulting in the displacement of merchants and products; it is examined here through the implementation of the gourmet market model. This process, visible in the interest in copying the commercial formats of successful gourmet markets such as the San Miguel Market in Madrid or the Borough Market in London, is linked to an offer aimed at satisfying the consumption demands of a sector of the population that different authors identify, although the concepts are not equivalent, as cultural omnivores or the creative class, agreeing that these groups have high cultural and economic capital. The present work discusses how, in Mexico City, in the absence of the transformation of public markets into gourmet markets and following the inauguration of the Roma Market in 2014, the staging of commercial spaces labeled as gourmet markets has intensified; these spaces are inserted in neighborhoods where they seek to generate development and links with the community, but because of their prices and the characteristics of the products they offer, they are beyond the reach of the local population.
Sellaoui, Lotfi; Mechi, Nesrine; Lima, Éder Cláudio; Dotto, Guilherme Luiz; Ben Lamine, Abdelmottaleb
2017-10-01
Based on statistical physics elements, the equilibrium adsorption of diclofenac (DFC) and nimesulide (NM) on activated carbon was analyzed by a multilayer model with saturation. The paper aimed to describe the adsorption process experimentally and theoretically and to study the effect of adsorbate size using the model parameters. From numerical simulation, the number of molecules per site showed that the adsorbate molecules (DFC and NM) were mostly anchored on both sides of the pore walls. The increase in receptor site density suggested that additional sites appeared during the process to participate in DFC and NM adsorption. The description of the adsorption energy behavior indicated that the process was physisorption. Finally, through a correlation of the model parameters, the size effect of the adsorbate was deduced, indicating that molecule dimension has a negligible effect on DFC and NM adsorption.
A prediction model for the effective thermal conductivity of mono-sized pebble beds
Energy Technology Data Exchange (ETDEWEB)
Wang, Xiaoliang; Zheng, Jie; Chen, Hongli, E-mail: hlchen1@ustc.edu.cn
2016-02-15
Highlights: • A new method to couple the contact area with bed strain is developed. • The constant coefficient correlating the effect of gas flow is determined. • The model is valid for various cases, and its advantages are clearly demonstrated. - Abstract: A model is presented to predict the effective thermal conductivity of a porous medium packed with mono-sized spherical pebbles; it is valid when the pebble size is far smaller than the characteristic length of the porous medium, as in fusion pebble beds. In this model, the influences of parameters such as the properties of the pebble and gas materials, bed porosity, pebble size, gas flow, contact area, thermal radiation, contact resistance, etc. are all taken into account, and a method to couple the contact areas with bed strains is also developed and implemented. Compared with available theoretical models, CFD numerical simulations and experimental data, the model successfully forecasts the bed effective thermal conductivity in various cases, and its advantages are clearly shown. In particular, convection in pebble beds is examined, and a constant coefficient C correlating the effect of gas flow is determined for the fully developed region of the bed by numerical simulation; this coefficient is close to some experimental data.
International Nuclear Information System (INIS)
Olcan, Ceyda
2015-01-01
Highlights: • An analytical optimal sizing model is proposed for PV water pumping systems. • The objectives are chosen as deficiency of power supply and life-cycle costs. • The crop water requirements are estimated for a citrus tree yard in Antalya. • The optimal tilt angles are calculated for fixed, seasonal and monthly changes. • The sizing results showed the validity of the proposed analytical model. - Abstract: Stand-alone photovoltaic (PV) water pumping systems effectively use solar energy for irrigation purposes in remote areas. However, the random variability and unpredictability of solar energy hinder the penetration of PV implementations and complicate system design, so an optimal sizing of these systems proves essential. This paper proposes a techno-economic optimization model to optimally determine the capacity of the components of a PV water pumping system using a water storage tank. The proposed model is developed with respect to reliability and cost indicators, namely the deficiency of power supply probability and life-cycle costs, respectively. The novelty is that the proposed optimization model is analytically defined for the two objectives and is able to find a compromise solution. The sizing of a stand-alone PV water pumping system comprises a detailed analysis of crop water requirements and optimal tilt angles. Besides long solar radiation and temperature time series, accurate forecasts of water supply needs have to be determined. Calculating the optimal tilt angle for yearly, seasonal and monthly frequencies results in higher system efficiency; it is therefore suggested to change the tilt angle regularly in order to maximize solar energy output. The proposed optimal sizing model incorporates all these improvements and can accomplish a comprehensive optimization of PV water pumping systems. A case study is conducted considering the irrigation of a citrus tree yard located in Antalya, Turkey.
A cosmological model with compact space sections and low mass density
International Nuclear Information System (INIS)
Fagundes, H.V.
1982-01-01
A general relativistic cosmological model is presented, which has closed space sections and mass density below a critical density similar to that of Friedmann's models. The model may predict double images of cosmic sources. (Author) [pt
Detailed measurements and modelling of thermo active components using a room size test facility
DEFF Research Database (Denmark)
Weitzmann, Peter; Svendsen, Svend
2005-01-01
measurements in an office sized test facility with thermo active ceiling and floor as well as modelling of similar conditions in a computer program designed for analysis of building integrated heating and cooling systems. A method for characterizing the cooling capacity of thermo active components is described...... typically within 1-2K of the measured results. The simulation model, whose room model splits up the radiative and convective heat transfer between room and surfaces, can also be used to predict the dynamical conditions, where especially the temperature rise during the day is important for designing...
Comprehensive Laser-induced Incandescence (LII) modeling for soot particle sizing
Lisanti, Joel
2015-03-30
To evaluate the current state of the art in LII particle sizing, a comprehensive model for predicting the temporal incandescent response of combustion-generated soot to absorption of a pulsed laser is presented. The model incorporates particle heating through laser absorption, thermal annealing, and oxidation at the surface, as well as cooling through sublimation and photodesorption, radiation, conduction and thermionic emission. Thermodynamic properties and the thermal accommodation coefficient utilized in the model are temperature dependent. In addition, where appropriate, properties are also phase dependent, thereby accounting for annealing effects during laser heating and particle cooling.
DEFF Research Database (Denmark)
Mantzouni, Irene; Sørensen, Helle; O'Hara, Robert B.
2010-01-01
and Beverton and Holt stock–recruitment (SR) models were extended by applying hierarchical methods, mixed-effects models, and Bayesian inference to incorporate the influence of these ecosystem factors on model parameters representing cod maximum reproductive rate and carrying capacity. We identified......Understanding how temperature affects cod (Gadus morhua) ecology is important for forecasting how populations will develop as climate changes in future. The effects of spawning-season temperature and habitat size on cod recruitment dynamics have been investigated across the North Atlantic. Ricker...
Megan Friggens; Carol Raish; Deborah Finch; Alice McSweeney
2015-01-01
The Southwest has experienced dramatic population increases over the last 30 years, a trend that is expected to continue. Open space conservation is important both for preserving ecosystem services and for maintaining quality of life for urban populations. Federal agencies manage a large proportion of the public land in the Southwestern U.S. We...
Desvillettes, Laurent
2010-01-01
We study a continuous coagulation-fragmentation model with constant kernels for reacting polymers (see [M. Aizenman and T. Bak, Comm. Math. Phys., 65 (1979), pp. 203-230]). The polymers are set to diffuse within a smooth bounded one-dimensional domain with no-flux boundary conditions. In particular, we consider size-dependent diffusion coefficients, which may degenerate for small and large cluster-sizes. We prove that the entropy-entropy dissipation method applies directly in this inhomogeneous setting. We first show the necessary basic a priori estimates in dimension one, and second we show faster-than-polynomial convergence toward global equilibria for diffusion coefficients which vanish not faster than linearly for large sizes. This extends the previous results of [J.A. Carrillo, L. Desvillettes, and K. Fellner, Comm. Math. Phys., 278 (2008), pp. 433-451], which assumes that the diffusion coefficients are bounded below. © 2009 Society for Industrial and Applied Mathematics.
Size effects and strain localization in atomic-scale cleavage modeling
International Nuclear Information System (INIS)
Elsner, B A M; Müller, S
2015-01-01
In this work, we study the adhesion and decohesion of Cu(1 0 0) surfaces using density functional theory (DFT) calculations. An upper stress to surface decohesion is obtained via the universal binding energy relation (UBER), but the model is limited to rigid separation of bulk-terminated surfaces. When structural relaxations are included, an unphysical size effect arises if decohesion is considered to occur as soon as the strain energy equals the energy of the newly formed surfaces. We employ the nudged elastic band (NEB) method to show that this size effect is opposed by a size-dependency of the energy barriers involved in the transition. Further, we find that the transition occurs via a localization of bond strain in the vicinity of the cleavage plane, which resembles the strain localization at the tip of a sharp crack that is predicted by linear elastic fracture mechanics. (paper)
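For reference, the UBER invoked above has a simple closed form; in generic notation (E_0 is the cohesion energy scale and l the screening length, standard scaling parameters rather than values from this paper):

```latex
% Universal binding energy relation (generic form; symbols assumed):
% E_0 = cohesion energy scale, l = screening length, a = rigid separation
E(a) = -E_0\left(1 + \frac{a}{l}\right)e^{-a/l},
\qquad
\sigma(a) = \frac{dE}{da} = \frac{E_0\,a}{l^2}\,e^{-a/l},
\qquad
\sigma_{\max} = \sigma(l) = \frac{E_0}{e\,l}.
```

The maximum of σ(a), reached at a = l, is the kind of upper decohesion stress the abstract refers to for rigid separation of bulk-terminated surfaces.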
Diffusion with space memory modelled with distributed order space fractional differential equations
Directory of Open Access Journals (Sweden)
M. Caputo
2003-06-01
Full Text Available Distributed order fractional differential equations (Caputo, 1995, 2001; Bagley and Torvik, 2000a,b) were first used in the time domain; they are here considered in the space domain and introduced in the constitutive equation of diffusion. The solutions of the classic problems are obtained with closed-form formulae. In general, the Green functions act as low-pass filters in the frequency domain. The major difference with the case when a single space fractional derivative is present in the constitutive equations of diffusion (Caputo and Plastino, 2002) is that the solutions found here are potentially more flexible to represent more complex media (Caputo, 2001a). The difference between the space memory medium and that with the time memory is that the former is more flexible to represent local phenomena while the latter is more flexible to represent variations in space. Concerning the boundary value problem, the difference with the solution of the classic diffusion medium, in the case when a constant boundary pressure is assigned and the pressure in the medium is initially nil, is that one also needs to assign the first-order space derivative at the boundary.
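For readers unfamiliar with the construction, a distributed-order space-fractional diffusion equation has the generic form below (notation assumed here, not quoted verbatim from the paper):

```latex
% Distributed-order space-fractional diffusion (generic form):
% b(alpha) >= 0 weights the fractional orders of differentiation
\frac{\partial p}{\partial t}(x,t)
  = \int_{1}^{2} b(\alpha)\,
    \frac{\partial^{\alpha} p}{\partial |x|^{\alpha}}(x,t)\, d\alpha .
```

Choosing b(α) = δ(α − α₀) recovers the single-order space-fractional case discussed in Caputo and Plastino (2002), which the abstract contrasts with the distributed-order setting.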
Modelling the effects of penetrance and family size on rates of sporadic and familial disease.
Al-Chalabi, Ammar; Lewis, Cathryn M
2011-01-01
Many complex diseases show a diversity of inheritance patterns ranging from familial disease, manifesting with autosomal dominant inheritance, through to simplex families in which only one person is affected, manifesting as apparently sporadic disease. The role of ascertainment bias in generating apparent patterns of inheritance is often overlooked. We therefore explored the role of two key parameters that influence ascertainment, penetrance and family size, in rates of observed familiality. We develop a mathematical model of familiality of disease, with parameters for penetrance, mutation frequency and family size, and test this in a complex disease: amyotrophic lateral sclerosis. Monogenic, high-penetrance variants can explain patterns of inheritance in complex diseases and account for a large proportion of those with no apparent family history. With current demographic trends, rates of familiality will drop further. For example, a variant with penetrance 0.5 will cause apparently sporadic disease in 12% of families of size 10, but 80% of families of size 1. A variant with penetrance 0.9 has only an 11% chance of appearing sporadic in families of a size similar to those of Ireland in the past, compared with 57% in one-child families like many in China. These findings have implications for genetic counselling, disease classification and the design of gene-hunting studies. The distinction between familial and apparently sporadic disease should be considered artificial. Copyright © 2011 S. Karger AG, Basel.
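The interplay of penetrance and family size can be illustrated with a deliberately simplified calculation: the probability that an affected family shows exactly one case, assuming every member carries the variant and expresses it independently with the given penetrance. This toy model omits the mutation frequency and transmission structure of the paper's model, so it will not reproduce the quoted figures; it only shows the qualitative trend that small families make monogenic disease look sporadic.

```python
def prob_sporadic(penetrance, family_size):
    """Probability that disease in an affected family appears
    'sporadic' (exactly one case), under a toy model in which every
    member carries the variant and expresses it independently with
    the given penetrance. Mutation frequency and transmission,
    which the paper's model includes, are deliberately omitted."""
    p, n = penetrance, family_size
    p_none = (1 - p) ** n
    p_exactly_one = n * p * (1 - p) ** (n - 1)
    return p_exactly_one / (1 - p_none)  # condition on >= 1 case

# A one-person family can only ever look sporadic:
assert prob_sporadic(0.5, 1) == 1.0
# Larger families make a monogenic variant look familial instead:
assert prob_sporadic(0.5, 10) < prob_sporadic(0.5, 2)
```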
On the structure of the space of geometric product-form models
Bayer, Nimrod; Boucherie, Richardus J.
2002-01-01
This article deals with Markovian models that are defined on a finite-dimensional discrete state space and possess a stationary state distribution of product form. We view the space of such models as a mathematical object and explore its structure. We focus on models on an orthant Z_+^n, which are
Directory of Open Access Journals (Sweden)
Tae-Yub Kwon
2014-01-01
Full Text Available Dental modeling resins have been developed for use in areas where highly precise resin structures are needed. The manufacturers claim that these polymethyl methacrylate/methyl methacrylate (PMMA/MMA) resins show little or no shrinkage after polymerization. This study examined the polymerization shrinkage of five dental modeling resins as well as one temporary PMMA/MMA resin (control). The morphology and the particle size of the prepolymerized PMMA powders were investigated by scanning electron microscopy and laser diffraction particle size analysis, respectively. Linear polymerization shrinkage strains of the resins were monitored for 20 minutes using a custom-made linometer, and the final values (at 20 minutes) were converted into volumetric shrinkages. The final volumetric shrinkage values for the modeling resins were statistically similar (P > 0.05) or significantly larger (P < 0.05) than that of the control resin and were related to the polymerization kinetics (P < 0.05) rather than the PMMA bead size (P = 0.335). Therefore, optimal control of the polymerization kinetics seems to be more important for producing high-precision resin structures than the use of dental modeling resins.
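The conversion from measured linear shrinkage strain to volumetric shrinkage mentioned above follows directly from the usual isotropy assumption (a standard relation, not a detail taken from this paper):

```python
def volumetric_shrinkage(linear_strain):
    """Convert a linear polymerization shrinkage strain into a
    volumetric shrinkage, assuming the material shrinks isotropically:
    dV/V = 1 - (1 - eps_L)^3."""
    return 1.0 - (1.0 - linear_strain) ** 3

# For small strains the volumetric value is roughly 3x the linear one:
s = volumetric_shrinkage(0.01)
assert abs(s - 0.029701) < 1e-6
```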
Modeling the light-travel-time effect on the far-infrared size of IRC +10216
Wright, Edward L.; Baganoff, Frederick K.
1995-01-01
Models of the far-infrared emission from the large circumstellar dust envelope surrounding the carbon star IRC +10216 are used to assess the importance of the light-travel-time effect (LTTE) on the observed size of the source. The central star is a long-period variable with an average period of 644 +/- 17 days and a peak-to-peak amplitude of two magnitudes, so a large light-travel-time effect is seen at 1 arcmin radius. An attempt is made to use the LTTE to reconcile the discrepancy between the observations of Fazio et al. and Lester et al. regarding the far-infrared source size. This discrepancy is reviewed in light of recent, high-spatial-resolution observations at 11 microns by Danchi et al. We conclude that IRC +10216 has been resolved on the arcminute scale by Fazio et al. Convolution of the model intensity profile at 61 microns with the 60 sec x 90 sec Gaussian beam of Fazio et al. yields an observed source size full width at half maximum (FWHM) that ranges from approximately 67 sec to 75 sec, depending on the phase of the star and the assumed distance to the source. Using a simple r^(-2) dust distribution and the 106 deg phase of the Fazio et al. observations, the LTTE model reaches a peak size of 74.3 sec at a distance of 300 pc. This agrees favorably with the 78 sec x 6 sec size measured by Fazio et al. Finally, a method is outlined for using the LTTE as a distance indicator to IRC +10216 and other stars with extended mass outflows.
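A back-of-envelope calculation shows why the LTTE matters at these scales. Using only standard constants and the 300 pc distance quoted for the Fazio et al. fit (the function name and unit choices are ours):

```python
from math import pi

PC = 3.0857e16          # metres per parsec
C = 2.9979e8            # speed of light, m/s
DAY = 86400.0           # seconds per day

def light_crossing_days(distance_pc, radius_arcmin):
    """Light-travel time, in days, across a shell of the given
    angular radius seen at the given distance."""
    theta = radius_arcmin * pi / (180 * 60)      # arcmin -> radians
    radius_m = distance_pc * PC * theta
    return radius_m / C / DAY

# At 300 pc, light takes ~100 days to cross a 1 arcmin radius -- a
# sizeable fraction of the star's 644-day period, so different parts
# of the envelope are seen at visibly different pulsation phases.
t = light_crossing_days(300, 1.0)
assert 95 < t < 115
```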
Characteristic size and mass of galaxies in the Bose–Einstein condensate dark matter model
Directory of Open Access Journals (Sweden)
Jae-Weon Lee
2016-05-01
Full Text Available We study the characteristic length scale of galactic halos in the Bose–Einstein condensate (or scalar field) dark matter model. Considering the evolution of the density perturbation we show that the average background matter density determines the quantum Jeans mass and hence the spatial size of galaxies at a given epoch. In this model the minimum size of galaxies increases while the minimum mass of the galaxies decreases as the universe expands. The observed values of the mass and the size of the dwarf galaxies are successfully reproduced with the dark matter particle mass m ≃ 5×10−22 eV. The minimum size is about 6×10−3 √(m/H) λc and the typical rotation velocity of the dwarf galaxies is O(√(H/m)) c, where H is the Hubble parameter and λc is the Compton wavelength of the particle. We also suggest that ultra compact dwarf galaxies are the remnants of the dwarf galaxies formed in the early universe.
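A quick sanity check on the quoted particle mass (plain arithmetic with standard constants, not the paper's derivation): the reduced Compton wavelength for m ≃ 5×10⁻²² eV is of order 10⁻² pc, so a very large multiplicative factor involving m/H is what brings the predicted sizes up to galactic (sub-kpc) scales.

```python
HBAR_C_EV_NM = 197.327   # hbar*c in eV * nm
PC_IN_M = 3.0857e16      # metres per parsec

def compton_wavelength_pc(mass_ev):
    """Reduced Compton wavelength lambda_c = hbar / (m c), in parsecs,
    for a particle of rest energy mass_ev (in eV)."""
    lam_nm = HBAR_C_EV_NM / mass_ev
    return lam_nm * 1e-9 / PC_IN_M

# For m ~ 5e-22 eV, lambda_c ~ 0.013 pc:
lam = compton_wavelength_pc(5e-22)
assert 0.01 < lam < 0.02
```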
Modeling and sizing a Storage System coupled with intermittent renewable power generation
International Nuclear Information System (INIS)
Bridier, Laurent
2016-01-01
This thesis presents the optimal management and sizing of an Energy Storage System (ESS) paired with Intermittent Renewable Energy Sources (IReN). Firstly, we developed a technical-economic model of the system, associated with three typical scenarios of utility grid power supply: hourly smoothing based on a one-day-ahead forecast (S1), guaranteed power supply (S2), and combined scenarios (S3). This model takes the form of a large-scale non-linear optimization program. Secondly, four heuristic strategies are assessed and lead to an optimized management of the power output with storage according to reliability, productivity, efficiency and profitability criteria. This optimized ESS management is called 'Adaptive Storage Operation' (ASO). When compared to a mixed integer linear program (MILP), this operation, which is practicable under operational conditions, rapidly gives near-optimal results. Finally, we use the ASO for optimal ESS sizing for each renewable energy source: wind, wave and solar (PV). We determine the minimal sizing that complies with each scenario by inferring the failure rate, the viable feed-in tariff of the energy, and the corresponding compliant, lost or missing energies. We also perform a sensitivity analysis which highlights the importance of the ESS efficiency and of the forecasting accuracy, and the strong influence of the hybridization of renewables on technical-economic ESS sizing. (author) [fr
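The kind of hour-by-hour evaluation underlying such sizing studies can be sketched with a toy greedy dispatch rule. This is an illustration only: the thesis's ASO strategy and MILP benchmark are far more elaborate, and every name and number below is made up.

```python
def dispatch(generation, commitment, capacity, soc0=0.0, eff=0.9):
    """Greedy hour-by-hour dispatch of a storage system backing an
    intermittent source against a committed output profile.
    Surplus generation charges the store (with round-trip-style
    efficiency loss on charging); deficits are served from the store
    when possible. Returns the final state of charge and the number
    of hours in which the commitment could not be met."""
    soc, failures = soc0, 0
    for gen, load in zip(generation, commitment):
        surplus = gen - load
        if surplus >= 0:
            soc = min(capacity, soc + eff * surplus)   # charge
        elif soc >= -surplus:
            soc += surplus                              # discharge
        else:
            failures += 1                               # unmet hour
            soc = 0.0
    return soc, failures

# Adequate storage rides through the intermittency:
soc, failures = dispatch([5, 0, 5, 0], [2, 2, 2, 2], capacity=10)
assert failures == 0
```

Sweeping `capacity` over candidate values and recording the failure rate for each is, in spirit, how a minimal compliant sizing is located.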
Modelling size and structure of nanoparticles formed from drying of submicron solution aerosols
International Nuclear Information System (INIS)
Bandyopadhyay, Arpan A.; Pawar, Amol A.; Venkataraman, Chandra; Mehra, Anurag
2015-01-01
Drying of submicron solution aerosols, under controlled conditions, has been explored to prepare nanoparticles for drug delivery applications. A computational model of solution drop evaporation is developed to study the evolution of solute gradients inside the drop and predict the size and shell thickness of precipitating nanoparticles. The model considers evaporation as a two-stage process involving droplet shrinkage and shell growth. It was corroborated that droplet evaporation rate controls the solute distribution within a droplet and the resulting particle structure (solid or shell type). At higher gas temperatures, rapid build-up of solute near drop surface from high evaporation rates results in early attainment of critical supersaturation solubility and a steeper solute gradient, which favours formation of larger, shell-type particles. At lower gas temperatures, formation of smaller, solid nanoparticles is indicated. The computed size and shell thickness are in good agreement with experimentally prepared lipid nanoparticles. This study indicates that solid or shell structure of precipitated nanoparticles is strongly affected by evaporation rate, while initial solute concentration in the precursor solution and atomized droplet size affect shell thickness. For the gas temperatures considered, evaporative cooling leads to droplet temperature below the melting point of the lipid solute. Thus, we conclude that control over nanoparticle size and structure, of thermolabile precursor materials suitable for drug delivery, can be achieved by controlling evaporation rates, through selection of aerosol processing conditions
Modelling size and structure of nanoparticles formed from drying of submicron solution aerosols
Energy Technology Data Exchange (ETDEWEB)
Bandyopadhyay, Arpan A.; Pawar, Amol A.; Venkataraman, Chandra; Mehra, Anurag, E-mail: mehra@iitb.ac.in [Indian Institute of Technology Bombay, Department of Chemical Engineering (India)
2015-01-15
Drying of submicron solution aerosols, under controlled conditions, has been explored to prepare nanoparticles for drug delivery applications. A computational model of solution drop evaporation is developed to study the evolution of solute gradients inside the drop and predict the size and shell thickness of precipitating nanoparticles. The model considers evaporation as a two-stage process involving droplet shrinkage and shell growth. It was corroborated that droplet evaporation rate controls the solute distribution within a droplet and the resulting particle structure (solid or shell type). At higher gas temperatures, rapid build-up of solute near drop surface from high evaporation rates results in early attainment of critical supersaturation solubility and a steeper solute gradient, which favours formation of larger, shell-type particles. At lower gas temperatures, formation of smaller, solid nanoparticles is indicated. The computed size and shell thickness are in good agreement with experimentally prepared lipid nanoparticles. This study indicates that solid or shell structure of precipitated nanoparticles is strongly affected by evaporation rate, while initial solute concentration in the precursor solution and atomized droplet size affect shell thickness. For the gas temperatures considered, evaporative cooling leads to droplet temperature below the melting point of the lipid solute. Thus, we conclude that control over nanoparticle size and structure, of thermolabile precursor materials suitable for drug delivery, can be achieved by controlling evaporation rates, through selection of aerosol processing conditions.
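The first (shrinkage) stage of the two-stage evaporation process described above is often approximated by the classical d²-law; the sketch below uses that textbook simplification, not the authors' full model, to estimate the time for a droplet to shrink to a given diameter (the rate constant K and all numbers are illustrative):

```python
def time_to_diameter(d0, d_shell, K):
    """Time for a droplet to shrink from diameter d0 to d_shell under
    the classical d-squared evaporation law d^2(t) = d0^2 - K*t.
    Units: diameters in um, K in um^2/s. Illustrates the shrinkage
    stage only; shell growth requires the full two-stage model."""
    if d_shell > d0:
        raise ValueError("d_shell must not exceed d0")
    return (d0 ** 2 - d_shell ** 2) / K

# A 0.8 um droplet shrinking to 0.4 um with K = 0.1 um^2/s:
t = time_to_diameter(0.8, 0.4, 0.1)
assert abs(t - 4.8) < 1e-9
```

A larger K (e.g., from a higher gas temperature) shortens this time, which is the qualitative link to the abstract's finding that high evaporation rates drive early shell formation.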
Zhang, Bo; Liu, Wei; Zhang, Zhiwei; Qu, Yanping; Chen, Zhen; Albert, Paul S
2017-08-01
Joint modeling and within-cluster resampling are two approaches that are used for analyzing correlated data with informative cluster sizes. Motivated by a developmental toxicity study, we examined the performances and validity of these two approaches in testing covariate effects in generalized linear mixed-effects models. We show that the joint modeling approach is robust to the misspecification of cluster size models in terms of Type I and Type II errors when the corresponding covariates are not included in the random effects structure; otherwise, statistical tests may be affected. We also evaluate the performance of the within-cluster resampling procedure and thoroughly investigate the validity of it in modeling correlated data with informative cluster sizes. We show that within-cluster resampling is a valid alternative to joint modeling for cluster-specific covariates, but it is invalid for time-dependent covariates. The two methods are applied to a developmental toxicity study that investigated the effect of exposure to diethylene glycol dimethyl ether.
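The within-cluster resampling procedure evaluated above can be sketched in a few lines: repeatedly draw one observation per cluster (which removes the informativeness of cluster size), fit the working model to each resulting independent sample, and average the estimates. The helper names and the toy mean estimator below are illustrative choices, not code from the paper.

```python
import random

def within_cluster_resampling(clusters, fit, n_resamples=200, seed=0):
    """Within-cluster resampling sketch: draw one observation per
    cluster, fit the working model to the resulting independent
    sample, and average the estimates over many resamples.
    `clusters` is a list of lists of observations; `fit` maps a list
    of observations to a scalar estimate."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_resamples):
        sample = [rng.choice(cluster) for cluster in clusters]
        estimates.append(fit(sample))
    return sum(estimates) / len(estimates)

# Toy check: estimating a mean outcome, one draw per cluster
clusters = [[1.0, 3.0], [2.0], [4.0, 4.0]]
est = within_cluster_resampling(clusters, lambda s: sum(s) / len(s))
assert 2.3 <= est <= 3.0
```

Note the large cluster contributes no more than the singleton cluster per resample, which is the point of the method for informative cluster sizes.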
Preliminary results from a four-working space, double-acting piston, Stirling engine controls model
Daniele, C. J.; Lorenzo, C. F.
1980-01-01
A four-working-space, double-acting piston, Stirling engine simulation is being developed for controls studies. The development method is to construct two simulations, one for detailed fluid behavior and a second model with simple fluid behavior but containing the four-working-space aspects and engine inertias, validate these models separately, and then upgrade the four-working-space model by incorporating the detailed fluid behavior model for all four working spaces. The single working space (SWS) model contains the detailed fluid dynamics. It has seven control volumes in which continuity, energy, and pressure loss effects are simulated. Comparison of the SWS model with experimental data shows reasonable agreement in net power versus speed characteristics for various mean pressure levels in the working space. The four working space (FWS) model was built to observe the behavior of the whole engine. The drive dynamics and vehicle inertia effects are simulated. To reduce calculation time, only three volumes are used in each working space and the gas temperatures are fixed (no energy equation). Comparison of the FWS model's predicted power with experimental data shows reasonable agreement. Since all four working spaces are simulated, the unique capabilities of the model are exercised to look at working fluid supply transients, short-circuit transients, and piston ring leakage effects.
Equivalent circuit modeling of space charge dominated magnetically insulated transmission lines
Energy Technology Data Exchange (ETDEWEB)
Hiraoka, Kazuki; Nakajima, Mitsuo; Horioka, Kazuhiko
1997-12-31
A new equivalent circuit model for space charge dominated MITLs (Magnetically Insulated Transmission Lines) was developed. MITLs under high power operation are dominated by the space charge current flowing between anode and cathode. The conventional equivalent circuit model does not account for space charge effects on power flow. The model was modified to describe power transport through high power MITLs. With this model, it is possible to estimate the effects of space charge current on the power flow efficiency without using complicated particle code simulations. (author). 3 figs., 3 refs.
Widowski, T. M; Caston, L. J; Hunniford, M. E; Cooley, L; Torrey, S
2017-01-01
Abstract There are few published data on the effects of housing laying hens at different densities in large furnished cages (FC; a.k.a. enriched colony cages). The objective of this study was to determine the effects of housing laying hens at 2 space allowances (SA) in 2 sizes of FC on measures of production and well-being. At 18 wk of age, 1,218 LSL-Lite hens were housed in cages furnished with a curtained nesting area, perches, and scratch mat, and stocked at either 520 cm2 (Low) or 748 cm2 (High) total floor space. This resulted in 4 group sizes: 40 vs. 28 birds in smaller FC (SFC) and 80 vs. 55 in larger FC (LFC). Data were collected from 20 to 72 wks of age. There was no effect of cage size (P = 0.21) or SA (P = 0.37) on hen day egg production, egg weight (P[size] = 0.90; P[SA] = 0.73), or eggshell deformation (P[size] = 0.14; P[SA] = 0.053), but feed disappearance was higher in SFC than LFC (P = 0.005). Mortality to 72 wk was not affected by cage size (P = 0.78) or SA (P = 0.55). BW (P = 0.006) and BW CV (P = 0.008) increased with age but were not affected by treatment. Feather cleanliness was poorer in FC with low SA vs. high (P […]); hens housed at the lower space allowance may be compromised according to some welfare assessment criteria. PMID:29050408
A variational constitutive model for the distribution and interactions of multi-sized voids
Liu, Jinxing
2013-07-29
The evolution of defects or voids, generally recognized as the basic failure mechanism in most metals and alloys, has been intensively studied. Most investigations have been limited to spatially periodic cases with non-random distributions of the radii of the voids. In this study, we use a new form of the incompressibility of the matrix to propose a formula for the volumetric plastic energy of a void inside a porous medium. As a consequence, we are able to account for the weakening effect of the surrounding voids and to propose a general model for the distribution and interactions of multi-sized voids. We found that the single parameter in classical Gurson-type models, namely the void volume fraction, is not sufficient for the model. The relative growth rates of voids of different sizes, which can in principle be obtained through physical or numerical experiments, are required. To demonstrate the feasibility of the model, we analyze two cases. The first case represents exactly the same assumption hidden in the classical Gurson's model, while the second embodies the competitive mechanism due to void size differences, albeit in a much simpler manner than the general case. Coalescence is implemented by allowing accelerated void growth after an empirical critical porosity, in the same way as the Gurson-Tvergaard-Needleman model. The constitutive model presented here is validated through good agreement with experimental data. Its capacity for reproducing realistic failure patterns is shown by simulating a tensile test on a notched round bar. © 2013 The Author(s).
Explicit all-atom modeling of realistically sized ligand-capped nanocrystals
Kaushik, Ananth P.
2012-01-01
We present a study of an explicit all-atom representation of nanocrystals of experimentally relevant sizes (up to 6 nm), capped with alkyl chain ligands, in vacuum. We employ all-atom molecular dynamics simulation methods in concert with a well-tested intermolecular potential model, MM3 (molecular mechanics 3), for the studies presented here. These studies include determining the preferred conformation of an isolated single nanocrystal (NC), pairs of isolated NCs, and (presaging studies of superlattice arrays) unit cells of NC superlattices. We observe that very small NCs (3 nm) behave differently in a superlattice as compared to larger NCs (6 nm and above) due to the conformations adopted by the capping ligands on the NC surface. Short ligands adopt a uniform distribution of orientational preferences, including some that lie against the face of the nanocrystal. In contrast, longer ligands prefer to interdigitate. We also study the effect of changing ligand length and ligand coverage on the NCs on the preferred ligand configurations. Since explicit all-atom modeling constrains the maximum system size that can be studied, we discuss issues related to coarse-graining the representation of the ligands, including a comparison of two commonly used coarse-grained models. We find that care has to be exercised in the choice of coarse-grained model. The data provided by these realistically sized ligand-capped NCs, determined using explicit all-atom models, should serve as a reference standard for future models of coarse-graining ligands using united atom models, especially for self-assembly processes. © 2012 American Institute of Physics.
Vaeth, Michael; Skovlund, Eva
2004-06-15
For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
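The rule stated in the abstract — two equally sized groups whose parameters differ by the slope times twice the standard deviation of the covariate — can be turned into a quick power approximation for logistic regression. The sketch below places the two groups at log-odds eta ± beta·sd around the overall logit and applies a normal-approximation two-proportion test; the centering and the hard-coded 1.96 critical value are simplifying assumptions of this sketch, not the paper's exact procedure:

```python
from math import exp, log, sqrt, erf

def logistic(x):
    return 1.0 / (1.0 + exp(-x))

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def approx_power(beta, sd_x, p_overall, n_total, z_alpha=1.96):
    """Approximate power for testing slope beta in logistic regression via
    an equivalent two-sample problem: two equal groups whose log-odds
    differ by 2*beta*sd_x, placed symmetrically about the overall logit."""
    eta = log(p_overall / (1.0 - p_overall))
    p1 = logistic(eta - beta * sd_x)
    p2 = logistic(eta + beta * sd_x)
    n = n_total / 2.0                              # per-group size
    se = sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
    z = abs(p2 - p1) / se
    return phi(z - z_alpha)

# e.g. slope 0.5 per SD of x, overall response probability 0.3
print(approx_power(beta=0.5, sd_x=1.0, p_overall=0.3, n_total=100))
```

As the abstract notes, the appeal of this reduction is that the familiar two-sample formulas then cover the regression setting with no new machinery.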
A Markov model for the temporal dynamics of balanced random networks of finite size
Lagzi, Fereshteh; Rotter, Stefan
2014-01-01
The balanced state of recurrent networks of excitatory and inhibitory spiking neurons is characterized by fluctuations of population activity about an attractive fixed point. Numerical simulations show that these dynamics are essentially nonlinear, and that the intrinsic noise (self-generated fluctuations) in networks of finite size is state-dependent. Stochastic differential equations with additive noise of fixed amplitude therefore cannot provide an adequate description of the stochastic dynamics; the noise model should instead result from a self-consistent description of the network dynamics. Here, we consider a two-state Markovian neuron model, where spikes correspond to transitions from the active state to the refractory state. Excitatory and inhibitory input to this neuron affects the transition rates between the two states. The corresponding nonlinear dependencies can be identified directly from numerical simulations of networks of leaky integrate-and-fire neurons, discretized at a time resolution in the sub-millisecond range. Deterministic mean-field equations, and a noise component that depends on the dynamic state of the network, are obtained from this model. The resulting stochastic model reflects the behavior observed in numerical simulations quite well, irrespective of the size of the network. In particular, the strong temporal correlation between the two populations, a hallmark of the balanced state in random recurrent networks, is well represented by our model. Numerical simulations of such networks also show that a log-normal distribution of short-term spike counts, a property of balanced random networks with fixed in-degree that has not been considered before, is shared by our model. Furthermore, reconstruction of the flow from simulated time series suggests that the mean-field dynamics of finite-size networks are essentially of Wilson-Cowan type. We expect that this novel nonlinear stochastic model of the interaction between
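The core idea of the abstract — a population of two-state (active/refractory) Markov neurons whose spiking hazard depends nonlinearly on network activity, so that the noise amplitude is state-dependent rather than fixed — can be sketched at the population level. All rates, sizes, and the specific activity dependence below are illustrative assumptions for the sketch, not the dependencies fitted from leaky integrate-and-fire simulations in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1000          # population size
dt = 0.1          # time step (ms)
tau_ref = 2.0     # mean refractory time (ms)
steps = 5000

def spike_prob(active_fraction):
    """Probability that an active neuron spikes in one step; the hazard
    depends on the current network state (illustrative rates)."""
    drive = 0.5 + 4.0 * active_fraction        # recurrent excitatory drive
    inhibition = 5.0 * active_fraction         # recurrent inhibition
    rate = max(drive - inhibition + 0.5, 0.0)  # net hazard (1/ms)
    return 1.0 - np.exp(-rate * dt)

active = N // 2   # neurons currently in the active (spike-ready) state
counts = []
for _ in range(steps):
    frac = active / N
    # spikes: active -> refractory; state-dependent rate makes the
    # intrinsic noise amplitude depend on the network state
    spikes = rng.binomial(active, spike_prob(frac))
    # recoveries: refractory -> active at a fixed rate 1/tau_ref
    recoveries = rng.binomial(N - active, 1.0 - np.exp(-dt / tau_ref))
    active += recoveries - spikes
    counts.append(spikes)

print("mean spike count per step:", np.mean(counts))
print("variance of spike counts:", np.var(counts))
```

Because both transition counts are binomial draws whose parameters depend on the current state, the fluctuation amplitude is self-consistently state-dependent, which is the qualitative point the abstract makes against additive fixed-amplitude noise.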